id | title | text | date_created | date_modified | templates | url
---|---|---|---|---|---|---|
10,796 | Feminist film theory | Feminist film theory is a theoretical film criticism derived from feminist politics and feminist theory, influenced by second-wave feminism and developed around the 1970s in the United States. As film has advanced over the years, feminist film theory has developed and changed, both to analyse contemporary filmmaking and to revisit films of the past. Feminists take many approaches to cinema analysis, differing in the film elements they analyse and in their theoretical underpinnings.
The development of feminist film theory was influenced by second-wave feminism and women's studies in the 1960s and 1970s. Initially, in the United States in the early 1970s, feminist film theory was generally based on sociological theory and focused on the function of female characters in film narratives or genres. Early works of feminist film theory, such as Marjorie Rosen's Popcorn Venus: Women, Movies, and the American Dream (1973) and Molly Haskell's From Reverence to Rape: The Treatment of Women in Movies (1974), analyze the ways in which women are portrayed in film and how this relates to a broader historical context. Feminist critiques also examine common stereotypes depicted in film, the extent to which women are shown as active or passive, and the amount of screen time given to women.
In contrast, film theoreticians in England concerned themselves with critical theory, psychoanalysis, semiotics, and Marxism. Eventually, these ideas took hold within the American scholarly community in the 1980s. Analysis generally focused on the meaning within a film's text and the way in which the text constructs a viewing subject. It also examined how the process of cinematic production affects how women are represented and reinforces sexism.
British feminist film theorist Laura Mulvey, best known for her essay "Visual Pleasure and Narrative Cinema" (written in 1973 and published in 1975 in the influential British film theory journal Screen), was influenced by the theories of Sigmund Freud and Jacques Lacan. "Visual Pleasure" is one of the first major essays that helped shift the orientation of film theory towards a psychoanalytic framework. Prior to Mulvey, film theorists such as Jean-Louis Baudry and Christian Metz had used psychoanalytic ideas in their theoretical accounts of cinema. Mulvey's contribution, however, initiated the intersection of film theory, psychoanalysis and feminism.
In 1976, the journal Camera Obscura began publication, founded by graduate students Janet Bergstrom, Sandy Flitterman, Elisabeth Lyon, and Constance Penley. They discussed how women were portrayed in films yet excluded from the production process. Camera Obscura is still published today by Duke University Press and has broadened its scope from film theory to media studies.
Other key influences come from Metz's essay The Imaginary Signifier ("Identification, Mirror"), in which he argues that viewing film is only possible through scopophilia (pleasure in looking, related to voyeurism), which is best exemplified in silent film. Also, according to Cynthia A. Freeland in "Feminist Frameworks for Horror Films", feminist studies of horror films have focused on psychodynamics, where the chief interest is "on viewers' motives and interests in watching horror films".
Beginning in the early 1980s, feminist film theory began to look at film through a more intersectional lens. The film journal Jump Cut published a special issue titled "Lesbians and Film" in 1981 which examined the lack of lesbian identities in film. Jane Gaines's essay "White Privilege and Looking Relations: Race and Gender in Feminist Film Theory" examined the erasure of black women in cinema by white male filmmakers, while Lola Young argues that filmmakers of all races fail to break away from tired stereotypes when depicting black women. Other theorists who wrote about feminist film theory and race include bell hooks and Michele Wallace.
From 1985 onward, the Matrixial theory of artist and psychoanalyst Bracha L. Ettinger revolutionized feminist film theory. Her concept of the matrixial gaze, developed in her book The Matrixial Gaze, established a feminine gaze, articulated its differences from the phallic gaze and its relation to feminine as well as maternal specificities and potentialities of "coemergence", and offered a critique of Sigmund Freud's and Jacques Lacan's psychoanalysis. It is extensively used in the analysis of films by female directors, like Chantal Akerman, as well as by male directors, like Pedro Almodovar. The matrixial gaze offers the female the position of a subject, not an object, of the gaze, while deconstructing the structure of the subject itself, and offers border-time, border-space and a possibility for compassion and witnessing. Ettinger's notions articulate the links between aesthetics, ethics and trauma.
Recently, scholars have expanded their work to include analysis of television and digital media. Additionally, they have begun to explore notions of difference, engaging in dialogue about the differences among women (part of a movement away from essentialism in feminist work more generally), the various methodologies and perspectives contained under the umbrella of feminist film theory, and the multiplicity of methods and intended effects that influence the development of films. Scholars are also taking increasingly global perspectives, responding to postcolonialist criticisms of perceived Anglo- and Eurocentrism in the academy more generally. Increased focus has been given to "disparate feminisms, nationalisms, and media in various locations and across class, racial, and ethnic groups throughout the world". Scholars in recent years have also turned their attention towards women in the silent film industry and their erasure from its history, as well as towards women's bodies and how they are portrayed in films. Jane Gaines's Women Film Pioneers Project (WFPP), a database of women who worked in the silent-era film industry, has been cited by scholars such as Rachel Schaff as a major achievement in recognizing pioneering women in silent and non-silent film.
In recent years, many have come to regard feminist film theory as a fading area of feminism, given the large amount of attention now devoted to media studies and media theory. As these areas have grown, the frameworks created in feminist film theory have been adapted to the analysis of other forms of media.
Considering the way that films are put together, many feminist film critics have pointed to what they argue is the "male gaze" that predominates in classical Hollywood filmmaking. Budd Boetticher summarizes the view:
Laura Mulvey expands on this conception to argue that in cinema, women are typically depicted in a passive role that provides visual pleasure through scopophilia, and identification with the on-screen male actor. She asserts: "In their traditional exhibitionist role women are simultaneously looked at and displayed, with their appearance coded for strong visual and erotic impact so that they can be said to connote to-be-looked-at-ness," and as a result contends that in film a woman is the "bearer of meaning, not maker of meaning." Mulvey argues that the psychoanalytic theory of Jacques Lacan is the key to understanding how film creates such a space for female sexual objectification and exploitation through the combination of the patriarchal order of society, and 'looking' in itself as a pleasurable act of scopophilia, as "the cinema satisfies a primordial wish for pleasurable looking."
While Laura Mulvey's paper has a particular place in feminist film theory, it is important to note that her ideas regarding ways of watching the cinema (from the voyeuristic element to the feelings of identification) have been significant for some feminist film theorists in defining spectatorship from the psychoanalytical viewpoint.
Mulvey identifies three "looks" or perspectives that occur in film which, she argues, serve to sexually objectify women. The first is the perspective of the male character and how he perceives the female character. The second is the perspective of the spectator as they see the female character on screen. The third "look" joins the first two looks together: it is the male audience member's perspective of the male character in the film. This third perspective allows the male audience to take the female character as his own personal sex object because he can relate himself, through looking, to the male character in the film.
In the paper, Mulvey calls for a destruction of modern film structure as the only way to free women from their sexual objectification in film. She argues for a removal of the voyeurism encoded into film by creating distance between the male spectator and the female character. The only way to do so, Mulvey argues, is by destroying the element of voyeurism and "the invisible guest". Mulvey also asserts that the dominance men embody exists only because women exist: without a woman for comparison, a man and his supremacy as the controller of visual pleasure are insignificant. For Mulvey, it is the presence of the female that defines the patriarchal order of society as well as the male psychology of thought.
Mulvey's argument is likely influenced by the time period in which she was writing. "Visual Pleasure and Narrative Cinema" was composed during the period of second-wave feminism, which was concerned with achieving equality for women in the workplace, and with exploring the psychological implications of sexual stereotypes. Mulvey calls for an eradication of female sexual objectification, aligning herself with second-wave feminism. She argues that in order for women to be equally represented in the workplace, women must be portrayed as men are: as lacking sexual objectification.
Mulvey proposes in her notes to the Criterion Collection DVD of Michael Powell's controversial film Peeping Tom (a film about a homicidal voyeur who films the deaths of his victims) that the cinema spectator's own voyeurism is made shockingly obvious and, even more shockingly, that the spectator identifies with the perverted protagonist. The inference is that she includes female spectators in this identification with the male observer rather than with the female object of the gaze.
The early work of Marjorie Rosen and Molly Haskell on the representation of women in film was part of a movement to depict women more realistically, both in documentaries and narrative cinema. The growing female presence in the film industry was seen as a positive step toward realizing this goal, by drawing attention to feminist issues and putting forth an alternative, true-to-life view of women. However, Rosen and Haskell argue that these images are still mediated by the same factors as traditional film, such as the "moving camera, composition, editing, lighting, and all varieties of sound." While acknowledging the value in inserting positive representations of women in film, some critics asserted that real change would only come about from reconsidering the role of film in society, often from a semiotic point of view.
Claire Johnston put forth the idea that women's cinema can function as "counter cinema". Through consciousness of the means of production and opposition to sexist ideologies, films made by women have the potential to posit an alternative to traditional Hollywood films. Initially the attempt to show "real" women was praised, but eventually critics such as Eileen McGarry claimed that the "real" women being shown on screen were still contrived depictions. In reaction to this criticism, many women filmmakers integrated "alternative forms and experimental techniques" to "encourage audiences to critique the seemingly transparent images on the screen and to question the manipulative techniques of filming and editing".
B. Ruby Rich argues that feminist film theory should shift to look at films in a broader sense. Rich's essay "In the Name of Feminist Film Criticism" claims that films by women often receive praise for certain elements while their feminist undertones are ignored. Rich goes on to say that, because of this, feminist theory needs to focus on how films by women are received.
Coming from a black feminist perspective, American scholar bell hooks put forth the notion of the "oppositional gaze", encouraging black women not to accept stereotypical representations in film, but rather to actively critique them. The "oppositional gaze" is a response to Mulvey's visual pleasure and states that just as women do not identify with female characters that are not "real", women of color should respond similarly to the one-dimensional caricatures of black women. Janet Bergstrom's article "Enunciation and Sexual Difference" (1979) uses Sigmund Freud's ideas of bisexual responses, arguing that women are capable of identifying with male characters and men with female characters, either successively or simultaneously. Miriam Hansen, in "Pleasure, Ambivalence, Identification: Valentino and Female Spectatorship" (1984), put forth the idea that women are also able to view male characters as erotic objects of desire. In "The Master's Dollhouse: Rear Window", Tania Modleski argues that Hitchcock's film Rear Window is an example of the power of the male gazer and the position of the female as a prisoner of the "master's dollhouse".
Carol Clover, in her popular and influential book Men, Women, and Chainsaws: Gender in the Modern Horror Film (Princeton University Press, 1992), argues that young male viewers of the horror genre (young males being the primary demographic) are quite prepared to identify with the female-in-jeopardy, a key component of the horror narrative, and to identify on an unexpectedly profound level. Clover further argues that the "final girl" in the psychosexual subgenre of exploitation horror invariably triumphs through her own resourcefulness, and is not by any means a passive or inevitable victim. Laura Mulvey, in response to these and other criticisms, revisited the topic in "Afterthoughts on 'Visual Pleasure and Narrative Cinema' inspired by Duel in the Sun" (1981). In addressing the heterosexual female spectator, she revised her stance to argue that women can take two possible roles in relation to film: a masochistic identification with the female object of desire that is ultimately self-defeating, or an identification with men as the active viewers of the text. A new version of the gaze was offered in the early 1990s by Bracha Ettinger, who proposed the notion of the "matrixial gaze".
| 2001-03-07T10:05:01Z | 2023-12-26T20:32:47Z | [
"Template:Columns-list",
"Template:Reflist",
"Template:Cite web",
"Template:Cite journal",
"Template:Cite book",
"Template:Short description",
"Template:Rp",
"Template:Citation",
"Template:Feminist theory",
"Template:Women in Media",
"Template:Filmstudies",
"Template:Feminism sidebar"
]
| https://en.wikipedia.org/wiki/Feminist_film_theory |
10,798 | Formalist film theory | Formalist film theory is an approach to film theory that is focused on the formal or technical elements of a film: i.e., the lighting, scoring, sound and set design, use of color, shot composition, and editing. This approach was proposed by Hugo Münsterberg, Rudolf Arnheim, Sergei Eisenstein, and Béla Balázs. Today, it is a major approach in film studies.
Formalism, at its most general, considers the synthesis (or lack of synthesis) of the multiple elements of film production, and the effects, emotional and intellectual, of that synthesis and of the individual elements. For example, take the single element of editing. A formalist might study how standard Hollywood "continuity editing" creates a more comforting effect, while non-continuity or jump-cut editing can be more disconcerting or volatile.
Or one might consider the synthesis of several elements, such as editing, shot composition, and music. The shoot-out that ends Sergio Leone's Spaghetti Western Dollars Trilogy is a notable example of how these elements work together to produce an effect: the shot selection goes from very wide to very close and tense; the length of shots decreases as the sequence progresses towards its end; the music builds. All of these elements, in combination rather than individually, create tension.
Formalism is unique in that it embraces both ideological and auteurist branches of criticism. In both these cases, the common denominator for formalist criticism is style. Ideologues focus on how socio-economic pressures create a particular style, and auteurists on how auteurs put their own stamp on the material. Formalism is primarily concerned with style and how it communicates ideas, emotions, and themes (rather than, as critics of formalism point out, concentrating on the themes of a work itself).
Two examples of ideological interpretations that are related to formalism are the classical Hollywood cinema and film noir.
The classical Hollywood cinema has a very distinct style, sometimes called the institutional mode of representation: continuity editing, massive coverage, three-point lighting, "mood" music, dissolves, all designed to make the experience as pleasant as possible. The socio-economic ideological explanation for this is that Hollywood wants to make as much money and appeal to as many ticket-buyers as possible.
Film noir, which was given its name by Nino Frank, is marked by lower production values, darker images, low-key lighting, location shooting, and general nihilism: this is because, we are told, during the war and post-war years filmmakers (as well as filmgoers) were generally more pessimistic. Also, the German Expressionists (including Fritz Lang, who was not technically an expressionist as popularly believed) immigrated to America and brought their stylized lighting effects (and disillusionment due to the war) to American soil.
It can be argued that, by this approach, the style or 'language' of these films is directly affected not by the individuals responsible, but by social, economic, and political pressures, of which the filmmakers themselves may or may not be aware. It is this branch of criticism that gives us such categories as the classical Hollywood cinema, the American independent movement, the new queer cinema, and the French, German, and Czech new waves.
If the ideological approach is concerned with broad movements and the effects of the world around the filmmaker, then auteur theory is diametrically opposed to it, celebrating the individual, usually the filmmaker, and how their personal decisions, thoughts, and style manifest themselves in the material.
This branch of criticism, begun by François Truffaut and the other young film critics writing for Cahiers du Cinéma, was created for two reasons.
First, it was created to redeem the art of film itself. By arguing that films had auteurs, or authors, Truffaut sought to make films (and their directors) at least as important as the more widely accepted art forms, such as literature, music, and painting. Each of these art forms, and the criticism thereof, is primarily concerned with a sole creative force: the author of a novel (not, for example, their editor or type-setter), the composer of a piece of music (though sometimes the performers are given credence, akin to actors in film today), or the painter of a fresco (not their assistants who mix the colours or often do some of the painting themselves). By elevating the director, and not the screenwriter, to the same importance as novelists, composers, or painters, it sought to free the cinema from its popular conception as a bastard art, somewhere between theater and literature.
Secondly, it sought to redeem many filmmakers who were looked down upon by mainstream film critics. It argued that genre filmmakers and low-budget B-movies were just as important as, if not more important than, the prestige pictures commonly given more press and legitimacy in France and the United States. According to Truffaut's theory, auteurs took material that was beneath their talents—a thriller, a pulpy action film, a romance—and, through their style, put their own personal stamp on it.
| 2001-03-07T10:14:40Z | 2023-12-04T10:19:44Z | [
"Template:More footnotes",
"Template:Lang",
"Template:Reflist",
"Template:Cite book",
"Template:Filmstudies",
"Template:Short description",
"Template:Essay-like"
]
| https://en.wikipedia.org/wiki/Formalist_film_theory |
10,799 | Film theory | Film theory is a set of scholarly approaches within the academic discipline of film or cinema studies that began in the 1920s by questioning the formal essential attributes of motion pictures, and that now provides conceptual frameworks for understanding film's relationship to reality, the other arts, individual viewers, and society at large. Film theory is not to be confused with general film criticism or film history, though these three disciplines interrelate.
Although some branches of film theory are derived from linguistics and literary theory, it also originated and overlaps with the philosophy of film.
French philosopher Henri Bergson's Matter and Memory (1896) anticipated the development of film theory during the birth of cinema in the early twentieth century. Bergson commented on the need for new ways of thinking about movement, and coined the terms "the movement-image" and "the time-image". However, in his 1906 essay L'illusion cinématographique (in L'évolution créatrice; English: The cinematic illusion) he rejects film as an example of what he had in mind. Nonetheless, decades later, in Cinéma I and Cinema II (1983–1985), the philosopher Gilles Deleuze took Matter and Memory as the basis of his philosophy of film and revisited Bergson's concepts, combining them with the semiotics of Charles Sanders Peirce. Early film theory arose in the silent era and was mostly concerned with defining the crucial elements of the medium. Ricciotto Canudo was an early Italian film theoretician who saw cinema as "plastic art in motion", and gave cinema the label "the Sixth Art", later changed to "the Seventh Art".
In 1915, Vachel Lindsay wrote a book on film, followed a year later by Hugo Münsterberg. Lindsay argued that films could be classified into three categories: action films, intimate films, and films of splendour. According to him, the action film was sculpture-in-motion, the intimate film painting-in-motion, and the splendour film architecture-in-motion. He also argued against the contemporary notion of calling films photoplays and treating them as filmed versions of theatre, instead seeing film as offering opportunities born of the camera. He also described cinema as hieroglyphic in the sense of containing symbols in its images. He believed this visuality gave film the potential for universal accessibility. Münsterberg in turn noted the analogies between cinematic techniques and certain mental processes. For example, he compared the close-up to the mind paying attention. The flashback, in turn, was similar to remembering. This was later followed by the formalism of Rudolf Arnheim, who studied how techniques influenced film as art.
Among early French theorists, Germaine Dulac brought the concept of impressionism to film by describing cinema that explored the malleability of the border between internal experience and external reality, for example through superimposition. Surrealism also had an influence on early French film culture. The term photogénie was important to both, having been brought into use by Louis Delluc in 1919 and becoming widely used to capture the unique power of cinema. Jean Epstein noted how filming gives a "personality" or a "spirit" to objects while also being able to reveal "the untrue, the unreal, the 'surreal'". This was similar to the defamiliarization used by avant-garde artists to recreate the world. He saw the close-up as the essence of photogénie. Béla Balázs also praised the close-up for similar reasons. Arnheim also believed defamiliarization to be a critical element of film.
After the Russian Revolution, a chaotic situation in the country also created a sense of excitement at new possibilities. This gave rise to montage theory in the work of Dziga Vertov and Sergei Eisenstein. After the establishment of the Moscow Film School, Lev Kuleshov set up a workshop to study the formal structure of film, focusing on editing as "the essence of cinematography". This produced findings on the Kuleshov effect. Editing was also associated with the foundational Marxist concept of dialectical materialism. To this end, Eisenstein claimed that "montage is conflict". Eisenstein's theories focused on montage's ability to create meaning transcending the sum of its parts, with a thematic effect, in the way that ideograms turn graphics into abstract symbols. Multiple scenes could work to produce themes (tonal montage), while multiple themes could create even higher levels of meaning (intellectual montage). Vertov in turn focused on developing Kino-Pravda, film truth, and the Kino-Eye, which he claimed showed a deeper truth than could be seen with the naked eye.
In the years after World War II, the French film critic and theorist André Bazin argued that film's essence lay in its ability to mechanically reproduce reality, not in its difference from reality. This had followed the rise of poetic realism in French cinema in the 1930s. He believed that the purpose of art is to preserve reality, even famously claiming that "The photographic image is the object itself". Based on this, he advocated for the use of long takes and deep focus, to reveal the structural depth of reality and to find meaning objectively in images. This was soon followed by the rise of Italian neorealism. Siegfried Kracauer was also notable for arguing that realism is the most important function of cinema.
Auteur theory derived from the approach of critic and filmmaker Alexandre Astruc, among others, and was originally developed in articles in Cahiers du Cinéma, a film journal that had been co-founded by Bazin. François Truffaut issued auteurism's manifestos in two Cahiers essays: "Une certaine tendance du cinéma français" (January 1954) and "Ali Baba et la 'Politique des auteurs'" (February 1955). His approach was brought to American criticism by Andrew Sarris in 1962. The auteur theory was based on the idea that films depict the directors' own worldviews and impressions of the subject matter through choices of lighting, camerawork, staging, editing, and so on. Georges Sadoul held that a film's putative "author" could even be an actor, but that a film is ultimately a collaborative work. Aljean Harmetz cited the major control exercised even by film executives. David Kipen's view of the screenwriter as the main author is termed Schreiber theory.
In the 1960s and 1970s, film theory took up residence in academia, importing concepts from established disciplines like psychoanalysis, gender studies, anthropology, literary theory, semiotics and linguistics—as advanced by scholars such as Christian Metz. However, not until the late 1980s or early 1990s did film theory per se achieve much prominence in American universities by displacing the prevailing humanistic, auteur theory that had dominated cinema studies and which had been focused on the practical elements of film writing, production, editing and criticism. American scholar David Bordwell has spoken against many prominent developments in film theory since the 1970s. He uses the derogatory term "SLAB theory" to refer to film studies based on the ideas of Ferdinand de Saussure, Jacques Lacan, Louis Althusser, and Roland Barthes. Instead, Bordwell promotes what he describes as "neoformalism" (a revival of formalist film theory).
During the 1990s, the digital revolution in image technologies influenced film theory in various ways. There has been a refocus onto celluloid film's ability to capture an "indexical" image of a moment in time by theorists like Mary Ann Doane, Philip Rosen and Laura Mulvey, who was informed by psychoanalysis. From a psychoanalytical perspective, after the Lacanian notion of "the Real", Slavoj Žižek offered new aspects of "the gaze" extensively used in contemporary film analysis. From the 1990s onward, the Matrixial theory of artist and psychoanalyst Bracha L. Ettinger revolutionized feminist film theory. Her concept of the matrixial gaze, developed in The Matrixial Gaze, established a feminine gaze, articulated its differences from the phallic gaze and its relation to feminine as well as maternal specificities and potentialities of "coemergence", and offered a critique of Sigmund Freud's and Jacques Lacan's psychoanalysis; it is extensively used in the analysis of films by female authors, like Chantal Akerman, as well as by male authors, like Pedro Almodovar. The matrixial gaze offers the female the position of a subject, not an object, of the gaze, while deconstructing the structure of the subject itself, and offers border-time, border-space and a possibility for compassion and witnessing. Ettinger's notions articulate the links between aesthetics, ethics and trauma. There has also been a historical revisiting of early cinema screenings, practices and spectatorship modes by writers Tom Gunning, Miriam Hansen and Yuri Tsivian.
In Critical Cinema: Beyond the Theory of Practice (2011), Clive Meyer suggests that 'cinema is a different experience to watching a film at home or in an art gallery', and argues for film theorists to re-engage the specificity of philosophical concepts for cinema as a medium distinct from others. | [
{
"paragraph_id": 0,
"text": "Film theory is a set of scholarly approaches within the academic discipline of film or cinema studies that began in the 2010s by questioning the formal essential attributes of motion pictures; and that now provides conceptual frameworks for understanding film's relationship to reality, the other arts, individual viewers, and society at large. Film theory is not to be confused with general film criticism, or film history, though these three disciplines interrelate.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Although some branches of film theory are derived from linguistics and literary theory, it also originated and overlaps with the philosophy of film.",
"title": ""
},
{
"paragraph_id": 2,
"text": "French philosopher Henri Bergson's Matter and Memory (1896) anticipated the development of film theory during the birth of cinema in the early twentieth century. Bergson commented on the need for new ways of thinking about movement, and coined the terms \"the movement-image\" and \"the time-image\". However, in his 1906 essay L'illusion cinématographique (in L'évolution créatrice; English: The cinematic illusion) he rejects film as an example of what he had in mind. Nonetheless, decades later, in Cinéma I and Cinema II (1983–1985), the philosopher Gilles Deleuze took Matter and Memory as the basis of his philosophy of film and revisited Bergson's concepts, combining them with the semiotics of Charles Sanders Peirce. Early film theory arose in the silent era and was mostly concerned with defining the crucial elements of the medium. Ricciotto Canudo was an early Italian film theoretician who saw cinema as \"plastic art in motion\", and gave cinema the label \"the Sixth Art\", later changed to \"the Seventh Art\".",
"title": "History"
},
{
"paragraph_id": 3,
"text": "In 1915, Vachel Lindsay wrote a book on film, followed a year later by Hugo Münsterberg. Lindsay argued that films could be classified into three categories: action films, intimate films, as well as films of splendour. According to him, the action film was sculpture-in-motion, while the intimate film was painting-in-motion, and splendour film architecture-in-motion. He also argued against the contemporary notion of calling films photoplays and seen as filmed versions of theatre, instead seeing film with camera-born opportunities. He also described cinema as hieroglyphic in the sense of containing symbols in its images. He believed this visuality gave film the potential for universal accessibility. Münsterberg in turn noted the analogies between cinematic techniques and certain mental processes. For example, he compared the close-up to the mind paying attention. The flashback, in turn, was similar to remembering. This was later followed by the formalism of Rudolf Arnheim, who studied how techniques influenced film as art.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Among early French theorists, Germaine Dulac brought the concept of impressionism to film by describing cinema that explored the malleability of the border between internal experience and external reality, for example through superimposition. Surrealism also had an influence on early French film culture. The term photogénie was important to both, having been brought to use by Louis Delluc in 1919 and becoming widespread in its usage to capture the unique power of cinema. Jean Epstein noted how filming gives a \"personality\" or a \"spirit\" to objects while also being able to reveal \"the untrue, the unreal, the 'surreal'\". This was similar to defamiliarization used by avant-garde artists to recreate the world. He saw the close-up as the essence of photogénie. Béla Balázs also praised the close-up for similar reasons. Arnheim also believed defamiliarization to be a critical element of film.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "After the Russian Revolution, a chaotic situation in the country also created a sense of excitement at new possibilities. This gave rise to montage theory in the work of Dziga Vertov and Sergei Eisenstein. After the establishment of the Moscow Film School, Lev Kuleshov set up a workshop to study the formal structure of film, focusing on editing as \"the essence of cinematography\". This produced findings on the Kuleshov effect. Editing was also associated with the foundational Marxist concept of dialectical materialism. To this end, Eisenstein claimed that \"montage is conflict\". Eisenstein's theories were focused on montage having the ability create meaning transcending the sum of its parts with a thematic effect in a way that ideograms turned graphics into abstract symbols. Multiple scenes could work to produce themes (tonal montage), while multiple themes could create even higher levels of meaning (intellectual montage). Vertov in turn focused on developing Kino-Pravda, film truth, and the Kino-Eye , which he claimed showed a deeper truth than could be seen with the naked eye.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In the years after World War II, the French film critic and theorist André Bazin argued that film's essence lay in its ability to mechanically reproduce reality, not in its difference from reality. This had followed the rise of poetic realism in French cinema in the 1930's. He believed that the purpose of art is to preserve reality, even famously claiming that \"The photographic image is the object itself\". Based on this, he advocated for the use of long takes and deep focus, to reveal the structural depth of reality and finding meaning objectively in images. This was soon followed by the rise of Italian neorealism. Siegfried Kracauer was also notable for arguing that realism is the most important function of cinema.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The Auteur theory derived from the approach of critic and filmmaker Alexandre Astruc, among others, and was originally developed in articles in Cahiers du Cinéma, a film journal that had been co-founded by Bazin. François Truffaut issued auteurism's manifestos in two Cahiers essays: \"Une certaine tendance du cinéma français\" (January 1954) and \"Ali Baba et la 'Politique des auteurs'\" (February 1955). His approach was brought to American criticism by Andrew Sarris in 1962. The auteur theory was based on films depicting the directors' own worldviews and impressions of the subject matter, by varying lighting, camerawork, staging, editing, and so on. Georges Sadoul deemed a film's putative \"author\" potentially even an actor, but a film indeed collaborative. Aljean Harmetz cited major control even by film executives. David Kipen's view of screenwriter as indeed main author is termed Schreiber theory.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In the 1960s and 1970s, film theory took up residence in academia importing concepts from established disciplines like psychoanalysis, gender studies, anthropology, literary theory, semiotics and linguistics—as advanced by scholars such as Christian Metz. However, not until the late 1980s or early 1990s did film theory per se achieve much prominence in American universities by displacing the prevailing humanistic, auteur theory that had dominated cinema studies and which had been focused on the practical elements of film writing, production, editing and criticism. American scholar David Bordwell has spoken against many prominent developments in film theory since the 1970s. He uses the derogatory term \"SLAB theory\" to refer to film studies based on the ideas of Ferdinand de Saussure, Jacques Lacan, Louis Althusser, and Roland Barthes. Instead, Bordwell promotes what he describes as \"neoformalism\" (a revival of formalist film theory).",
"title": "History"
},
{
"paragraph_id": 9,
"text": "During the 1990s the digital revolution in image technologies has influenced film theory in various ways. There has been a refocus onto celluloid film's ability to capture an \"indexical\" image of a moment in time by theorists like Mary Ann Doane, Philip Rosen and Laura Mulvey who was informed by psychoanalysis. From a psychoanalytical perspective, after the Lacanian notion of \"the Real\", Slavoj Žižek offered new aspects of \"the gaze\" extensively used in contemporary film analysis. From the 1990s onward the Matrixial theory of artist and psychoanalyst Bracha L. Ettinger revolutionized feminist film theory. Her concept The Matrixial Gaze, that has established a feminine gaze and has articulated its differences from the phallic gaze and its relation to feminine as well as maternal specificities and potentialities of \"coemergence\", offering a critique of Sigmund Freud's and Jacques Lacan's psychoanalysis, is extensively used in analysis of films by female authors, like Chantal Akerman, as well as by male authors, like Pedro Almodovar. The matrixial gaze offers the female the position of a subject, not of an object, of the gaze, while deconstructing the structure of the subject itself, and offers border-time, border-space and a possibility for compassion and witnessing. Ettinger's notions articulate the links between aesthetics, ethics and trauma. There has also been a historical revisiting of early cinema screenings, practices and spectatorship modes by writers Tom Gunning, Miriam Hansen and Yuri Tsivian.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In Critical Cinema: Beyond the Theory of Practice (2011), Clive Meyer suggests that 'cinema is a different experience to watching a film at home or in an art gallery', and argues for film theorists to re-engage the specificity of philosophical concepts for cinema as a medium distinct from others.",
"title": "History"
}
]
| Film theory is a set of scholarly approaches within the academic discipline of film or cinema studies that began in the 1920s by questioning the formal essential attributes of motion pictures; and that now provides conceptual frameworks for understanding film's relationship to reality, the other arts, individual viewers, and society at large. Film theory is not to be confused with general film criticism, or film history, though these three disciplines interrelate. Although some branches of film theory are derived from linguistics and literary theory, it also originated in and overlaps with the philosophy of film. | 2001-05-12T03:18:20Z | 2023-11-10T14:38:27Z | [
"Template:Reflist",
"Template:Cite web",
"Template:Cite journal",
"Template:Citation",
"Template:Filmstudies",
"Template:For",
"Template:Sfn",
"Template:Columns-list",
"Template:Cite book",
"Template:Authority control",
"Template:Short description",
"Template:Page needed",
"Template:--"
]
| https://en.wikipedia.org/wiki/Film_theory |
10,802 | Film noir | Film noir (/nwɑːr/; French: [film nwaʁ]) is a cinematic term used primarily to describe stylized Hollywood crime dramas, particularly those that emphasize cynical attitudes and motivations. The 1940s and 1950s are generally regarded as the "classic period" of American film noir. Film noir of this era is associated with a low-key, black-and-white visual style that has roots in German Expressionist cinematography. Many of the prototypical stories and much of the attitude of classic noir derive from the hardboiled school of crime fiction that emerged in the United States during the Great Depression.
The term film noir, French for 'black film' (literal) or 'dark film' (closer meaning), was first applied to Hollywood films by French critic Nino Frank in 1946, but was unrecognized by most American film industry professionals of that era. Frank is believed to have been inspired by the French literary publishing imprint Série noire, founded in 1945.
Cinema historians and critics defined the category retrospectively. Before the notion was widely adopted in the 1970s, many of the classic films noir were referred to as "melodramas". Whether film noir qualifies as a distinct genre or whether it is more of a filmmaking style is a matter of ongoing and heavy debate among scholars.
Film noir encompasses a range of plots: the central figure may be a private investigator (The Big Sleep), a plainclothes police officer (The Big Heat), an aging boxer (The Set-Up), a hapless grifter (Night and the City), a law-abiding citizen lured into a life of crime (Gun Crazy), a femme fatale (Gilda) or simply a victim of circumstance (D.O.A.). Although film noir was originally associated with American productions, the term has been used to describe films from around the world. Many films released from the 1960s onward share attributes with films noir of the classical period, and often treat its conventions self-referentially. Some refer to such latter-day works as neo-noir. The clichés of film noir have inspired parody since the mid-1940s.
The questions of what defines film noir, and what sort of category it is, provoke continuing debate. "We'd be oversimplifying things in calling film noir oneiric, strange, erotic, ambivalent, and cruel ..."—this set of attributes constitutes the first of many attempts to define film noir made by French critics Raymond Borde and Étienne Chaumeton in their 1955 book Panorama du film noir américain 1941–1953 (A Panorama of American Film Noir), the original and seminal extended treatment of the subject. They emphasize that not every noir film embodies all five attributes in equal measure—one might be more dreamlike; another, particularly brutal. The authors' caveats and repeated efforts at alternative definition have been echoed in subsequent scholarship, but in the words of cinema historian Mark Bould, film noir remains an "elusive phenomenon."
Though film noir is often identified with a visual style that emphasizes low-key lighting and unbalanced compositions, films commonly identified as noir evidence a variety of visual approaches, including ones that fit comfortably within the Hollywood mainstream. Film noir similarly embraces a variety of genres, from the gangster film to the police procedural to the gothic romance to the social problem picture—any example of which from the 1940s and 1950s, now seen as noir's classical era, was likely to be described as a melodrama at the time.
It is night, always. The hero enters a labyrinth on a quest. He is alone and off balance. He may be desperate, in flight, or coldly calculating, imagining he is the pursuer rather than the pursued.
A woman invariably joins him at a critical juncture, when he is most vulnerable. [Her] eventual betrayal of him (or herself) is as ambiguous as her feelings about him.
Nicholas Christopher, Somewhere in the Night (1997)
While many critics refer to film noir as a genre itself, others argue that it can be no such thing. Foster Hirsch defines a genre as determined by "conventions of narrative structure, characterization, theme, and visual design." Hirsch, as one who has taken the position that film noir is a genre, argues that these elements are present "in abundance." Hirsch notes that there are unifying features of tone, visual style and narrative sufficient to classify noir as a distinct genre.
Others argue that film noir is not a genre. It is often associated with an urban setting, but many classic noirs take place in small towns, suburbia, rural areas, or on the open road; setting is not a determinant, as with the Western. Similarly, while the private eye and the femme fatale are stock character types conventionally identified with noir, the majority of films noir feature neither. Nor does film noir rely on anything as evident as the monstrous or supernatural elements of the horror film, the speculative leaps of the science fiction film, or the song-and-dance routines of the musical.
An analogous case is that of the screwball comedy, widely accepted by film historians as constituting a "genre": screwball is defined not by a fundamental attribute, but by a general disposition and a group of elements, some—but rarely and perhaps never all—of which are found in each of the genre's films. Because of the diversity of noir (much greater than that of the screwball comedy), certain scholars in the field, such as film historian Thomas Schatz, treat it as not a genre but a "style". Alain Silver, the most widely published American critic specializing in film noir studies, refers to film noir as a "cycle" and a "phenomenon", even as he argues that it has—like certain genres—a consistent set of visual and thematic codes. Screenwriter Eric R. Williams labels both film noir and screwball comedy a "pathway" in his screenwriters taxonomy, explaining that a pathway has two parts: 1) the way the audience connects with the protagonist and 2) the trajectory the audience expects the story to follow. Other critics treat film noir as a "mood," a "series", or simply a chosen set of films they regard as belonging to the noir "canon." There is no consensus on the matter.
The aesthetics of film noir were influenced by German Expressionism, an artistic movement of the 1910s and 1920s that involved theater, music, photography, painting, sculpture and architecture, as well as cinema. The opportunities offered by the booming Hollywood film industry and then the threat of Nazism led to the emigration of many film artists working in Germany who had been involved in the Expressionist movement or studied with its practitioners. M (1931), shot only a few years before director Fritz Lang's departure from Germany, is among the first crime films of the sound era to join a characteristically noirish visual style with a noir-type plot, in which the protagonist is a criminal (as are his most successful pursuers). Directors such as Lang, Jacques Tourneur, Robert Siodmak and Michael Curtiz brought a dramatically shadowed lighting style and a psychologically expressive approach to visual composition (mise-en-scène) with them to Hollywood, where they made some of the most famous classic noirs.
By 1931, Curtiz had already been in Hollywood for half a decade, making as many as six films a year. Movies of his such as 20,000 Years in Sing Sing (1932) and Private Detective 62 (1933) are among the early Hollywood sound films arguably classifiable as noir—scholar Marc Vernet offers the latter as evidence that dating the initiation of film noir to 1940 or any other year is "arbitrary". Expressionism-orientated filmmakers had free stylistic rein in Universal horror pictures such as Dracula (1931), The Mummy (1932)—the former photographed and the latter directed by the Berlin-trained Karl Freund—and The Black Cat (1934), directed by Austrian émigré Edgar G. Ulmer. The Universal horror film that comes closest to noir, in story and sensibility, is The Invisible Man (1933), directed by Englishman James Whale and photographed by American Arthur Edeson. Edeson later photographed The Maltese Falcon (1941), widely regarded as the first major film noir of the classic era.
Josef von Sternberg was directing in Hollywood during the same period. Films of his such as Shanghai Express (1932) and The Devil Is a Woman (1935), with their hothouse eroticism and baroque visual style, anticipated central elements of classic noir. The commercial and critical success of Sternberg's silent Underworld (1927) was largely responsible for spurring a trend of Hollywood gangster films. Successful films in that genre such as Little Caesar (1931), The Public Enemy (1931) and Scarface (1932) demonstrated that there was an audience for crime dramas with morally reprehensible protagonists. An important, possibly influential, cinematic antecedent to classic noir was 1930s French poetic realism, with its romantic, fatalistic attitude and celebration of doomed heroes. The movement's sensibility is mirrored in the Warner Bros. drama I Am a Fugitive from a Chain Gang (1932), a forerunner of noir. Among films not considered noir, perhaps none had a greater effect on the development of the genre than Citizen Kane (1941), directed by Orson Welles. Its visual intricacy and complex, voiceover narrative structure are echoed in dozens of classic films noir.
Italian neorealism of the 1940s, with its emphasis on quasi-documentary authenticity, was an acknowledged influence on trends that emerged in American noir. The Lost Weekend (1945), directed by Billy Wilder, another Vienna-born, Berlin-trained American auteur, tells the story of an alcoholic in a manner evocative of neorealism. It also exemplifies the problem of classification: one of the first American films to be described as a film noir, it has largely disappeared from considerations of the field. Director Jules Dassin of The Naked City (1948) pointed to the neorealists as inspiring his use of location photography with non-professional extras. This semidocumentary approach characterized a substantial number of noirs in the late 1940s and early 1950s. Along with neorealism, the style had an American precedent cited by Dassin, in director Henry Hathaway's The House on 92nd Street (1945), which demonstrated the parallel influence of the cinematic newsreel.
The primary literary influence on film noir was the hardboiled school of American detective and crime fiction, led in its early years by such writers as Dashiell Hammett (whose first novel, Red Harvest, was published in 1929) and James M. Cain (whose The Postman Always Rings Twice appeared five years later), and popularized in pulp magazines such as Black Mask. The classic film noirs The Maltese Falcon (1941) and The Glass Key (1942) were based on novels by Hammett; Cain's novels provided the basis for Double Indemnity (1944), Mildred Pierce (1945), The Postman Always Rings Twice (1946), and Slightly Scarlet (1956; adapted from Love's Lovely Counterfeit). A decade before the classic era, a story by Hammett was the source for the gangster melodrama City Streets (1931), directed by Rouben Mamoulian and photographed by Lee Garmes, who worked regularly with Sternberg. Released the month before Lang's M, City Streets has a claim to being the first major film noir; both its style and story had many noir characteristics.
Raymond Chandler, who debuted as a novelist with The Big Sleep in 1939, soon became the most famous author of the hardboiled school. Not only were Chandler's novels turned into major noirs—Murder, My Sweet (1944; adapted from Farewell, My Lovely), The Big Sleep (1946), and Lady in the Lake (1947)—he was an important screenwriter in the genre as well, producing the scripts for Double Indemnity, The Blue Dahlia (1946), and Strangers on a Train (1951). Where Chandler, like Hammett, centered most of his novels and stories on the character of the private eye, Cain featured less heroic protagonists and focused more on psychological exposition than on crime solving; the Cain approach has come to be identified with a subset of the hardboiled genre dubbed "noir fiction". For much of the 1940s, one of the most prolific and successful authors of this often downbeat brand of suspense tale was Cornell Woolrich (sometimes under the pseudonym George Hopley or William Irish). No writer's published work provided the basis for more noir films of the classic period than Woolrich's: thirteen in all, including Black Angel (1946), Deadline at Dawn (1946), and Fear in the Night (1947).
Another crucial literary source for film noir was W. R. Burnett, whose first novel to be published was Little Caesar, in 1929. It was turned into a hit for Warner Bros. in 1931; the following year, Burnett was hired to write dialogue for Scarface, while The Beast of the City (1932) was adapted from one of his stories. At least one important reference work identifies the latter as a film noir despite its early date. Burnett's characteristic narrative approach fell somewhere between that of the quintessential hardboiled writers and their noir fiction compatriots—his protagonists were often heroic in their own way, which happened to be that of the gangster. During the classic era, his work, either as author or screenwriter, was the basis for seven films now widely regarded as noir, including three of the most famous: High Sierra (1941), This Gun for Hire (1942), and The Asphalt Jungle (1950).
The 1940s and 1950s are generally regarded as the classic period of American film noir. While City Streets and other pre-WWII crime melodramas such as Fury (1936) and You Only Live Once (1937), both directed by Fritz Lang, are categorized as full-fledged noir in Alain Silver and Elizabeth Ward's film noir encyclopedia, other critics tend to describe them as "proto-noir" or in similar terms.
The film now most commonly cited as the first "true" film noir is Stranger on the Third Floor (1940), directed by Latvian-born, Soviet-trained Boris Ingster. Hungarian émigré Peter Lorre—who had starred in Lang's M—was top-billed, although he did not play the primary lead. (He later played secondary roles in several other formative American noirs.) Although modestly budgeted, at the high end of the B movie scale, Stranger on the Third Floor still lost its studio, RKO, US$56,000 (equivalent to $1,169,748 in 2022), almost a third of its total cost. Variety magazine found Ingster's work: "...too studied and when original, lacks the flare to hold attention. It's a film too arty for average audiences, and too humdrum for others." Stranger on the Third Floor was not recognized as the beginning of a trend, let alone a new genre, for many decades.
Whoever went to the movies with any regularity during 1946 was caught in the midst of Hollywood's profound postwar affection for morbid drama. From January through December deep shadows, clutching hands, exploding revolvers, sadistic villains and heroines tormented with deeply rooted diseases of the mind flashed across the screen in a panting display of psychoneurosis, unsublimated sex and murder most foul.
Donald Marshman, Life (August 25, 1947)
Most film noirs of the classic period were similarly low- and modestly-budgeted features without major stars—B movies either literally or in spirit. In this production context, writers, directors, cinematographers, and other craftsmen were relatively free from typical big-picture constraints. There was more visual experimentation than in Hollywood filmmaking as a whole: the Expressionism now closely associated with noir and the semi-documentary style that later emerged represent two very different tendencies. Narrative structures sometimes involved convoluted flashbacks uncommon in non-noir commercial productions. In terms of content, enforcement of the Production Code ensured that no film character could literally get away with murder or be seen sharing a bed with anyone but a spouse; within those bounds, however, many films now identified as noir feature plot elements and dialogue that were very risqué for the time.
Thematically, films noir were most exceptional for the relative frequency with which they centered on portrayals of women of questionable virtue—a focus that had become rare in Hollywood films after the mid-1930s and the end of the pre-Code era. The signal film in this vein was Double Indemnity, directed by Billy Wilder; setting the mold was Barbara Stanwyck's femme fatale, Phyllis Dietrichson—an apparent nod to Marlene Dietrich, who had built her extraordinary career playing such characters for Sternberg. An A-level feature, the film's commercial success and seven Oscar nominations made it probably the most influential of the early noirs. A slew of now-renowned noir "bad girls" followed, such as those played by Rita Hayworth in Gilda (1946), Lana Turner in The Postman Always Rings Twice (1946), Ava Gardner in The Killers (1946), and Jane Greer in Out of the Past (1947). The iconic noir counterpart to the femme fatale, the private eye, came to the fore in films such as The Maltese Falcon (1941), with Humphrey Bogart as Sam Spade, and Murder, My Sweet (1944), with Dick Powell as Philip Marlowe.
The prevalence of the private eye as a lead character declined in film noir of the 1950s, a period during which several critics describe the form as becoming more focused on extreme psychologies and more exaggerated in general. A prime example is Kiss Me Deadly (1955); based on a novel by Mickey Spillane, the best-selling of all the hardboiled authors, here the protagonist is a private eye, Mike Hammer. As described by Paul Schrader, "Robert Aldrich's teasing direction carries noir to its sleaziest and most perversely erotic. Hammer overturns the underworld in search of the 'great whatsit' [which] turns out to be—joke of jokes—an exploding atomic bomb." Orson Welles's baroquely styled Touch of Evil (1958) is frequently cited as the last noir of the classic period. Some scholars believe film noir never really ended, but continued to transform even as the characteristic noir visual style began to seem dated and changing production conditions led Hollywood in different directions—in this view, post-1950s films in the noir tradition are seen as part of a continuity with classic noir. A majority of critics, however, regard comparable films made outside the classic era to be something other than genuine film noir. They regard true film noir as belonging to a temporally and geographically limited cycle or period, treating subsequent films that evoke the classics as fundamentally different due to general shifts in filmmaking style and latter-day awareness of noir as a historical source for allusion. These later films are often called neo-noir.
While the inceptive noir, Stranger on the Third Floor, was a B picture directed by a virtual unknown, many of the films noir still remembered were A-list productions by well-known film makers. Debuting as a director with The Maltese Falcon (1941), John Huston followed with Key Largo (1948) and The Asphalt Jungle (1950). Opinion is divided on the noir status of several Alfred Hitchcock thrillers from the era; at least four qualify by consensus: Shadow of a Doubt (1943), Notorious (1946), Strangers on a Train (1951) and The Wrong Man (1956). Otto Preminger's success with Laura (1944) made his name and helped demonstrate noir's adaptability to a high-gloss 20th Century-Fox presentation. Among Hollywood's most celebrated directors of the era, arguably none worked more often in a noir mode than Preminger; his other noirs include Fallen Angel (1945), Whirlpool (1949), Where the Sidewalk Ends (1950) (all for Fox) and Angel Face (1952). A half-decade after Double Indemnity and The Lost Weekend, Billy Wilder made Sunset Boulevard (1950) and Ace in the Hole (1951), noirs that were not so much crime dramas as satires on Hollywood and the news media respectively. In a Lonely Place (1950) was Nicholas Ray's breakthrough; his other noirs include his debut, They Live by Night (1948) and On Dangerous Ground (1952), noted for their unusually sympathetic treatment of characters alienated from the social mainstream.
Orson Welles had notorious problems with financing but his three film noirs were well-budgeted: The Lady from Shanghai (1947) received top-level, "prestige" backing, while The Stranger (1946), his most conventional film, and Touch of Evil (1958), an unmistakably personal work, were funded at levels lower but still commensurate with headlining releases. Like The Stranger, Fritz Lang's The Woman in the Window (1944) was a production of the independent International Pictures. Lang's follow-up, Scarlet Street (1945), was one of the few classic noirs to be officially censored: filled with erotic innuendo, it was temporarily banned in Milwaukee, Atlanta and New York State. Scarlet Street was a semi-independent, cosponsored by Universal and Lang's Diana Productions, of which the film's co-star, Joan Bennett, was the second biggest shareholder. Lang, Bennett and her husband, the Universal veteran and Diana production head Walter Wanger, made Secret Beyond the Door (1948) in similar fashion.
Before leaving the United States while subject to the Hollywood blacklist, Jules Dassin made two classic noirs that also straddled the major/independent line: Brute Force (1947) and the influential documentary-style The Naked City (1948) were developed by producer Mark Hellinger, who had an "inside/outside" contract with Universal similar to Wanger's. Years earlier, working at Warner Bros., Hellinger had produced three films for Raoul Walsh, the proto-noirs They Drive by Night (1940), Manpower (1941) and High Sierra (1941), now regarded as a seminal work in noir's development. Walsh had no great name during his half-century as a director but his noirs White Heat (1949) and The Enforcer (1951) had A-list stars and are seen as important examples of the cycle. Other directors associated with top-of-the-bill Hollywood films noir include Edward Dmytryk (Murder, My Sweet (1944), Crossfire (1947))—the first important noir director to fall prey to the industry blacklist—as well as Henry Hathaway (The Dark Corner (1946), Kiss of Death (1947)) and John Farrow (The Big Clock (1948), Night Has a Thousand Eyes (1948)).
Most of the Hollywood films considered to be classic noirs fall into the category of the B movie. Some were Bs in the most precise sense, produced to run on the bottom of double bills by a low-budget unit of one of the major studios or by one of the smaller Poverty Row outfits, from the relatively well-off Monogram to shakier ventures such as Producers Releasing Corporation (PRC). Jacques Tourneur had made over thirty Hollywood Bs (a few now highly regarded, most forgotten) before directing the A-level Out of the Past, described by scholar Robert Ottoson as "the ne plus ultra of forties film noir". Movies with budgets a step up the ladder, known as "intermediates" by the industry, might be treated as A or B pictures depending on the circumstances. Monogram created Allied Artists in the late 1940s to focus on this sort of production. Robert Wise (Born to Kill [1947], The Set-Up [1949]) and Anthony Mann (T-Men [1947] and Raw Deal [1948]) each made a series of impressive intermediates, many of them noirs, before graduating to steady work on big-budget productions. Mann did some of his most celebrated work with cinematographer John Alton, a specialist in what James Naremore called "hypnotic moments of light-in-darkness". He Walked by Night (1948), shot by Alton though credited solely to Alfred Werker, directed in large part by Mann, demonstrates their technical mastery and exemplifies the late 1940s trend of "police procedural" crime dramas. It was released, like other Mann-Alton noirs, by the small Eagle-Lion company; it was the inspiration for the Dragnet series, which debuted on radio in 1949 and television in 1951.
Several directors associated with noir built well-respected oeuvres largely at the B-movie/intermediate level. Samuel Fuller's brutal, visually energetic films such as Pickup on South Street (1953) and Underworld U.S.A. (1961) earned him a unique reputation; his advocates praise him as "primitive" and "barbarous". Joseph H. Lewis directed noirs as diverse as Gun Crazy (1950) and The Big Combo (1955). The former—whose screenplay was written by the blacklisted Dalton Trumbo, disguised by a front—features a bank hold-up sequence shown in an unbroken take of over three minutes that was influential. The Big Combo was shot by John Alton and took the shadowy noir style to its outer limits. The most distinctive films of Phil Karlson (The Phenix City Story [1955] and The Brothers Rico [1957]) tell stories of vice organized on a monstrous scale. The work of other directors in this tier of the industry, such as Felix E. Feist (The Devil Thumbs a Ride [1947], Tomorrow Is Another Day [1951]), has become obscure. Edgar G. Ulmer spent most of his Hollywood career working at B studios and once in a while on projects that achieved intermediate status; for the most part, on unmistakable Bs. In 1945, while at PRC, he directed a noir cult classic, Detour. Ulmer's other noirs include Strange Illusion (1945), also for PRC; Ruthless (1948), for Eagle-Lion, which had acquired PRC the previous year and Murder Is My Beat (1955), for Allied Artists.
A number of low- and modestly-budgeted noirs were made by independent, often actor-owned, companies contracting with larger studios for distribution. Serving as producer, writer, director and top-billed performer, Hugo Haas made films like Pickup (1951) and The Other Woman (1954); Jacques Tourneur directed The Fearmakers (1958) under a similar independent arrangement. It was in this way that accomplished noir actress Ida Lupino established herself as the sole female director in Hollywood during the late 1940s and much of the 1950s. She does not appear in the best-known film she directed, The Hitch-Hiker (1953), developed by her company, The Filmakers, with support and distribution by RKO. It is one of the seven classic film noirs produced largely outside of the major studios that have been chosen for the United States National Film Registry. Of the others, one was a small-studio release: Detour. Four were independent productions distributed by United Artists, the "studio without a studio": Gun Crazy; Kiss Me Deadly; D.O.A. (1950), directed by Rudolph Maté and Sweet Smell of Success (1957), directed by Alexander Mackendrick. One was an independent distributed by MGM, the industry leader: Force of Evil (1948), directed by Abraham Polonsky and starring John Garfield, both of whom were blacklisted in the 1950s. Independent production usually meant restricted circumstances but Sweet Smell of Success, despite the plans of the production team, was clearly not made on the cheap, though like many other cherished A-budget noirs, it might be said to have a B-movie soul.
Perhaps no director better displayed that spirit than the German-born Robert Siodmak, who had already made a score of films before his 1940 arrival in Hollywood. Working mostly on A features, he made eight films now regarded as classic-era noir (a figure matched only by Lang and Mann). In addition to The Killers, Burt Lancaster's debut and a Hellinger/Universal co-production, Siodmak's other important contributions to the genre include 1944's Phantom Lady (a top-of-the-line B and Woolrich adaptation), the ironically titled Christmas Holiday (1944), and Cry of the City (1948). Criss Cross (1949), with Lancaster again the lead, exemplifies how Siodmak brought the virtues of the B-movie to the A noir. In addition to the relatively looser constraints on character and message at lower budgets, the nature of B production lent itself to the noir style for economic reasons: dim lighting saved on electricity and helped cloak cheap sets (mist and smoke also served the cause). Night shooting was often compelled by hurried production schedules. Plots with obscure motivations and intriguingly elliptical transitions were sometimes the consequence of hastily written scripts. There was not always enough time or money to shoot every scene. In Criss Cross, Siodmak achieved these effects, wrapping them around Yvonne De Carlo, who played the most understandable of femme fatales; Dan Duryea, in one of his many charismatic villain roles; and Lancaster as an ordinary laborer turned armed robber, doomed by a romantic obsession.
Some critics regard classic film noir as a cycle exclusive to the United States; Alain Silver and Elizabeth Ward, for example, argue, "With the Western, film noir shares the distinction of being an indigenous American form ... a wholly American film style." However, although the term "film noir" was originally coined to describe Hollywood movies, it was an international phenomenon. Even before the beginning of the generally accepted classic period, there were films made far from Hollywood that can be seen in retrospect as films noir, for example, the French productions Pépé le Moko (1937), directed by Julien Duvivier, and Le Jour se lève (1939), directed by Marcel Carné. In addition, Mexico experienced a vibrant film noir period from roughly 1946 to 1952, which was around the same time film noir was blossoming in the United States.
During the classic period, there were many films produced in Europe, particularly in France, that share elements of style, theme, and sensibility with American films noir and may themselves be included in the genre's canon. In certain cases, the interrelationship with Hollywood noir is obvious: American-born director Jules Dassin moved to France in the early 1950s as a result of the Hollywood blacklist, and made one of the most famous French film noirs, Rififi (1955). Other well-known French films often classified as noir include Quai des Orfèvres (1947) and Les Diaboliques (1955), both directed by Henri-Georges Clouzot; Casque d'Or (1952), Touchez pas au grisbi (1954), and Le Trou (1960), directed by Jacques Becker; and Ascenseur pour l'échafaud (1958), directed by Louis Malle. French director Jean-Pierre Melville is widely recognized for his tragic, minimalist films noir—Bob le flambeur (1955), from the classic period, was followed by Le Doulos (1962), Le deuxième souffle (1966), Le Samouraï (1967), and Le Cercle rouge (1970). In the 1960s, the Greek films noir "The Secret of the Red Mantle" and "The Fear" allowed audiences an anti-ableist reading that challenged stereotypes of disability.
Scholar Andrew Spicer argues that British film noir evidences a greater debt to French poetic realism than to the expressionistic American mode of noir. Examples of British noir (sometimes described as "Brit noir") from the classic period include Brighton Rock (1947), directed by John Boulting; They Made Me a Fugitive (1947), directed by Alberto Cavalcanti; The Small Back Room (1948), directed by Michael Powell and Emeric Pressburger; The October Man (1947), directed by Roy Ward Baker; and Cast a Dark Shadow (1955), directed by Lewis Gilbert. Terence Fisher directed several low-budget thrillers in a noir mode for Hammer Film Productions, including The Last Page (a.k.a. Man Bait; 1952), Stolen Face (1952), and Murder by Proxy (a.k.a. Blackout; 1954). Before leaving for France, Jules Dassin had been obliged by political pressure to shoot his last English-language film of the classic noir period in Great Britain: Night and the City (1950). Though it was conceived in the United States and was not only directed by an American but also stars two American actors—Richard Widmark and Gene Tierney—it is technically a UK production, financed by 20th Century-Fox's British subsidiary. The most famous of classic British noirs is director Carol Reed's The Third Man (1949), from a screenplay by Graham Greene. Set in Vienna immediately after World War II, it also stars two American actors, Joseph Cotten and Orson Welles, who had appeared together in Citizen Kane.
Elsewhere, Italian director Luchino Visconti adapted Cain's The Postman Always Rings Twice as Ossessione (1943), regarded both as one of the great noirs and a seminal film in the development of neorealism. (This was not even the first screen version of Cain's novel, having been preceded by the French Le Dernier Tournant in 1939.) In Japan, the celebrated Akira Kurosawa directed several films recognizable as films noir, including Drunken Angel (1948), Stray Dog (1949), The Bad Sleep Well (1960), and High and Low (1963). Spanish author Mercedes Formica's novel La ciudad perdida (The Lost City) was adapted into film in 1960.
Among the first major neo-noir films—the term often applied to films that consciously refer back to the classic noir tradition—was the French Tirez sur le pianiste (1960), directed by François Truffaut from a novel by one of the gloomiest of American noir fiction writers, David Goodis. Noir crime films and melodramas have been produced in many countries in the post-classic era. Some of these are quintessentially self-aware neo-noirs—for example, Il Conformista (1969; Italy), Der Amerikanische Freund (1977; Germany), The Element of Crime (1984; Denmark), and El Aura (2005; Argentina). Others simply share narrative elements and a version of the hardboiled sensibility associated with classic noir, such as Castle of Sand (1974; Japan), Insomnia (1997; Norway), Croupier (1998; UK), and Blind Shaft (2003; China).
The neo-noir film genre developed mid-way into the Cold War. This cinematological trend reflected much of the era's cynicism and its fear of nuclear annihilation. The new genre introduced innovations that were not available to earlier noir films, and its violence was more potent.
While it is hard to draw a line between some of the noir films of the early 1960s such as Blast of Silence (1961) and Cape Fear (1962) and the noirs of the late 1950s, new trends emerged in the post-classic era. The Manchurian Candidate (1962), directed by John Frankenheimer, Shock Corridor (1963), directed by Samuel Fuller, and Brainstorm (1965), directed by experienced noir character actor William Conrad, all treat the theme of mental dispossession within stylistic and tonal frameworks derived from classic film noir. The Manchurian Candidate examined the situation of American prisoners of war (POWs) during the Korean War. Incidents that occurred during the war as well as those post-war functioned as an inspiration for a "Cold War Noir" subgenre. The television series The Fugitive (1963–67) brought classic noir themes and mood to the small screen for an extended run.
In a different vein, films began to appear that self-consciously acknowledged the conventions of classic film noir as historical archetypes to be revived, rejected, or reimagined. These efforts typify what came to be known as neo-noir. Though several late classic noirs, Kiss Me Deadly (1955) in particular, were deeply self-knowing and post-traditional in conception, none tipped its hand so evidently as to be remarked on by American critics at the time. The first major film to overtly work this angle was French director Jean-Luc Godard's À bout de souffle (Breathless; 1960), which pays its literal respects to Bogart and his crime films while brandishing a bold new style for a new day. In the United States, Arthur Penn (1965's Mickey One, drawing inspiration from Truffaut's Tirez sur le pianiste and other French New Wave films), John Boorman (1967's Point Blank, similarly caught up, though in the Nouvelle vague's deeper waters), and Alan J. Pakula (1971's Klute) directed films that knowingly related themselves to the original films noir, inviting audiences in on the game.
A manifest affiliation with noir traditions—which, by its nature, allows different sorts of commentary on them to be inferred—can also provide the basis for explicit critiques of those traditions. In 1973, director Robert Altman flipped off noir piety with The Long Goodbye. Based on the novel by Raymond Chandler, it features one of Bogart's most famous characters, but in iconoclastic fashion: Philip Marlowe, the prototypical hardboiled detective, is replayed as a hapless misfit, almost laughably out of touch with contemporary mores and morality. Where Altman's subversion of the film noir mythos was so irreverent as to outrage some contemporary critics, around the same time Woody Allen was paying affectionate, at points idolatrous homage to the classic mode with Play It Again, Sam (1972). The "blaxploitation" film Shaft (1971), wherein Richard Roundtree plays the titular African-American private eye, John Shaft, takes conventions from classic noir.
The most acclaimed of the neo-noirs of the era was director Roman Polanski's 1974 Chinatown. Written by Robert Towne, it is set in 1930s Los Angeles, an accustomed noir locale nudged back some few years in a way that makes the pivotal loss of innocence in the story even crueler. Where Polanski and Towne raised noir to a black apogee by turning rearward, director Martin Scorsese and screenwriter Paul Schrader brought the noir attitude crashing into the present day with Taxi Driver (1976), a crackling, bloody-minded gloss on bicentennial America. In 1978, Walter Hill wrote and directed The Driver, a chase film as might have been imagined by Jean-Pierre Melville in an especially abstract mood.
Hill was already a central figure in 1970s noir of a more straightforward manner, having written the script for director Sam Peckinpah's The Getaway (1972), adapting a novel by pulp master Jim Thompson, as well as for two tough private eye films: an original screenplay for Hickey & Boggs (1972) and an adaptation of a novel by Ross Macdonald, the leading literary descendant of Hammett and Chandler, for The Drowning Pool (1975). Some of the strongest 1970s noirs, in fact, were unwinking remakes of the classics, "neo" mostly by default: the heartbreaking Thieves Like Us (1974), directed by Altman from the same source as Ray's They Live by Night, and Farewell, My Lovely (1975), the Chandler tale made classically as Murder, My Sweet, remade here with Robert Mitchum in his last notable noir role. Detective series, prevalent on American television during the period, updated the hardboiled tradition in different ways, but the show conjuring the most noir tone was a horror crossover touched with shaggy, Long Goodbye-style humor: Kolchak: The Night Stalker (1974–75), featuring a Chicago newspaper reporter investigating strange, usually supernatural occurrences.
The turn of the decade brought Scorsese's black-and-white Raging Bull (1980, cowritten by Schrader). An acknowledged masterpiece—in 2007 the American Film Institute ranked it as the greatest American film of the 1980s and the fourth greatest of all time—it tells the story of a boxer's moral self-destruction that recalls in both theme and visual ambiance noir dramas such as Body and Soul (1947) and Champion (1949). From 1981, Body Heat, written and directed by Lawrence Kasdan, invokes a different set of classic noir elements, this time in a humid, erotically charged Florida setting. Its success confirmed the commercial viability of neo-noir at a time when the major Hollywood studios were becoming increasingly risk averse. The mainstreaming of neo-noir is evident in such films as Black Widow (1987), Shattered (1991), and Final Analysis (1992). Few neo-noirs have made more money or more wittily updated the tradition of the noir double entendre than Basic Instinct (1992), directed by Paul Verhoeven and written by Joe Eszterhas. The film also demonstrates how neo-noir's polychrome palette can reproduce many of the expressionistic effects of classic black-and-white noir.
Like Chinatown, its more complex predecessor, Curtis Hanson's Oscar-winning L.A. Confidential (1997), based on the James Ellroy novel, demonstrates the opposite tendency—the deliberately retro film noir; its tale of corrupt cops and femmes fatales is seemingly lifted straight from a film of 1953, the year in which it is set. Director David Fincher followed the immensely successful neo-noir Seven (1995) with a film that developed into a cult favorite after its original, disappointing release: Fight Club (1999), a sui generis mix of noir aesthetic, perverse comedy, speculative content, and satiric intent.
Working generally with much smaller budgets, brothers Joel and Ethan Coen have created one of the most extensive oeuvres influenced by classic noir, with films such as Blood Simple (1984) and Fargo (1996), the latter considered by some a supreme work in the neo-noir mode. The Coens cross noir with other generic traditions in the gangster drama Miller's Crossing (1990)—loosely based on the Dashiell Hammett novels Red Harvest and The Glass Key—and the comedy The Big Lebowski (1998), a tribute to Chandler and an homage to Altman's version of The Long Goodbye. The characteristic work of David Lynch combines film noir tropes with scenarios driven by disturbed characters such as the sociopathic criminal played by Dennis Hopper in Blue Velvet (1986) and the delusionary protagonist of Lost Highway (1997). The Twin Peaks cycle, both the TV series (1990–91) and a film, Fire Walk with Me (1992), puts a detective plot through a succession of bizarre spasms. David Cronenberg also mixes surrealism and noir in Naked Lunch (1991), inspired by William S. Burroughs' novel.
Perhaps no American neo-noirs better reflect the classic noir B movie spirit than those of director-writer Quentin Tarantino. Neo-noirs of his such as Reservoir Dogs (1992) and Pulp Fiction (1994) display a relentlessly self-reflexive, sometimes tongue-in-cheek sensibility, similar to the work of the New Wave directors and the Coens. Other films from the era readily identifiable as neo-noir (some retro, some more au courant) include director John Dahl's Kill Me Again (1989), Red Rock West (1992), and The Last Seduction (1993); four adaptations of novels by Jim Thompson—The Kill-Off (1989), After Dark, My Sweet (1990), The Grifters (1990), and the remake of The Getaway (1994); and many more, including adaptations of the work of other major noir fiction writers: The Hot Spot (1990), from Hell Hath No Fury, by Charles Williams; Miami Blues (1990), from the novel by Charles Willeford; and Out of Sight (1998), from the novel by Elmore Leonard. Several films by director-writer David Mamet involve noir elements: House of Games (1987), Homicide (1991), The Spanish Prisoner (1997), and Heist (2001). On television, Moonlighting (1985–89) paid homage to classic noir while demonstrating an unusual appreciation of the sense of humor often found in the original cycle. Between 1983 and 1989, Mickey Spillane's hardboiled private eye Mike Hammer was played with wry gusto by Stacy Keach in a series and several stand-alone television films (an unsuccessful revival followed in 1997–98). The British miniseries The Singing Detective (1986), written by Dennis Potter, tells the story of a mystery writer named Philip Marlow; widely considered one of the finest neo-noirs in any medium, some critics rank it among the greatest television productions of all time.
Among big-budget auteurs, Michael Mann has worked frequently in a neo-noir mode, with such films as Thief (1981) and Heat (1995) and the TV series Miami Vice (1984–89) and Crime Story (1986–88). Mann's output exemplifies a primary strain of neo-noir, or as it is affectionately called, "neon noir", in which classic themes and tropes are revisited in a contemporary setting with an up-to-date visual style and rock- or hip hop-based musical soundtrack.
Neo-noir film borrows from and reflects many of the characteristics of the film noir: the presence of crime and violence, complex characters and plot-lines, mystery, and moral ambivalence, all of which come into play in the neon-noir sub-genre. But more than just exhibiting the superficial traits of the genre, neon-noir emphasizes the socio-critique of film noir, recalling the specific socio-cultural dimensions of the interwar years when noirs first became prominent: a time of global existential crisis, depression and the mass movement of the rural population to cities. Long shots or montages of cityscapes, often portrayed as dark and menacing, are suggestive of what Dueck referred to as a ‘bleak societal perspective’, providing a critique of global capitalism and consumerism. Other characteristics include the use of highly stylized lighting techniques such as chiaroscuro, and neon signs and brightly lit buildings that provide a sense of alienation and entrapment.
Building on the use of artificial and neon lighting in the films noir of the '40s and '50s, neon-noir films accentuate this aesthetic with electrifying color and manipulated light in order to highlight their socio-cultural critiques and their references to contemporary and pop culture. In doing so, neon-noir films present the themes of urban decay, consumerist decadence and capitalism, existentialism, sexuality, and issues of race and violence in contemporary culture, not only in America but in the globalized world at large.
Neon-noirs seek to bring the contemporary noir, somewhat diluted under the umbrella of neo-noir, back to the exploration of culture: class, race, gender, patriarchy, and capitalism. Neon-noirs present an existential exploration of society in a hyper-technological and globalized world. Illustrating society as decadent and consumerist, and identity as confused and anxious, neon-noirs reposition the contemporary noir in the setting of urban decay, often featuring scenes set in underground city haunts: brothels, nightclubs, casinos, strip bars, pawnshops, laundromats.
Neon-noirs were popularized in the '70s and '80s by films such as Taxi Driver (1976), Blade Runner (1982), and films from David Lynch, such as Blue Velvet (1986) and later, Lost Highway (1997). Other titles from this era included Brian De Palma's Blow Out (1981) and the Coen Brothers' debut Blood Simple (1984). More recently, films such as Harmony Korine’s highly provocative Spring Breakers (2012) and Danny Boyle’s Trance (2013) have been especially noted for their neon-infused rendering of film noir; while Trance was celebrated for ‘shak(ing) the ingredients (of the noir) like colored sand in a jar’, Spring Breakers notoriously produced a slew of criticism referring to its ‘fever-dream’ aesthetic and ‘neon-caked explosion of excess’ (Kohn). Another neon-noir endowed with the 'fever-dream' aesthetic is The Persian Connection, expressly linked to Lynchian aesthetics as a neon-drenched contemporary noir.
Neon-noir can be seen as a response to the over-use of the term neo-noir. While the term neo-noir functions to bring noir into the contemporary landscape, it has often been criticized for its dilution of the noir genre. Author Robert Arnett commented on its "amorphous" reach: "any film featuring a detective or crime qualifies". The neon-noir, more specifically, seeks to revive noir sensibilities in a more targeted manner of reference, focalizing socio-cultural commentary and a hyper-stylized aesthetic.
The Coen brothers make reference to the noir tradition again with The Man Who Wasn't There (2001), a black-and-white crime melodrama set in 1949 that features a scene apparently staged to mirror one from Out of the Past. Lynch's Mulholland Drive (2001) continued in his characteristic vein, making the classic noir setting of Los Angeles the venue for a noir-inflected psychological jigsaw puzzle. British-born director Christopher Nolan's black-and-white debut, Following (1998), was an overt homage to classic noir. During the new century's first decade, he was one of the leading Hollywood directors of neo-noir with the acclaimed Memento (2000) and the remake of Insomnia (2002).
Director Sean Penn's The Pledge (2001), though adapted from a very self-reflexive novel by Friedrich Dürrenmatt, plays noir comparatively straight, to devastating effect. Screenwriter David Ayer updated the classic noir bad-cop tale, typified by Shield for Murder (1954) and Rogue Cop (1954), with his scripts for Training Day (2001) and, adapting a story by James Ellroy, Dark Blue (2002); he later wrote and directed the even darker Harsh Times (2006). Michael Mann's Collateral (2004) features a performance by Tom Cruise as an assassin in the lineage of Le Samouraï. The torments of The Machinist (2004), directed by Brad Anderson, evoke both Fight Club and Memento. In 2005, Shane Black directed Kiss Kiss Bang Bang, basing his screenplay in part on a crime novel by Brett Halliday, who published his first stories back in the 1920s. The film plays with an awareness not only of classic noir but also of neo-noir reflexivity itself.
With ultra-violent films such as Sympathy for Mr. Vengeance (2002) and Thirst (2009), Park Chan-wook of South Korea has been the most prominent director outside of the United States to work regularly in a noir mode in the new millennium. The most commercially successful neo-noir of this period has been Sin City (2005), directed by Robert Rodriguez in extravagantly stylized black and white with splashes of color. The film is based on a series of comic books created by Frank Miller (credited as the film's codirector), which are in turn openly indebted to the works of Spillane and other pulp mystery authors. Similarly, graphic novels provide the basis for Road to Perdition (2002), directed by Sam Mendes, and A History of Violence (2005), directed by David Cronenberg; the latter was voted best film of the year in the annual Village Voice poll. Writer-director Rian Johnson's Brick (2005), featuring present-day high schoolers speaking a version of 1930s hardboiled argot, won the Special Jury Prize for Originality of Vision at the Sundance Film Festival. The television series Veronica Mars (2004–07) and the movie Veronica Mars (2014) also brought a youth-oriented twist to film noir. Examples of this sort of generic crossover have been dubbed "teen noir".
Neo-noir films released in the 2010s include Kim Jee-woon’s I Saw the Devil (2010), Fred Cavaye’s Point Blank (2010), Na Hong-jin’s The Yellow Sea (2010), Nicolas Winding Refn’s Drive (2011), Claire Denis' Bastards (2013) and Dan Gilroy's Nightcrawler (2014).
The Science Channel broadcast the 2021 science documentary series Killers of the Cosmos in a format it describes as "space noir." In the series, actor Aidan Gillen in animated form serves as the host of the series while portraying a private investigator who takes on "cases" in which he "hunts down" lethal threats to humanity posed by the cosmos. The animated sequences combine the characteristics of film noir with those of a pulp fiction graphic novel set in the mid-20th century, and they link conventional live-action documentary segments in which experts describe the potentially deadly phenomena.
In the post-classic era, a significant trend in noir crossovers has involved science fiction. In Jean-Luc Godard's Alphaville (1965), Lemmy Caution is the name of the old-school private eye in the city of tomorrow. The Groundstar Conspiracy (1972) centers on another implacable investigator and an amnesiac named Welles. Soylent Green (1973), the first major American example, portrays a dystopian, near-future world via a noir detection plot; starring Charlton Heston (the lead in Touch of Evil), it also features classic noir standbys Joseph Cotten, Edward G. Robinson, and Whit Bissell. The film was directed by Richard Fleischer, who two decades before had directed several strong B noirs, including Armored Car Robbery (1950) and The Narrow Margin (1952).
The cynical and stylized perspective of classic film noir had a formative effect on the cyberpunk genre of science fiction that emerged in the early 1980s; the film most directly influential on cyberpunk was Blade Runner (1982), directed by Ridley Scott, which pays evocative homage to the classic noir mode (Scott subsequently directed the poignant 1987 noir crime melodrama Someone to Watch Over Me). Scholar Jamaluddin Bin Aziz has observed how "the shadow of Philip Marlowe lingers on" in such other "future noir" films as 12 Monkeys (1995), Dark City (1998) and Minority Report (2002). David Fincher's feature debut was Alien 3 (1992), which evoked the classic noir jail film Brute Force.
David Cronenberg's Crash (1996), an adaptation of the speculative novel by J. G. Ballard, has been described as a "film noir in bruise tones". The hero is the target of investigation in Gattaca (1997), which fuses film noir motifs with a scenario indebted to Brave New World. The Thirteenth Floor (1999), like Blade Runner, is an explicit homage to classic noir, in this case involving speculations about virtual reality. Science fiction, noir, and anime are brought together in the Japanese films Ghost in the Shell (1995) and Ghost in the Shell 2: Innocence (2004), both directed by Mamoru Oshii. The Animatrix (2003), based on and set within the world of The Matrix film trilogy, contains an anime short film in classic noir style titled "A Detective Story". Anime television series with science fiction noir themes include Noir (2001) and Cowboy Bebop (1998).
The 2015 film Ex Machina puts an understated film noir spin on the Frankenstein mythos, with the sentient android Ava as a potential femme fatale, her creator Nathan embodying the abusive husband or father trope, and her would-be rescuer Caleb as a "clueless drifter" enthralled by Ava.
Film noir has been parodied many times in many manners. In 1945, Danny Kaye starred in what appears to be the first intentional film noir parody, Wonder Man. That same year, Deanna Durbin was the singing lead in the comedic noir Lady on a Train, which makes fun of Woolrich-brand wistful miserablism. Bob Hope inaugurated the private-eye noir parody with My Favorite Brunette (1947), playing a baby-photographer who is mistaken for an ironfisted detective. In 1947 as well, The Bowery Boys appeared in Hard Boiled Mahoney, which had a similar mistaken-identity plot; they spoofed the genre once more in Private Eyes (1953). Two RKO productions starring Robert Mitchum take film noir over the border into self-parody: The Big Steal (1949), directed by Don Siegel, and His Kind of Woman (1951). The "Girl Hunt" ballet in Vincente Minnelli's The Band Wagon (1953) is a ten-minute distillation of—and play on—noir in dance. The Cheap Detective (1978), starring Peter Falk, is a broad spoof of several films, including the Bogart classics The Maltese Falcon and Casablanca. Carl Reiner's black-and-white Dead Men Don't Wear Plaid (1982) appropriates clips of classic noirs for a farcical pastiche, while his Fatal Instinct (1993) sends up noir classic (Double Indemnity) and neo-noir (Basic Instinct). Robert Zemeckis's Who Framed Roger Rabbit (1988) develops a noir plot set in 1940s Los Angeles around a host of cartoon characters.
Noir parodies come in darker tones as well. Murder by Contract (1958), directed by Irving Lerner, is a deadpan joke on noir, with a denouement as bleak as any of the films it kids. An ultra-low-budget Columbia Pictures production, it may qualify as the first intentional example of what is now called a neo-noir film; it was likely a source of inspiration for both Melville's Le Samouraï and Scorsese's Taxi Driver. Belying its parodic strain, The Long Goodbye's final act is seriously grave. Taxi Driver caustically deconstructs the "dark" crime film, taking it to an absurd extreme and then offering a conclusion that manages to mock every possible anticipated ending—triumphant, tragic, artfully ambivalent—while being each, all at once. Flirting with splatter status even more brazenly, the Coens' Blood Simple is both an exacting pastiche and a gross exaggeration of classic noir. Adapted by director Robinson Devor from a novel by Charles Willeford, The Woman Chaser (1999) sends up not just the noir mode but the entire Hollywood filmmaking process, with each shot seemingly staged as the visual equivalent of an acerbic Marlowe wisecrack.
In other media, the television series Sledge Hammer! (1986–88) lampoons noir, along with such topics as capital punishment, gun fetishism, and Dirty Harry. Sesame Street (1969–curr.) occasionally casts Kermit the Frog as a private eye; the sketches refer to some of the typical motifs of noir films, in particular the voiceover. Garrison Keillor's radio program A Prairie Home Companion features the recurring character Guy Noir, a hardboiled detective whose adventures always wander into farce (Guy also appears in the Altman-directed film based on Keillor's show). Firesign Theatre's Nick Danger has trodden the same not-so-mean streets, both on radio and in comedy albums. Cartoons such as Garfield's Babes and Bullets (1989) and comic strip characters such as Tracer Bullet of Calvin and Hobbes have parodied both film noir and the kindred hardboiled tradition—one of the sources from which film noir sprang and which it now overshadows. It’s Always Sunny in Philadelphia parodied the noir genre in its season 14 episode "The Janitor Always Mops Twice."
In their original 1955 canon of film noir, Raymond Borde and Etienne Chaumeton identified twenty-two Hollywood films released between 1941 and 1952 as core examples; they listed another fifty-nine American films from the period as significantly related to the field of noir. A half-century later, film historians and critics had come to agree on a canon of approximately three hundred films from 1940 to 1958. There remain, however, many differences of opinion over whether other films of the era, among them a number of well-known ones, qualify as films noir or not. For instance, The Night of the Hunter (1955), starring Robert Mitchum in an acclaimed performance, is treated as a film noir by some critics, but not by others. Some critics include Suspicion (1941), directed by Alfred Hitchcock, in their catalogues of noir; others ignore it. Concerning films made either before or after the classic period, or outside of the United States at any time, consensus is even rarer.
To support their categorization of certain films as noirs and their rejection of others, many critics refer to a set of elements they see as marking examples of the mode. The question of what constitutes the set of noir's identifying characteristics is a fundamental source of controversy. For instance, critics tend to define the model film noir as having a tragic or bleak conclusion, but many acknowledged classics of the genre have clearly happy endings (e.g., Stranger on the Third Floor, The Big Sleep, Dark Passage, and The Dark Corner), while the tone of many other noir denouements is ambivalent. Some critics perceive classic noir's hallmark as a distinctive visual style. Others, observing that there is actually considerable stylistic variety among noirs, instead emphasize plot and character type. Still others focus on mood and attitude. No survey of classic noir's identifying characteristics can therefore be considered definitive. Since the 1990s, critics have increasingly turned their attention to the diverse field of films called neo-noir; there is even less consensus about the defining attributes of such films made outside the classic period. Roger Ebert offered his own catalogue of the mode's hallmarks in "A Guide to Film Noir".
The low-key lighting schemes of many classic films noir are associated with stark light/dark contrasts and dramatic shadow patterning—a style known as chiaroscuro (a term adopted from Renaissance painting). The shadows of Venetian blinds or banister rods, cast upon an actor, a wall, or an entire set, are an iconic visual in noir and had already become a cliché well before the neo-noir era. Characters' faces may be partially or wholly obscured by darkness—a relative rarity in conventional Hollywood filmmaking. While black-and-white cinematography is considered by many to be one of the essential attributes of classic noir, the color films Leave Her to Heaven (1945) and Niagara (1953) are routinely included in noir filmographies, while Slightly Scarlet (1956), Party Girl (1958), and Vertigo (1958) are classified as noir by varying numbers of critics.
Film noir is also known for its use of low-angle, wide-angle, and skewed (or Dutch angle) shots. Other devices of disorientation relatively common in film noir include shots of people reflected in one or more mirrors, shots through curved or frosted glass or other distorting objects (such as during the strangulation scene in Strangers on a Train), and special effects sequences of a sometimes bizarre nature. Night-for-night shooting, as opposed to the Hollywood norm of day-for-night, was often employed. From the mid-1940s forward, location shooting became increasingly frequent in noir.
In an analysis of the visual approach of Kiss Me Deadly, a late and self-consciously stylized example of classic noir, critic Alain Silver describes how cinematographic choices emphasize the story's themes and mood. In one scene, the characters, seen through a "confusion of angular shapes", thus appear "caught in a tangible vortex or enclosed in a trap." Silver makes a case for how "side light is used ... to reflect character ambivalence", while shots of characters in which they are lit from below "conform to a convention of visual expression which associates shadows cast upward of the face with the unnatural and ominous".
Films noir tend to have unusually convoluted story lines, frequently involving flashbacks and other editing techniques that disrupt and sometimes obscure the narrative sequence. Framing the entire primary narrative as a flashback is also a standard device. Voiceover narration, sometimes used as a structuring device, came to be seen as a noir hallmark; while classic noir is generally associated with first-person narration (i.e., by the protagonist), Stephen Neale notes that third-person narration is common among noirs of the semidocumentary style. Neo-noirs as varied as The Element of Crime (surrealist), After Dark, My Sweet (retro), and Kiss Kiss Bang Bang (meta) have employed the flashback/voiceover combination.
Bold experiments in cinematic storytelling were sometimes attempted during the classic era: Lady in the Lake, for example, is shot entirely from the point of view of protagonist Philip Marlowe; the face of star (and director) Robert Montgomery is seen only in mirrors. The Chase (1946) takes oneirism and fatalism as the basis for its fantastical narrative system, redolent of certain horror stories, but with little precedent in the context of a putatively realistic genre. In their different ways, both Sunset Boulevard and D.O.A. are tales told by dead men. Latter-day noir has been in the forefront of structural experimentation in popular cinema, as exemplified by such films as Pulp Fiction, Fight Club, and Memento.
Crime, usually murder, is an element of almost all films noir; in addition to standard-issue greed, jealousy is frequently the criminal motivation. A crime investigation—by a private eye, a police detective (sometimes acting alone), or a concerned amateur—is the most prevalent, but far from dominant, basic plot. In other common plots the protagonists are implicated in heists or con games, or in murderous conspiracies often involving adulterous affairs. False suspicions and accusations of crime are frequent plot elements, as are betrayals and double-crosses. According to J. David Slocum, "protagonists assume the literal identities of dead men in nearly fifteen percent of all noir." Amnesia is fairly epidemic—"noir's version of the common cold", in the words of film historian Lee Server.
Films noir tend to revolve around heroes who are more flawed and morally questionable than the norm, often fall guys of one sort or another. The characteristic protagonists of noir are described by many critics as "alienated"; in the words of Silver and Ward, "filled with existential bitterness". Certain archetypal characters appear in many film noirs—hardboiled detectives, femmes fatales, corrupt policemen, jealous husbands, intrepid claims adjusters, and down-and-out writers. Among characters of every stripe, cigarette smoking is rampant. From historical commentators to neo-noir pictures to pop culture ephemera, the private eye and the femme fatale have been adopted as the quintessential film noir figures, though they do not appear in most films now regarded as classic noir. Of the twenty-six National Film Registry noirs, in only four does the star play a private eye: The Maltese Falcon, The Big Sleep, Out of the Past, and Kiss Me Deadly. Just four others readily qualify as detective stories: Laura, The Killers, The Naked City, and Touch of Evil. There is usually an element of drug or alcohol use, particularly as part of the detective's method of solving the crime; for example, Mike Hammer in the 1955 film Kiss Me Deadly walks into a bar saying, "Give me a double bourbon, and leave the bottle". Borde and Chaumeton have argued that film noir grew out of the "literature of drugs and alcohol".
Film noir is often associated with an urban setting, and a few cities—Los Angeles, San Francisco, New York, and Chicago, in particular—are the location of many of the classic films. In the eyes of many critics, the city is presented in noir as a "labyrinth" or "maze". Bars, lounges, nightclubs, and gambling dens are frequently the scene of action. The climaxes of a substantial number of film noirs take place in visually complex, often industrial settings, such as refineries, factories, trainyards, power plants—most famously the explosive conclusion of White Heat, set at a chemical plant. In the popular (and, frequently enough, critical) imagination, in noir it is always night and it is always raining.
A substantial trend within latter-day noir—dubbed "film soleil" by critic D. K. Holm—heads in precisely the opposite direction, with tales of deception, seduction, and corruption exploiting bright, sun-baked settings, stereotypically the desert or open water, to searing effect. Significant predecessors from the classic and early post-classic eras include The Lady from Shanghai; the Robert Ryan vehicle Inferno (1953); the French adaptation of Patricia Highsmith's The Talented Mr. Ripley, Plein soleil (Purple Noon in the United States, more accurately rendered elsewhere as Blazing Sun or Full Sun; 1960); and director Don Siegel's version of The Killers (1964). The tendency was at its peak during the late 1980s and 1990s, with films such as Dead Calm (1989), After Dark, My Sweet (1990), The Hot Spot (1990), Delusion (1991), Red Rock West (1993) and the television series Miami Vice.
Film noir is often described as essentially pessimistic. The noir stories that are regarded as most characteristic tell of people trapped in unwanted situations (which, in general, they did not cause but are responsible for exacerbating), striving against random, uncaring fate, and frequently doomed. The films are seen as depicting a world that is inherently corrupt. Classic film noir has been associated by many critics with the American social landscape of the era—in particular, with a sense of heightened anxiety and alienation that is said to have followed World War II. In author Nicholas Christopher's opinion, "it is as if the war, and the social eruptions in its aftermath, unleashed demons that had been bottled up in the national psyche." Films noir, especially those of the 1950s and the height of the Red Scare, are often said to reflect cultural paranoia; Kiss Me Deadly is the noir most frequently marshaled as evidence for this claim.
Film noir is often said to be defined by "moral ambiguity", yet the Production Code obliged almost all classic noirs to see that steadfast virtue was ultimately rewarded and vice, in the absence of shame and redemption, severely punished (however dramatically incredible the final rendering of mandatory justice might be). A substantial number of latter-day noirs flout such conventions: vice emerges triumphant in films as varied as the grim Chinatown and the ribald Hot Spot.
The tone of film noir is generally regarded as downbeat; some critics experience it as darker still—"overwhelmingly black", according to Robert Ottoson. Influential critic (and filmmaker) Paul Schrader wrote in a seminal 1972 essay that "film noir is defined by tone", a tone he seems to perceive as "hopeless". In describing the adaptation of Double Indemnity, noir analyst Foster Hirsch describes the "requisite hopeless tone" achieved by the filmmakers, which appears to characterize his view of noir as a whole. On the other hand, definitive film noirs such as The Big Sleep, The Lady from Shanghai, Scarlet Street and Double Indemnity itself are famed for their hardboiled repartee, often imbued with sexual innuendo and self-reflexive humor.
The music of film noir was typically orchestral, per the Hollywood norm, but often with added dissonance. Many of the prime composers, like the directors and cameramen, were European émigrés, e.g., Max Steiner (The Big Sleep, Mildred Pierce), Miklós Rózsa (Double Indemnity, The Killers, Criss Cross), and Franz Waxman (Fury, Sunset Boulevard, Night and the City). Rózsa's score for Double Indemnity was seminal: initially disliked by Paramount's music director for its harshness, it was strongly endorsed by director Billy Wilder and studio chief Buddy DeSylva. There is a widespread popular impression that "sleazy" jazz saxophone and pizzicato bass constitute the sound of noir, but those characteristics arose much later, as in the late-1950s music of Henry Mancini for Touch of Evil and television's Peter Gunn. Bernard Herrmann's score for Taxi Driver makes heavy use of saxophone.
{
"paragraph_id": 0,
"text": "Film noir (/nwɑːr/; French: [film nwaʁ]) is a cinematic term used primarily to describe stylized Hollywood crime dramas, particularly those that emphasize cynical attitudes and motivations. The 1940s and 1950s are generally regarded as the \"classic period\" of American film noir. Film noir of this era is associated with a low-key, black-and-white visual style that has roots in German Expressionist cinematography. Many of the prototypical stories and much of the attitude of classic noir derive from the hardboiled school of crime fiction that emerged in the United States during the Great Depression.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The term film noir, French for 'black film' (literal) or 'dark film' (closer meaning), was first applied to Hollywood films by French critic Nino Frank in 1946, but was unrecognized by most American film industry professionals of that era. Frank is believed to have been inspired by the French literary publishing imprint Série noire, founded in 1945.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Cinema historians and critics defined the category retrospectively. Before the notion was widely adopted in the 1970s, many of the classic films noir were referred to as \"melodramas\". Whether film noir qualifies as a distinct genre or whether it is more of a filmmaking style is a matter of ongoing and heavy debate among scholars.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Film noir encompasses a range of plots: the central figure may be a private investigator (The Big Sleep), a plainclothes police officer (The Big Heat), an aging boxer (The Set-Up), a hapless grifter (Night and the City), a law-abiding citizen lured into a life of crime (Gun Crazy), a femme fatale (Gilda) or simply a victim of circumstance (D.O.A.). Although film noir was originally associated with American productions, the term has been used to describe films from around the world. Many films released from the 1960s onward share attributes with films noir of the classical period, and often treat its conventions self-referentially. Some refer to such latter-day works as neo-noir. The clichés of film noir have inspired parody since the mid-1940s.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The questions of what defines film noir, and what sort of category it is, provoke continuing debate. \"We'd be oversimplifying things in calling film noir oneiric, strange, erotic, ambivalent, and cruel ...\"—this set of attributes constitutes the first of many attempts to define film noir made by French critics Raymond Borde and Étienne Chaumeton in their 1955 book Panorama du film noir américain 1941–1953 (A Panorama of American Film Noir), the original and seminal extended treatment of the subject. They emphasize that not every noir film embodies all five attributes in equal measure—one might be more dreamlike; another, particularly brutal. The authors' caveats and repeated efforts at alternative definition have been echoed in subsequent scholarship, but in the words of cinema historian Mark Bould, film noir remains an \"elusive phenomenon.\"",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "Though film noir is often identified with a visual style that emphasizes low-key lighting and unbalanced compositions, films commonly identified as noir evidence a variety of visual approaches, including ones that fit comfortably within the Hollywood mainstream. Film noir similarly embraces a variety of genres, from the gangster film to the police procedural to the gothic romance to the social problem picture—any example of which from the 1940s and 1950s, now seen as noir's classical era, was likely to be described as a melodrama at the time.",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "It is night, always. The hero enters a labyrinth on a quest. He is alone and off balance. He may be desperate, in flight, or coldly calculating, imagining he is the pursuer rather than the pursued.",
"title": "Definition"
},
{
"paragraph_id": 7,
"text": "A woman invariably joins him at a critical juncture, when he is most vulnerable. [Her] eventual betrayal of him (or herself) is as ambiguous as her feelings about him.",
"title": "Definition"
},
{
"paragraph_id": 8,
"text": "Nicholas Christopher, Somewhere in the Night (1997)",
"title": "Definition"
},
{
"paragraph_id": 9,
"text": "While many critics refer to film noir as a genre itself, others argue that it can be no such thing. Foster Hirsch defines a genre as determined by \"conventions of narrative structure, characterization, theme, and visual design.\" Hirsch, as one who has taken the position that film noir is a genre, argues that these elements are present \"in abundance.\" Hirsch notes that there are unifying features of tone, visual style and narrative sufficient to classify noir as a distinct genre.",
"title": "Definition"
},
{
"paragraph_id": 10,
"text": "Others argue that film noir is not a genre. It is often associated with an urban setting, but many classic noirs take place in small towns, suburbia, rural areas, or on the open road; setting is not a determinant, as with the Western. Similarly, while the private eye and the femme fatale are stock character types conventionally identified with noir, the majority of films in the genre feature neither. Nor does film noir rely on anything as evident as the monstrous or supernatural elements of the horror film, the speculative leaps of the science fiction film, or the song-and-dance routines of the musical.",
"title": "Definition"
},
{
"paragraph_id": 11,
"text": "An analogous case is that of the screwball comedy, widely accepted by film historians as constituting a \"genre\": screwball is defined not by a fundamental attribute, but by a general disposition and a group of elements, some—but rarely and perhaps never all—of which are found in each of the genre's films. Because of the diversity of noir (much greater than that of the screwball comedy), certain scholars in the field, such as film historian Thomas Schatz, treat it as not a genre but a \"style\". Alain Silver, the most widely published American critic specializing in film noir studies, refers to film noir as a \"cycle\" and a \"phenomenon\", even as he argues that it has—like certain genres—a consistent set of visual and thematic codes. Screenwriter Eric R. Williams labels both film noir and screwball comedy a \"pathway\" in his screenwriters taxonomy; explaining that a pathway has two parts: 1) the way the audience connects with the protagonist and 2) the trajectory the audience expects the story to follow. Other critics treat film noir as a \"mood,\" a \"series\", or simply a chosen set of films they regard as belonging to the noir \"canon.\" There is no consensus on the matter.",
"title": "Definition"
},
{
"paragraph_id": 12,
"text": "The aesthetics of film noir were influenced by German Expressionism, an artistic movement of the 1910s and 1920s that involved theater, music, photography, painting, sculpture and architecture, as well as cinema. The opportunities offered by the booming Hollywood film industry and then the threat of Nazism led to the emigration of many film artists working in Germany who had been involved in the Expressionist movement or studied with its practitioners. M (1931), shot only a few years before director Fritz Lang's departure from Germany, is among the first crime films of the sound era to join a characteristically noirish visual style with a noir-type plot, in which the protagonist is a criminal (as are his most successful pursuers). Directors such as Lang, Jacques Tourneur, Robert Siodmak and Michael Curtiz brought a dramatically shadowed lighting style and a psychologically expressive approach to visual composition (mise-en-scène) with them to Hollywood, where they made some of the most famous classic noirs.",
"title": "Background"
},
{
"paragraph_id": 13,
"text": "By 1931, Curtiz had already been in Hollywood for half a decade, making as many as six films a year. Movies of his such as 20,000 Years in Sing Sing (1932) and Private Detective 62 (1933) are among the early Hollywood sound films arguably classifiable as noir—scholar Marc Vernet offers the latter as evidence that dating the initiation of film noir to 1940 or any other year is \"arbitrary\". Expressionism-orientated filmmakers had free stylistic rein in Universal horror pictures such as Dracula (1931), The Mummy (1932)—the former photographed and the latter directed by the Berlin-trained Karl Freund—and The Black Cat (1934), directed by Austrian émigré Edgar G. Ulmer. The Universal horror film that comes closest to noir, in story and sensibility, is The Invisible Man (1933), directed by Englishman James Whale and photographed by American Arthur Edeson. Edeson later photographed The Maltese Falcon (1941), widely regarded as the first major film noir of the classic era.",
"title": "Background"
},
{
"paragraph_id": 14,
"text": "Josef von Sternberg was directing in Hollywood during the same period. Films of his such as Shanghai Express (1932) and The Devil Is a Woman (1935), with their hothouse eroticism and baroque visual style anticipated central elements of classic noir. The commercial and critical success of Sternberg's silent Underworld (1927) was largely responsible for spurring a trend of Hollywood gangster films. Successful films in that genre such as Little Caesar (1931), The Public Enemy (1931) and Scarface (1932) demonstrated that there was an audience for crime dramas with morally reprehensible protagonists. An important, possibly influential, cinematic antecedent to classic noir was 1930s French poetic realism, with its romantic, fatalistic attitude and celebration of doomed heroes. The movement's sensibility is mirrored in the Warner Bros. drama I Am a Fugitive from a Chain Gang (1932), a forerunner of noir. Among films not considered noir, perhaps none had a greater effect on the development of the genre than Citizen Kane (1941), directed by Orson Welles. Its visual intricacy and complex, voiceover narrative structure are echoed in dozens of classic films noir.",
"title": "Background"
},
{
"paragraph_id": 15,
"text": "Italian neorealism of the 1940s, with its emphasis on quasi-documentary authenticity, was an acknowledged influence on trends that emerged in American noir. The Lost Weekend (1945), directed by Billy Wilder, another Vienna-born, Berlin-trained American auteur, tells the story of an alcoholic in a manner evocative of neorealism. It also exemplifies the problem of classification: one of the first American films to be described as a film noir, it has largely disappeared from considerations of the field. Director Jules Dassin of The Naked City (1948) pointed to the neorealists as inspiring his use of location photography with non-professional extras. This semidocumentary approach characterized a substantial number of noirs in the late 1940s and early 1950s. Along with neorealism, the style had an American precedent cited by Dassin, in director Henry Hathaway's The House on 92nd Street (1945), which demonstrated the parallel influence of the cinematic newsreel.",
"title": "Background"
},
{
"paragraph_id": 16,
"text": "The primary literary influence on film noir was the hardboiled school of American detective and crime fiction, led in its early years by such writers as Dashiell Hammett (whose first novel, Red Harvest, was published in 1929) and James M. Cain (whose The Postman Always Rings Twice appeared five years later), and popularized in pulp magazines such as Black Mask. The classic film noirs The Maltese Falcon (1941) and The Glass Key (1942) were based on novels by Hammett; Cain's novels provided the basis for Double Indemnity (1944), Mildred Pierce (1945), The Postman Always Rings Twice (1946), and Slightly Scarlet (1956; adapted from Love's Lovely Counterfeit). A decade before the classic era, a story by Hammett was the source for the gangster melodrama City Streets (1931), directed by Rouben Mamoulian and photographed by Lee Garmes, who worked regularly with Sternberg. Released the month before Lang's M, City Streets has a claim to being the first major film noir; both its style and story had many noir characteristics.",
"title": "Background"
},
{
"paragraph_id": 17,
"text": "Raymond Chandler, who debuted as a novelist with The Big Sleep in 1939, soon became the most famous author of the hardboiled school. Not only were Chandler's novels turned into major noirs—Murder, My Sweet (1944; adapted from Farewell, My Lovely), The Big Sleep (1946), and Lady in the Lake (1947)—he was an important screenwriter in the genre as well, producing the scripts for Double Indemnity, The Blue Dahlia (1946), and Strangers on a Train (1951). Where Chandler, like Hammett, centered most of his novels and stories on the character of the private eye, Cain featured less heroic protagonists and focused more on psychological exposition than on crime solving; the Cain approach has come to be identified with a subset of the hardboiled genre dubbed \"noir fiction\". For much of the 1940s, one of the most prolific and successful authors of this often downbeat brand of suspense tale was Cornell Woolrich (sometimes under the pseudonym George Hopley or William Irish). No writer's published work provided the basis for more noir films of the classic period than Woolrich's: thirteen in all, including Black Angel (1946), Deadline at Dawn (1946), and Fear in the Night (1947).",
"title": "Background"
},
{
"paragraph_id": 18,
"text": "Another crucial literary source for film noir was W. R. Burnett, whose first novel to be published was Little Caesar, in 1929. It was turned into a hit for Warner Bros. in 1931; the following year, Burnett was hired to write dialogue for Scarface, while The Beast of the City (1932) was adapted from one of his stories. At least one important reference work identifies the latter as a film noir despite its early date. Burnett's characteristic narrative approach fell somewhere between that of the quintessential hardboiled writers and their noir fiction compatriots—his protagonists were often heroic in their own way, which happened to be that of the gangster. During the classic era, his work, either as author or screenwriter, was the basis for seven films now widely regarded as noir, including three of the most famous: High Sierra (1941), This Gun for Hire (1942), and The Asphalt Jungle (1950).",
"title": "Background"
},
{
"paragraph_id": 19,
"text": "The 1940s and 1950s are generally regarded as the classic period of American film noir. While City Streets and other pre-WWII crime melodramas such as Fury (1936) and You Only Live Once (1937), both directed by Fritz Lang, are categorized as full-fledged noir in Alain Silver and Elizabeth Ward's film noir encyclopedia, other critics tend to describe them as \"proto-noir\" or in similar terms.",
"title": "Classic period"
},
{
"paragraph_id": 20,
"text": "The film now most commonly cited as the first \"true\" film noir is Stranger on the Third Floor (1940), directed by Latvian-born, Soviet-trained Boris Ingster. Hungarian émigré Peter Lorre—who had starred in Lang's M—was top-billed, although he did not play the primary lead. (He later played secondary roles in several other formative American noirs.) Although modestly budgeted, at the high end of the B movie scale, Stranger on the Third Floor still lost its studio, RKO, US$56,000 (equivalent to $1,169,748 in 2022), almost a third of its total cost. Variety magazine found Ingster's work: \"...too studied and when original, lacks the flare to hold attention. It's a film too arty for average audiences, and too humdrum for others.\" Stranger on the Third Floor was not recognized as the beginning of a trend, let alone a new genre, for many decades.",
"title": "Classic period"
},
{
"paragraph_id": 21,
"text": "Whoever went to the movies with any regularity during 1946 was caught in the midst of Hollywood's profound postwar affection for morbid drama. From January through December deep shadows, clutching hands, exploding revolvers, sadistic villains and heroines tormented with deeply rooted diseases of the mind flashed across the screen in a panting display of psychoneurosis, unsublimated sex and murder most foul.",
"title": "Classic period"
},
{
"paragraph_id": 22,
"text": "Donald Marshman, Life (August 25, 1947)",
"title": "Classic period"
},
{
"paragraph_id": 23,
"text": "Most film noirs of the classic period were similarly low- and modestly-budgeted features without major stars—B movies either literally or in spirit. In this production context, writers, directors, cinematographers, and other craftsmen were relatively free from typical big-picture constraints. There was more visual experimentation than in Hollywood filmmaking as a whole: the Expressionism now closely associated with noir and the semi-documentary style that later emerged represent two very different tendencies. Narrative structures sometimes involved convoluted flashbacks uncommon in non-noir commercial productions. In terms of content, enforcement of the Production Code ensured that no film character could literally get away with murder or be seen sharing a bed with anyone but a spouse; within those bounds, however, many films now identified as noir feature plot elements and dialogue that were very risqué for the time.",
"title": "Classic period"
},
{
"paragraph_id": 24,
"text": "Thematically, films noir were most exceptional for the relative frequency with which they centered on portrayals of women of questionable virtue—a focus that had become rare in Hollywood films after the mid-1930s and the end of the pre-Code era. The signal film in this vein was Double Indemnity, directed by Billy Wilder; setting the mold was Barbara Stanwyck's femme fatale, Phyllis Dietrichson—an apparent nod to Marlene Dietrich, who had built her extraordinary career playing such characters for Sternberg. An A-level feature, the film's commercial success and seven Oscar nominations made it probably the most influential of the early noirs. A slew of now-renowned noir \"bad girls\" followed, such as those played by Rita Hayworth in Gilda (1946), Lana Turner in The Postman Always Rings Twice (1946), Ava Gardner in The Killers (1946), and Jane Greer in Out of the Past (1947). The iconic noir counterpart to the femme fatale, the private eye, came to the fore in films such as The Maltese Falcon (1941), with Humphrey Bogart as Sam Spade, and Murder, My Sweet (1944), with Dick Powell as Philip Marlowe.",
"title": "Classic period"
},
{
"paragraph_id": 25,
"text": "The prevalence of the private eye as a lead character declined in film noir of the 1950s, a period during which several critics describe the form as becoming more focused on extreme psychologies and more exaggerated in general. A prime example is Kiss Me Deadly (1955); based on a novel by Mickey Spillane, the best-selling of all the hardboiled authors, here the protagonist is a private eye, Mike Hammer. As described by Paul Schrader, \"Robert Aldrich's teasing direction carries noir to its sleaziest and most perversely erotic. Hammer overturns the underworld in search of the 'great whatsit' [which] turns out to be—joke of jokes—an exploding atomic bomb.\" Orson Welles's baroquely styled Touch of Evil (1958) is frequently cited as the last noir of the classic period. Some scholars believe film noir never really ended, but continued to transform even as the characteristic noir visual style began to seem dated and changing production conditions led Hollywood in different directions—in this view, post-1950s films in the noir tradition are seen as part of a continuity with classic noir. A majority of critics, however, regard comparable films made outside the classic era to be something other than genuine film noir. They regard true film noir as belonging to a temporally and geographically limited cycle or period, treating subsequent films that evoke the classics as fundamentally different due to general shifts in filmmaking style and latter-day awareness of noir as a historical source for allusion. These later films are often called neo-noir.",
"title": "Classic period"
},
{
"paragraph_id": 26,
"text": "While the inceptive noir, Stranger on the Third Floor, was a B picture directed by a virtual unknown, many of the films noir still remembered were A-list productions by well-known film makers. Debuting as a director with The Maltese Falcon (1941), John Huston followed with Key Largo (1948) and The Asphalt Jungle (1950). Opinion is divided on the noir status of several Alfred Hitchcock thrillers from the era; at least four qualify by consensus: Shadow of a Doubt (1943), Notorious (1946), Strangers on a Train (1951) and The Wrong Man (1956), Otto Preminger's success with Laura (1944) made his name and helped demonstrate noir's adaptability to a high-gloss 20th Century-Fox presentation. Among Hollywood's most celebrated directors of the era, arguably none worked more often in a noir mode than Preminger; his other noirs include Fallen Angel (1945), Whirlpool (1949), Where the Sidewalk Ends (1950) (all for Fox) and Angel Face (1952). A half-decade after Double Indemnity and The Lost Weekend, Billy Wilder made Sunset Boulevard (1950) and Ace in the Hole (1951), noirs that were not so much crime dramas as satires on Hollywood and the news media respectively. In a Lonely Place (1950) was Nicholas Ray's breakthrough; his other noirs include his debut, They Live by Night (1948) and On Dangerous Ground (1952), noted for their unusually sympathetic treatment of characters alienated from the social mainstream.",
"title": "Classic period"
},
{
"paragraph_id": 27,
"text": "Orson Welles had notorious problems with financing but his three film noirs were well-budgeted: The Lady from Shanghai (1947) received top-level, \"prestige\" backing, while The Stranger (1946), his most conventional film, and Touch of Evil (1958), an unmistakably personal work, were funded at levels lower but still commensurate with headlining releases. Like The Stranger, Fritz Lang's The Woman in the Window (1944) was a production of the independent International Pictures. Lang's follow-up, Scarlet Street (1945), was one of the few classic noirs to be officially censored: filled with erotic innuendo, it was temporarily banned in Milwaukee, Atlanta and New York State. Scarlet Street was a semi-independent, cosponsored by Universal and Lang's Diana Productions, of which the film's co-star, Joan Bennett, was the second biggest shareholder. Lang, Bennett and her husband, the Universal veteran and Diana production head Walter Wanger, made Secret Beyond the Door (1948) in similar fashion.",
"title": "Classic period"
},
{
"paragraph_id": 28,
"text": "Before leaving the United States while subject to the Hollywood blacklist, Jules Dassin made two classic noirs that also straddled the major/independent line: Brute Force (1947) and the influential documentary-style The Naked City (1948) were developed by producer Mark Hellinger, who had an \"inside/outside\" contract with Universal similar to Wanger's. Years earlier, working at Warner Bros., Hellinger had produced three films for Raoul Walsh, the proto-noirs They Drive by Night (1940), Manpower (1941) and High Sierra (1941), now regarded as a seminal work in noir's development. Walsh had no great name during his half-century as a director but his noirs White Heat (1949) and The Enforcer (1951) had A-list stars and are seen as important examples of the cycle. Other directors associated with top-of-the-bill Hollywood films noir include Edward Dmytryk (Murder, My Sweet (1944), Crossfire (1947))—the first important noir director to fall prey to the industry blacklist—as well as Henry Hathaway (The Dark Corner (1946), Kiss of Death (1947)) and John Farrow (The Big Clock (1948), Night Has a Thousand Eyes (1948)).",
"title": "Classic period"
},
{
"paragraph_id": 29,
"text": "Most of the Hollywood films considered to be classic noirs fall into the category of the B movie. Some were Bs in the most precise sense, produced to run on the bottom of double bills by a low-budget unit of one of the major studios or by one of the smaller Poverty Row outfits, from the relatively well-off Monogram to shakier ventures such as Producers Releasing Corporation (PRC). Jacques Tourneur had made over thirty Hollywood Bs (a few now highly regarded, most forgotten) before directing the A-level Out of the Past, described by scholar Robert Ottoson as \"the ne plus ultra of forties film noir\". Movies with budgets a step up the ladder, known as \"intermediates\" by the industry, might be treated as A or B pictures depending on the circumstances. Monogram created Allied Artists in the late 1940s to focus on this sort of production. Robert Wise (Born to Kill [1947], The Set-Up [1949]) and Anthony Mann (T-Men [1947] and Raw Deal [1948]) each made a series of impressive intermediates, many of them noirs, before graduating to steady work on big-budget productions. Mann did some of his most celebrated work with cinematographer John Alton, a specialist in what James Naremore called \"hypnotic moments of light-in-darkness\". He Walked by Night (1948), shot by Alton though credited solely to Alfred Werker, directed in large part by Mann, demonstrates their technical mastery and exemplifies the late 1940s trend of \"police procedural\" crime dramas. It was released, like other Mann-Alton noirs, by the small Eagle-Lion company; it was the inspiration for the Dragnet series, which debuted on radio in 1949 and television in 1951.",
"title": "Classic period"
},
{
"paragraph_id": 30,
"text": "Several directors associated with noir built well-respected oeuvres largely at the B-movie/intermediate level. Samuel Fuller's brutal, visually energetic films such as Pickup on South Street (1953) and Underworld U.S.A. (1961) earned him a unique reputation; his advocates praise him as \"primitive\" and \"barbarous\". Joseph H. Lewis directed noirs as diverse as Gun Crazy (1950) and The Big Combo (1955). The former—whose screenplay was written by the blacklisted Dalton Trumbo, disguised by a front—features a bank hold-up sequence shown in an unbroken take of over three minutes that was influential. The Big Combo was shot by John Alton and took the shadowy noir style to its outer limits. The most distinctive films of Phil Karlson (The Phenix City Story [1955] and The Brothers Rico [1957]) tell stories of vice organized on a monstrous scale. The work of other directors in this tier of the industry, such as Felix E. Feist (The Devil Thumbs a Ride [1947], Tomorrow Is Another Day [1951]), has become obscure. Edgar G. Ulmer spent most of his Hollywood career working at B studios and once in a while on projects that achieved intermediate status; for the most part, on unmistakable Bs. In 1945, while at PRC, he directed a noir cult classic, Detour. Ulmer's other noirs include Strange Illusion (1945), also for PRC; Ruthless (1948), for Eagle-Lion, which had acquired PRC the previous year and Murder Is My Beat (1955), for Allied Artists.",
"title": "Classic period"
},
{
"paragraph_id": 31,
"text": "A number of low- and modestly-budgeted noirs were made by independent, often actor-owned, companies contracting with larger studios for distribution. Serving as producer, writer, director and top-billed performer, Hugo Haas made films like Pickup (1951), The Other Woman (1954) and Jacques Tourneur, The Fearmakers (1958). It was in this way that accomplished noir actress Ida Lupino established herself as the sole female director in Hollywood during the late 1940s and much of the 1950s. She does not appear in the best-known film she directed, The Hitch-Hiker (1953), developed by her company, The Filmakers, with support and distribution by RKO. It is one of the seven classic film noirs produced largely outside of the major studios that have been chosen for the United States National Film Registry. Of the others, one was a small-studio release: Detour. Four were independent productions distributed by United Artists, the \"studio without a studio\": Gun Crazy; Kiss Me Deadly; D.O.A. (1950), directed by Rudolph Maté and Sweet Smell of Success (1957), directed by Alexander Mackendrick. One was an independent distributed by MGM, the industry leader: Force of Evil (1948), directed by Abraham Polonsky and starring John Garfield, both of whom were blacklisted in the 1950s. Independent production usually meant restricted circumstances but Sweet Smell of Success, despite the plans of the production team, was clearly not made on the cheap, though like many other cherished A-budget noirs, it might be said to have a B-movie soul.",
"title": "Classic period"
},
{
"paragraph_id": 32,
"text": "Perhaps no director better displayed that spirit than the German-born Robert Siodmak, who had already made a score of films before his 1940 arrival in Hollywood. Working mostly on A features, he made eight films now regarded as classic-era noir (a figure matched only by Lang and Mann). In addition to The Killers, Burt Lancaster's debut and a Hellinger/Universal co-production, Siodmak's other important contributions to the genre include 1944's Phantom Lady (a top-of-the-line B and Woolrich adaptation), the ironically titled Christmas Holiday (1944), and Cry of the City (1948). Criss Cross (1949), with Lancaster again the lead, exemplifies how Siodmak brought the virtues of the B-movie to the A noir. In addition to the relatively looser constraints on character and message at lower budgets, the nature of B production lent itself to the noir style for economic reasons: dim lighting saved on electricity and helped cloak cheap sets (mist and smoke also served the cause). Night shooting was often compelled by hurried production schedules. Plots with obscure motivations and intriguingly elliptical transitions were sometimes the consequence of hastily written scripts. There was not always enough time or money to shoot every scene. In Criss Cross, Siodmak achieved these effects, wrapping them around Yvonne De Carlo, who played the most understandable of femme fatales; Dan Duryea, in one of his many charismatic villain roles; and Lancaster as an ordinary laborer turned armed robber, doomed by a romantic obsession.",
"title": "Classic period"
},
{
"paragraph_id": 33,
"text": "Some critics regard classic film noir as a cycle exclusive to the United States; Alain Silver and Elizabeth Ward, for example, argue, \"With the Western, film noir shares the distinction of being an indigenous American form ... a wholly American film style.\" However, although the term \"film noir\" was originally coined to describe Hollywood movies, it was an international phenomenon. Even before the beginning of the generally accepted classic period, there were films made far from Hollywood that can be seen in retrospect as films noir, for example, the French productions Pépé le Moko (1937), directed by Julien Duvivier, and Le Jour se lève (1939), directed by Marcel Carné. In addition, Mexico experienced a vibrant film noir period from roughly 1946 to 1952, which was around the same time film noir was blossoming in the United States.",
"title": "Outside the United States"
},
{
"paragraph_id": 34,
"text": "During the classic period, there were many films produced in Europe, particularly in France, that share elements of style, theme, and sensibility with American films noir and may themselves be included in the genre's canon. In certain cases, the interrelationship with Hollywood noir is obvious: American-born director Jules Dassin moved to France in the early 1950s as a result of the Hollywood blacklist, and made one of the most famous French film noirs, Rififi (1955). Other well-known French films often classified as noir include Quai des Orfèvres (1947) and Les Diaboliques (1955), both directed by Henri-Georges Clouzot. Casque d'Or (1952), Touchez pas au grisbi (1954), and Le Trou (1960) directed by Jacques Becker; and Ascenseur pour l'échafaud (1958), directed by Louis Malle. French director Jean-Pierre Melville is widely recognized for his tragic, minimalist films noir—Bob le flambeur (1955), from the classic period, was followed by Le Doulos (1962), Le deuxième souffle 1966), Le Samouraï (1967), and Le Cercle rouge (1970). In the 1960s, Greek films noir \"The Secret of the Red Mantle\" and \"The Fear\" allowed audience for an anti-ableist reading which challenged stereotypes of disability. .",
"title": "Outside the United States"
},
{
"paragraph_id": 35,
"text": "Scholar Andrew Spicer argues that British film noir evidences a greater debt to French poetic realism than to the expressionistic American mode of noir. Examples of British noir (sometimes described as \"Brit noir\") from the classic period include Brighton Rock (1947), directed by John Boulting; They Made Me a Fugitive (1947), directed by Alberto Cavalcanti; The Small Back Room (1948), directed by Michael Powell and Emeric Pressburger; The October Man (1950), directed by Roy Ward Baker; and Cast a Dark Shadow (1955), directed by Lewis Gilbert. Terence Fisher directed several low-budget thrillers in a noir mode for Hammer Film Productions, including The Last Page (a.k.a. Man Bait; 1952), Stolen Face (1952), and Murder by Proxy (a.k.a. Blackout; 1954). Before leaving for France, Jules Dassin had been obliged by political pressure to shoot his last English-language film of the classic noir period in Great Britain: Night and the City (1950). Though it was conceived in the United States and was not only directed by an American but also stars two American actors—Richard Widmark and Gene Tierney—it is technically a UK production, financed by 20th Century-Fox's British subsidiary. The most famous of classic British noirs is director Carol Reed's The Third Man (1949), from a screenplay by Graham Greene. Set in Vienna immediately after World War II, it also stars two American actors, Joseph Cotten and Orson Welles, who had appeared together in Citizen Kane.",
"title": "Outside the United States"
},
{
"paragraph_id": 36,
"text": "Elsewhere, Italian director Luchino Visconti adapted Cain's The Postman Always Rings Twice as Ossessione (1943), regarded both as one of the great noirs and a seminal film in the development of neorealism. (This was not even the first screen version of Cain's novel, having been preceded by the French Le Dernier Tournant in 1939.) In Japan, the celebrated Akira Kurosawa directed several films recognizable as films noir, including Drunken Angel (1948), Stray Dog (1949), The Bad Sleep Well (1960), and High and Low (1963). Spanish author Mercedes Formica's novel La ciudad perdida (The Lost City) was adapted into film in 1960.",
"title": "Outside the United States"
},
{
"paragraph_id": 37,
"text": "Among the first major neo-noir films—the term often applied to films that consciously refer back to the classic noir tradition—was the French Tirez sur le pianiste (1960), directed by François Truffaut from a novel by one of the gloomiest of American noir fiction writers, David Goodis. Noir crime films and melodramas have been produced in many countries in the post-classic area. Some of these are quintessentially self-aware neo-noirs—for example, Il Conformista (1969; Italy), Der Amerikanische Freund (1977; Germany), The Element of Crime (1984; Denmark), and El Aura (2005; Argentina). Others simply share narrative elements and a version of the hardboiled sensibility associated with classic noir, such as Castle of Sand (1974; Japan), Insomnia (1997; Norway), Croupier (1998; UK), and Blind Shaft (2003; China).",
"title": "Outside the United States"
},
{
"paragraph_id": 38,
"text": "The neo-noir film genre developed mid-way into the Cold War. This cinematological trend reflected much of the cynicism and the possibility of nuclear annihilation of the era. This new genre introduced innovations that were not available to earlier noir films. The violence was also more potent.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 39,
"text": "While it is hard to draw a line between some of the noir films of the early 1960s such as Blast of Silence (1961) and Cape Fear (1962) and the noirs of the late 1950s, new trends emerged in the post-classic era. The Manchurian Candidate (1962), directed by John Frankenheimer, Shock Corridor (1963), directed by Samuel Fuller, and Brainstorm (1965), directed by experienced noir character actor William Conrad, all treat the theme of mental dispossession within stylistic and tonal frameworks derived from classic film noir. The Manchurian Candidate examined the situation of American prisoners of war (POWs) during the Korean War. Incidents that occurred during the war as well as those post-war functioned as an inspiration for a \"Cold War Noir\" subgenre. The television series The Fugitive (1963–67) brought classic noir themes and mood to the small screen for an extended run.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 40,
"text": "In a different vein, films began to appear that self-consciously acknowledged the conventions of classic film noir as historical archetypes to be revived, rejected, or reimagined. These efforts typify what came to be known as neo-noir. Though several late classic noirs, Kiss Me Deadly (1955) in particular, were deeply self-knowing and post-traditional in conception, none tipped its hand so evidently as to be remarked on by American critics at the time. The first major film to overtly work this angle was French director Jean-Luc Godard's À bout de souffle (Breathless; 1960), which pays its literal respects to Bogart and his crime films while brandishing a bold new style for a new day. In the United States, Arthur Penn (1965's Mickey One, drawing inspiration from Truffaut's Tirez sur le pianiste and other French New Wave films), John Boorman (1967's Point Blank, similarly caught up, though in the Nouvelle vague's deeper waters), and Alan J. Pakula (1971's Klute) directed films that knowingly related themselves to the original films noir, inviting audiences in on the game.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 41,
"text": "A manifest affiliation with noir traditions—which, by its nature, allows different sorts of commentary on them to be inferred—can also provide the basis for explicit critiques of those traditions. In 1973, director Robert Altman flipped off noir piety with The Long Goodbye. Based on the novel by Raymond Chandler, it features one of Bogart's most famous characters, but in iconoclastic fashion: Philip Marlowe, the prototypical hardboiled detective, is replayed as a hapless misfit, almost laughably out of touch with contemporary mores and morality. Where Altman's subversion of the film noir mythos was so irreverent as to outrage some contemporary critics, around the same time Woody Allen was paying affectionate, at points idolatrous homage to the classic mode with Play It Again, Sam (1972). The \"blaxploitation\" film Shaft (1971), wherein Richard Roundtree plays the titular African-American private eye, John Shaft, takes conventions from classic noir.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 42,
"text": "The most acclaimed of the neo-noirs of the era was director Roman Polanski's 1974 Chinatown. Written by Robert Towne, it is set in 1930s Los Angeles, an accustomed noir locale nudged back some few years in a way that makes the pivotal loss of innocence in the story even crueler. Where Polanski and Towne raised noir to a black apogee by turning rearward, director Martin Scorsese and screenwriter Paul Schrader brought the noir attitude crashing into the present day with Taxi Driver (1976), a crackling, bloody-minded gloss on bicentennial America. In 1978, Walter Hill wrote and directed The Driver, a chase film as might have been imagined by Jean-Pierre Melville in an especially abstract mood.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 43,
"text": "Hill was already a central figure in 1970s noir of a more straightforward manner, having written the script for director Sam Peckinpah's The Getaway (1972), adapting a novel by pulp master Jim Thompson, as well as for two tough private eye films: an original screenplay for Hickey & Boggs (1972) and an adaptation of a novel by Ross Macdonald, the leading literary descendant of Hammett and Chandler, for The Drowning Pool (1975). Some of the strongest 1970s noirs, in fact, were unwinking remakes of the classics, \"neo\" mostly by default: the heartbreaking Thieves Like Us (1974), directed by Altman from the same source as Ray's They Live by Night, and Farewell, My Lovely (1975), the Chandler tale made classically as Murder, My Sweet, remade here with Robert Mitchum in his last notable noir role. Detective series, prevalent on American television during the period, updated the hardboiled tradition in different ways, but the show conjuring the most noir tone was a horror crossover touched with shaggy, Long Goodbye-style humor: Kolchak: The Night Stalker (1974–75), featuring a Chicago newspaper reporter investigating strange, usually supernatural occurrences.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 44,
"text": "The turn of the decade brought Scorsese's black-and-white Raging Bull (1980, cowritten by Schrader). An acknowledged masterpiece—in 2007 the American Film Institute ranked it as the greatest American film of the 1980s and the fourth greatest of all time—it tells the story of a boxer's moral self-destruction that recalls in both theme and visual ambiance noir dramas such as Body and Soul (1947) and Champion (1949). From 1981, Body Heat, written and directed by Lawrence Kasdan, invokes a different set of classic noir elements, this time in a humid, erotically charged Florida setting. Its success confirmed the commercial viability of neo-noir at a time when the major Hollywood studios were becoming increasingly risk averse. The mainstreaming of neo-noir is evident in such films as Black Widow (1987), Shattered (1991), and Final Analysis (1992). Few neo-noirs have made more money or more wittily updated the tradition of the noir double entendre than Basic Instinct (1992), directed by Paul Verhoeven and written by Joe Eszterhas. The film also demonstrates how neo-noir's polychrome palette can reproduce many of the expressionistic effects of classic black-and-white noir.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 45,
"text": "Like Chinatown, its more complex predecessor, Curtis Hanson's Oscar-winning L.A. Confidential (1997), based on the James Ellroy novel, demonstrates the opposite tendency—the deliberately retro film noir; its tale of corrupt cops and femmes fatale is seemingly lifted straight from a film of 1953, the year in which it is set. Director David Fincher followed the immensely successful neo-noir Seven (1995) with a film that developed into a cult favorite after its original, disappointing release: Fight Club (1999), a sui generis mix of noir aesthetic, perverse comedy, speculative content, and satiric intent.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 46,
"text": "Working generally with much smaller budgets, brothers Joel and Ethan Coen have created one of the most extensive oeuvres influenced by classic noir, with films such as Blood Simple (1984) and Fargo (1996), the latter considered by some a supreme work in the neo-noir mode. The Coens cross noir with other generic traditions in the gangster drama Miller's Crossing (1990)—loosely based on the Dashiell Hammett novels Red Harvest and The Glass Key—and the comedy The Big Lebowski (1998), a tribute to Chandler and an homage to Altman's version of The Long Goodbye. The characteristic work of David Lynch combines film noir tropes with scenarios driven by disturbed characters such as the sociopathic criminal played by Dennis Hopper in Blue Velvet (1986) and the delusionary protagonist of Lost Highway (1997). The Twin Peaks cycle, both the TV series (1990–91) and a film, Fire Walk with Me (1992), puts a detective plot through a succession of bizarre spasms. David Cronenberg also mixes surrealism and noir in Naked Lunch (1991), inspired by William S. Burroughs' novel.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 47,
"text": "Perhaps no American neo-noirs better reflect the classic noir B movie spirit than those of director-writer Quentin Tarantino. Neo-noirs of his such as Reservoir Dogs (1992) and Pulp Fiction (1994) display a relentlessly self-reflexive, sometimes tongue-in-cheek sensibility, similar to the work of the New Wave directors and the Coens. Other films from the era readily identifiable as neo-noir (some retro, some more au courant) include director John Dahl's Kill Me Again (1989), Red Rock West (1992), and The Last Seduction (1993); four adaptations of novels by Jim Thompson—The Kill-Off (1989), After Dark, My Sweet (1990), The Grifters (1990), and the remake of The Getaway (1994); and many more, including adaptations of the work of other major noir fiction writers: The Hot Spot (1990), from Hell Hath No Fury, by Charles Williams; Miami Blues (1990), from the novel by Charles Willeford; and Out of Sight (1998), from the novel by Elmore Leonard. Several films by director-writer David Mamet involve noir elements: House of Games (1987), Homicide (1991), The Spanish Prisoner (1997), and Heist (2001). On television, Moonlighting (1985–89) paid homage to classic noir while demonstrating an unusual appreciation of the sense of humor often found in the original cycle. Between 1983 and 1989, Mickey Spillane's hardboiled private eye Mike Hammer was played with wry gusto by Stacy Keach in a series and several stand-alone television films (an unsuccessful revival followed in 1997–98). The British miniseries The Singing Detective (1986), written by Dennis Potter, tells the story of a mystery writer named Philip Marlow; widely considered one of the finest neo-noirs in any medium, some critics rank it among the greatest television productions of all time.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 48,
"text": "Among big-budget auteurs, Michael Mann has worked frequently in a neo-noir mode, with such films as Thief (1981) and Heat (1995) and the TV series Miami Vice (1984–89) and Crime Story (1986–88). Mann's output exemplifies a primary strain of neo-noir, or as it is affectionately called, \"neon noir\", in which classic themes and tropes are revisited in a contemporary setting with an up-to-date visual style and rock- or hip hop-based musical soundtrack.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 49,
"text": "Neo-noir film borrows from and reflects many of the characteristics of the film noir: the presence of crime and violence, complex characters and plot-lines, mystery, and moral ambivalence, all of which come into play in the neon-noir sub-genre. But more than just exhibiting the superficial traits of the genre, neon-noir emphasizes the socio-critique of film noir, recalling the specific socio-cultural dimensions of the interwar years when noirs first became prominent; a time of global existential crisis, depression and the mass movement of the rural population to cities. Long shots or montages of cityscapes, often portrayed as dark and menacing, are suggestive of what Dueck referred to as a ‘bleak societal perspective’, providing a critique on global capitalism and consumerism. Other characteristics include the use of highly stylized lighting techniques such chiaroscuro, and neon signs and brightly lit buildings that provide a sense of alienation and entrapment.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 50,
"text": "Accentuating the use of artificial and neon lighting in the films-noir of the '40s and '50s, neon-noir films accentuate this aesthetic with electrifying color and manipulated light in order to highlight their socio-cultural critiques and their references to contemporary and pop culture. In doing so, neon-noir films present the themes of urban decay, consumerist decadence and capitalism, existentialism, sexuality, and issues of race and violence in the contemporary culture, not only in America, but the globalized world at large.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 51,
"text": "Neon-noirs seek to bring the contemporary noir, somewhat diluted under the umbrella of neo-noir, back to the exploration of culture: class, race, gender, patriarchy, and capitalism. Neon-noirs present an existential exploration of society in a hyper-technological and globalized world. Illustrating society as decadent and consumerist, and identity as confused and anxious, neon-noirs reposition the contemporary noir in the setting of urban decay, often featuring scenes set in underground city haunts: brothels, nightclubs, casinos, strip bars, pawnshops, laundromats.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 52,
"text": "Neon-noirs were popularized in the '70s and '80s by films such as Taxi Driver (1976), Blade Runner (1982), and films from David Lynch, such as Blue Velvet (1986) and later, Lost Highway (1997). Other titles from this era included Brian De Palma's Blow Out (1981) and the Coen Brothers' debut Blood Simple (1984). More currently, films such as Harmony Korine’s highly provocative Spring Breakers (2012), and Danny Boyle’s Trance (2013) have been especially noted for their neon-infused rendering of film noir; while Trance was celebrated for ‘shak(ing) the ingredients (of the noir) like colored sand in a jar’, Spring Breakers notoriously produced a slew of criticism referring to its ‘fever-dream’ aesthetic and ‘neon-caked explosion of excess’ (Kohn). Another neon-noir endowed with the 'fever-dream' aesthetic is The Persian Connection, expressly linked to Lynchian aesthetics as a neon-drenched contemporary noir.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 53,
"text": "Neon-noir can be seen as a response to the over-use of the term neo-noir. While the term neo-noir functions to bring noir into the contemporary landscape, it has often been criticized for its dilution of the noir genre. Author Robert Arnett commented on its \"amorphous\" reach: \"any film featuring a detective or crime qualifies\". The neon-noir, more specifically, seeks to revive noir sensibilities in a more targeted manner of reference, focalizing socio-cultural commentary and a hyper-stylized aesthetic.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 54,
"text": "The Coen brothers make reference to the noir tradition again with The Man Who Wasn't There (2001); a black-and-white crime melodrama set in 1949; it features a scene apparently staged to mirror one from Out of the Past. Lynch's Mulholland Drive (2001) continued in his characteristic vein, making the classic noir setting of Los Angeles the venue for a noir-inflected psychological jigsaw puzzle. British-born director Christopher Nolan's black-and-white debut, Following (1998), was an overt homage to classic noir. During the new century's first decade, he was one of the leading Hollywood directors of neo-noir with the acclaimed Memento (2000) and the remake of Insomnia (2002).",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 55,
"text": "Director Sean Penn's The Pledge (2001), though adapted from a very self-reflexive novel by Friedrich Dürrenmatt, plays noir comparatively straight, to devastating effect. Screenwriter David Ayer updated the classic noir bad-cop tale, typified by Shield for Murder (1954) and Rogue Cop (1954), with his scripts for Training Day (2001) and, adapting a story by James Ellroy, Dark Blue (2002); he later wrote and directed the even darker Harsh Times (2006). Michael Mann's Collateral (2004) features a performance by Tom Cruise as an assassin in the lineage of Le Samouraï. The torments of The Machinist (2004), directed by Brad Anderson, evoke both Fight Club and Memento. In 2005, Shane Black directed Kiss Kiss Bang Bang, basing his screenplay in part on a crime novel by Brett Halliday, who published his first stories back in the 1920s. The film plays with an awareness not only of classic noir but also of neo-noir reflexivity itself.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 56,
"text": "With ultra-violent films such as Sympathy for Mr. Vengeance (2002) and Thirst (2009), Park Chan-wook of South Korea has been the most prominent director outside of the United States to work regularly in a noir mode in the new millennium. The most commercially successful neo-noir of this period has been Sin City (2005), directed by Robert Rodriguez in extravagantly stylized black and white with splashes of color. The film is based on a series of comic books created by Frank Miller (credited as the film's codirector), which are in turn openly indebted to the works of Spillane and other pulp mystery authors. Similarly, graphic novels provide the basis for Road to Perdition (2002), directed by Sam Mendes, and A History of Violence (2005), directed by David Cronenberg; the latter was voted best film of the year in the annual Village Voice poll. Writer-director Rian Johnson's Brick (2005), featuring present-day high schoolers speaking a version of 1930s hardboiled argot, won the Special Jury Prize for Originality of Vision at the Sundance Film Festival. The television series Veronica Mars (2004–07) and the movie Veronica Mars (2014) also brought a youth-oriented twist to film noir. Examples of this sort of generic crossover have been dubbed \"teen noir\".",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 57,
"text": "Neo-noir films released in the 2010s include Kim Jee-woon’s I Saw the Devil (2010), Fred Cavaye’s Point Blank (2010), Na Hong-jin’s The Yellow Sea (2010), Nicolas Winding Refn’s Drive (2011), Claire Denis' Bastards (2013) and Dan Gilroy's Nightcrawler (2014).",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 58,
"text": "The Science Channel broadcast the 2021 science documentary series Killers of the Cosmos in a format it describes as \"space noir.\" In the series, actor Aidan Gillen in animated form serves as the host of the series while portraying a private investigator who takes on \"cases\" in which he \"hunts down\" lethal threats to humanity posed by the cosmos. The animated sequences combine the characteristics of film noir with those of a pulp fiction graphic novel set in the mid-20th century, and they link conventional live-action documentary segments in which experts describe the potentially deadly phenomena.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 59,
"text": "In the post-classic era, a significant trend in noir crossovers has involved science fiction. In Jean-Luc Godard's Alphaville (1965), Lemmy Caution is the name of the old-school private eye in the city of tomorrow. The Groundstar Conspiracy (1972) centers on another implacable investigator and an amnesiac named Welles. Soylent Green (1973), the first major American example, portrays a dystopian, near-future world via a noir detection plot; starring Charlton Heston (the lead in Touch of Evil), it also features classic noir standbys Joseph Cotten, Edward G. Robinson, and Whit Bissell. The film was directed by Richard Fleischer, who two decades before had directed several strong B noirs, including Armored Car Robbery (1950) and The Narrow Margin (1952).",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 60,
"text": "The cynical and stylized perspective of classic film noir had a formative effect on the cyberpunk genre of science fiction that emerged in the early 1980s; the film most directly influential on cyberpunk was Blade Runner (1982), directed by Ridley Scott, which pays evocative homage to the classic noir mode (Scott subsequently directed the poignant 1987 noir crime melodrama Someone to Watch Over Me). Scholar Jamaluddin Bin Aziz has observed how \"the shadow of Philip Marlowe lingers on\" in such other \"future noir\" films as 12 Monkeys (1995), Dark City (1998) and Minority Report (2002). Fincher's feature debut was Alien 3 (1992), which evoked the classic noir jail film Brute Force.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 61,
"text": "David Cronenberg's Crash (1996), an adaptation of the speculative novel by J. G. Ballard, has been described as a \"film noir in bruise tones\". The hero is the target of investigation in Gattaca (1997), which fuses film noir motifs with a scenario indebted to Brave New World. The Thirteenth Floor (1999), like Blade Runner, is an explicit homage to classic noir, in this case involving speculations about virtual reality. Science fiction, noir, and anime are brought together in the Japanese films of 90s Ghost in the Shell (1995) and Ghost in the Shell 2: Innocence (2004), both directed by Mamoru Oshii. The Animatrix (2003), based on and set within the world of The Matrix film trilogy, contains an anime short film in classic noir style titled \"A Detective Story\". Anime television series with science fiction noir themes include Noir (2001) and Cowboy Bebop (1998).",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 62,
"text": "The 2015 film Ex Machina puts an understated film noir spin on the Frankenstein mythos, with the sentient android Ava as a potential femme fatale, her creator Nathan embodying the abusive husband or father trope, and her would-be rescuer Caleb as a \"clueless drifter\" enthralled by Ava.",
"title": "Neo-noir and echoes of the classic mode"
},
{
"paragraph_id": 63,
"text": "Film noir has been parodied many times in many manners. In 1945, Danny Kaye starred in what appears to be the first intentional film noir parody, Wonder Man. That same year, Deanna Durbin was the singing lead in the comedic noir Lady on a Train, which makes fun of Woolrich-brand wistful miserablism. Bob Hope inaugurated the private-eye noir parody with My Favorite Brunette (1947), playing a baby-photographer who is mistaken for an ironfisted detective. In 1947 as well, The Bowery Boys appeared in Hard Boiled Mahoney, which had a similar mistaken-identity plot; they spoofed the genre once more in Private Eyes (1953). Two RKO productions starring Robert Mitchum take film noir over the border into self-parody: The Big Steal (1949), directed by Don Siegel, and His Kind of Woman (1951). The \"Girl Hunt\" ballet in Vincente Minnelli's The Band Wagon (1953) is a ten-minute distillation of—and play on—noir in dance. The Cheap Detective (1978), starring Peter Falk, is a broad spoof of several films, including the Bogart classics The Maltese Falcon and Casablanca. Carl Reiner's black-and-white Dead Men Don't Wear Plaid (1982) appropriates clips of classic noirs for a farcical pastiche, while his Fatal Instinct (1993) sends up noir classic (Double Indemnity) and neo-noir (Basic Instinct). Robert Zemeckis's Who Framed Roger Rabbit (1988) develops a noir plot set in 1940s Los Angeles around a host of cartoon characters.",
"title": "Parodies"
},
{
"paragraph_id": 64,
"text": "Noir parodies come in darker tones as well. Murder by Contract (1958), directed by Irving Lerner, is a deadpan joke on noir, with a denouement as bleak as any of the films it kids. An ultra-low-budget Columbia Pictures production, it may qualify as the first intentional example of what is now called a neo-noir film; it was likely a source of inspiration for both Melville's Le Samouraï and Scorsese's Taxi Driver. Belying its parodic strain, The Long Goodbye's final act is seriously grave. Taxi Driver caustically deconstructs the \"dark\" crime film, taking it to an absurd extreme and then offering a conclusion that manages to mock every possible anticipated ending—triumphant, tragic, artfully ambivalent—while being each, all at once. Flirting with splatter status even more brazenly, the Coens' Blood Simple is both an exacting pastiche and a gross exaggeration of classic noir. Adapted by director Robinson Devor from a novel by Charles Willeford, The Woman Chaser (1999) sends up not just the noir mode but the entire Hollywood filmmaking process, with each shot seemingly staged as the visual equivalent of an acerbic Marlowe wisecrack.",
"title": "Parodies"
},
{
"paragraph_id": 65,
"text": "In other media, the television series Sledge Hammer! (1986–88) lampoons noir, along with such topics as capital punishment, gun fetishism, and Dirty Harry. Sesame Street (1969–curr.) occasionally casts Kermit the Frog as a private eye; the sketches refer to some of the typical motifs of noir films, in particular the voiceover. Garrison Keillor's radio program A Prairie Home Companion features the recurring character Guy Noir, a hardboiled detective whose adventures always wander into farce (Guy also appears in the Altman-directed film based on Keillor's show). Firesign Theatre's Nick Danger has trodden the same not-so-mean streets, both on radio and in comedy albums. Cartoons such as Garfield's Babes and Bullets (1989) and comic strip characters such as Tracer Bullet of Calvin and Hobbes have parodied both film noir and the kindred hardboiled tradition—one of the sources from which film noir sprang and which it now overshadows. It’s Always Sunny in Philadelphia parodied the noir genre in its season 14 episode \"The Janitor Always Mops Twice.\"",
"title": "Parodies"
},
{
"paragraph_id": 66,
"text": "In their original 1955 canon of film noir, Raymond Borde and Etienne Chaumeton identified twenty-two Hollywood films released between 1941 and 1952 as core examples; they listed another fifty-nine American films from the period as significantly related to the field of noir. A half-century later, film historians and critics had come to agree on a canon of approximately three hundred films from 1940 to 1958. There remain, however, many differences of opinion over whether other films of the era, among them a number of well-known ones, qualify as films noir or not. For instance, The Night of the Hunter (1955), starring Robert Mitchum in an acclaimed performance, is treated as a film noir by some critics, but not by others. Some critics include Suspicion (1941), directed by Alfred Hitchcock, in their catalogues of noir; others ignore it. Concerning films made either before or after the classic period, or outside of the United States at any time, consensus is even rarer.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 67,
"text": "To support their categorization of certain films as noirs and their rejection of others, many critics refer to a set of elements they see as marking examples of the mode. The question of what constitutes the set of noir's identifying characteristics is a fundamental source of controversy. For instance, critics tend to define the model film noir as having a tragic or bleak conclusion, but many acknowledged classics of the genre have clearly happy endings (e.g., Stranger on the Third Floor, The Big Sleep, Dark Passage, and The Dark Corner), while the tone of many other noir denouements is ambivalent. Some critics perceive classic noir's hallmark as a distinctive visual style. Others, observing that there is actually considerable stylistic variety among noirs, instead emphasize plot and character type. Still others focus on mood and attitude. No survey of classic noir's identifying characteristics can therefore be considered definitive. In the 1990s and 2000s, critics have increasingly turned their attention to that diverse field of films called neo-noir; once again, there is even less consensus about the defining attributes of such films made outside the classic period. Roger Ebert offered \"A Guide to Film Noir\", writing that \"Film noir is...",
"title": "Identifying characteristics"
},
{
"paragraph_id": 68,
"text": "The low-key lighting schemes of many classic films noir are associated with stark light/dark contrasts and dramatic shadow patterning—a style known as chiaroscuro (a term adopted from Renaissance painting). The shadows of Venetian blinds or banister rods, cast upon an actor, a wall, or an entire set, are an iconic visual in noir and had already become a cliché well before the neo-noir era. Characters' faces may be partially or wholly obscured by darkness—a relative rarity in conventional Hollywood filmmaking. While black-and-white cinematography is considered by many to be one of the essential attributes of classic noir, the color films Leave Her to Heaven (1945) and Niagara (1953) are routinely included in noir filmographies, while Slightly Scarlet (1956), Party Girl (1958), and Vertigo (1958) are classified as noir by varying numbers of critics.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 69,
"text": "Film noir is also known for its use of low-angle, wide-angle, and skewed, or Dutch angle shots. Other devices of disorientation relatively common in film noir include shots of people reflected in one or more mirrors, shots through curved or frosted glass or other distorting objects (such as during the strangulation scene in Strangers on a Train), and special effects sequences of a sometimes bizarre nature. Night-for-night shooting, as opposed to the Hollywood norm of day-for-night, was often employed. From the mid-1940s forward, location shooting became increasingly frequent in noir.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 70,
"text": "In an analysis of the visual approach of Kiss Me Deadly, a late and self-consciously stylized example of classic noir, critic Alain Silver describes how cinematographic choices emphasize the story's themes and mood. In one scene, the characters, seen through a \"confusion of angular shapes\", thus appear \"caught in a tangible vortex or enclosed in a trap.\" Silver makes a case for how \"side light is used ... to reflect character ambivalence\", while shots of characters in which they are lit from below \"conform to a convention of visual expression which associates shadows cast upward of the face with the unnatural and ominous\".",
"title": "Identifying characteristics"
},
{
"paragraph_id": 71,
"text": "Films noir tend to have unusually convoluted story lines, frequently involving flashbacks and other editing techniques that disrupt and sometimes obscure the narrative sequence. Framing the entire primary narrative as a flashback is also a standard device. Voiceover narration, sometimes used as a structuring device, came to be seen as a noir hallmark; while classic noir is generally associated with first-person narration (i.e., by the protagonist), Stephen Neale notes that third-person narration is common among noirs of the semidocumentary style. Neo-noirs as varied as The Element of Crime (surrealist), After Dark, My Sweet (retro), and Kiss Kiss Bang Bang (meta) have employed the flashback/voiceover combination.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 72,
"text": "Bold experiments in cinematic storytelling were sometimes attempted during the classic era: Lady in the Lake, for example, is shot entirely from the point of view of protagonist Philip Marlowe; the face of star (and director) Robert Montgomery is seen only in mirrors. The Chase (1946) takes oneirism and fatalism as the basis for its fantastical narrative system, redolent of certain horror stories, but with little precedent in the context of a putatively realistic genre. In their different ways, both Sunset Boulevard and D.O.A. are tales told by dead men. Latter-day noir has been in the forefront of structural experimentation in popular cinema, as exemplified by such films as Pulp Fiction, Fight Club, and Memento.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 73,
"text": "Crime, usually murder, is an element of almost all films noir; in addition to standard-issue greed, jealousy is frequently the criminal motivation. A crime investigation—by a private eye, a police detective (sometimes acting alone), or a concerned amateur—is the most prevalent, but far from dominant, basic plot. In other common plots the protagonists are implicated in heists or con games, or in murderous conspiracies often involving adulterous affairs. False suspicions and accusations of crime are frequent plot elements, as are betrayals and double-crosses. According to J. David Slocum, \"protagonists assume the literal identities of dead men in nearly fifteen percent of all noir.\" Amnesia is fairly epidemic—\"noir's version of the common cold\", in the words of film historian Lee Server.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 74,
"text": "Films noir tend to revolve around heroes who are more flawed and morally questionable than the norm, often fall guys of one sort or another. The characteristic protagonists of noir are described by many critics as \"alienated\"; in the words of Silver and Ward, \"filled with existential bitterness\". Certain archetypal characters appear in many film noirs—hardboiled detectives, femme fatales, corrupt policemen, jealous husbands, intrepid claims adjusters, and down-and-out writers. Among characters of every stripe, cigarette smoking is rampant. From historical commentators to neo-noir pictures to pop culture ephemera, the private eye and the femme fatale have been adopted as the quintessential film noir figures, though they do not appear in most films now regarded as classic noir. Of the twenty-six National Film Registry noirs, in only four does the star play a private eye: The Maltese Falcon, The Big Sleep, Out of the Past, and Kiss Me Deadly. Just four others readily qualify as detective stories: Laura, The Killers, The Naked City, and Touch of Evil. There is usually an element of drug or alcohol use, particularly as part of the detective's method to solving the crime, as an example the character of Mike Hammer in the 1955 film Kiss Me Deadly who walks into a bar saying \"Give me a double bourbon, and leave the bottle\". Chaumeton and Borde have argued that film noir grew out of the \"literature of drugs and alcohol\".",
"title": "Identifying characteristics"
},
{
"paragraph_id": 75,
"text": "Film noir is often associated with an urban setting, and a few cities—Los Angeles, San Francisco, New York, and Chicago, in particular—are the location of many of the classic films. In the eyes of many critics, the city is presented in noir as a \"labyrinth\" or \"maze\". Bars, lounges, nightclubs, and gambling dens are frequently the scene of action. The climaxes of a substantial number of film noirs take place in visually complex, often industrial settings, such as refineries, factories, trainyards, power plants—most famously the explosive conclusion of White Heat, set at a chemical plant. In the popular (and, frequently enough, critical) imagination, in noir it is always night and it always raining.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 76,
"text": "A substantial trend within latter-day noir—dubbed \"film soleil\" by critic D. K. Holm—heads in precisely the opposite direction, with tales of deception, seduction, and corruption exploiting bright, sun-baked settings, stereotypically the desert or open water, to searing effect. Significant predecessors from the classic and early post-classic eras include The Lady from Shanghai; the Robert Ryan vehicle Inferno (1953); the French adaptation of Patricia Highsmith's The Talented Mr. Ripley, Plein soleil (Purple Noon in the United States, more accurately rendered elsewhere as Blazing Sun or Full Sun; 1960); and director Don Siegel's version of The Killers (1964). The tendency was at its peak during the late 1980s and 1990s, with films such as Dead Calm (1989), After Dark, My Sweet (1990), The Hot Spot (1990), Delusion (1991), Red Rock West (1993) and the television series Miami Vice.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 77,
"text": "Film noir is often described as essentially pessimistic. The noir stories that are regarded as most characteristic tell of people trapped in unwanted situations (which, in general, they did not cause but are responsible for exacerbating), striving against random, uncaring fate, and are frequently doomed. The films are seen as depicting a world that is inherently corrupt. Classic film noir has been associated by many critics with the American social landscape of the era—in particular, with a sense of heightened anxiety and alienation that is said to have followed World War II. In author Nicholas Christopher's opinion, \"it is as if the war, and the social eruptions in its aftermath, unleashed demons that had been bottled up in the national psyche.\" Films noir, especially those of the 1950s and the height of the Red Scare, are often said to reflect cultural paranoia; Kiss Me Deadly is the noir most frequently marshaled as evidence for this claim.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 78,
"text": "Film noir is often said to be defined by \"moral ambiguity\", yet the Production Code obliged almost all classic noirs to see that steadfast virtue was ultimately rewarded and vice, in the absence of shame and redemption, severely punished (however dramatically incredible the final rendering of mandatory justice might be). A substantial number of latter-day noirs flout such conventions: vice emerges triumphant in films as varied as the grim Chinatown and the ribald Hot Spot.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 79,
"text": "The tone of film noir is generally regarded as downbeat; some critics experience it as darker still—\"overwhelmingly black\", according to Robert Ottoson. Influential critic (and filmmaker) Paul Schrader wrote in a seminal 1972 essay that \"film noir is defined by tone\", a tone he seems to perceive as \"hopeless\". In describing the adaptation of Double Indemnity, noir analyst Foster Hirsch describes the \"requisite hopeless tone\" achieved by the filmmakers, which appears to characterize his view of noir as a whole. On the other hand, definitive film noirs such as The Big Sleep, The Lady from Shanghai, Scarlet Street and Double Indemnity itself are famed for their hardboiled repartee, often imbued with sexual innuendo and self-reflexive humor.",
"title": "Identifying characteristics"
},
{
"paragraph_id": 80,
"text": "The music of film noir was typically orchestral, per the Hollywood norm, but often with added dissonance. Many of the prime composers, like the directors and cameramen, were European émigrés, e.g., Max Steiner (The Big Sleep, Mildred Pierce), Miklós Rózsa (Double Indemnity, The Killers, Criss Cross), and Franz Waxman (Fury, Sunset Boulevard, Night and the City). Double Indemnity is a seminal score, initially disliked by Paramount's music director for its harshness but strongly endorsed by director Billy Wilder and studio chief Buddy DeSylva. There is a widespread popular impression that \"sleazy\" jazz saxophone and pizzicato bass constitute the sound of noir, but those characteristics arose much later, as in the late-1950s music of Henry Mancini for Touch of Evil and television's Peter Gunn. Bernard Herrmann's score for Taxi Driver makes heavy use of saxophone.",
"title": "Identifying characteristics"
}
]
| Film noir is a cinematic term used primarily to describe stylized Hollywood crime dramas, particularly those that emphasize cynical attitudes and motivations. The 1940s and 1950s are generally regarded as the "classic period" of American film noir. Film noir of this era is associated with a low-key, black-and-white visual style that has roots in German Expressionist cinematography. Many of the prototypical stories and much of the attitude of classic noir derive from the hardboiled school of crime fiction that emerged in the United States during the Great Depression. The term film noir, French for 'black film' (literal) or 'dark film', was first applied to Hollywood films by French critic Nino Frank in 1946, but was unrecognized by most American film industry professionals of that era. Frank is believed to have been inspired by the French literary publishing imprint Série noire, founded in 1945. Cinema historians and critics defined the category retrospectively. Before the notion was widely adopted in the 1970s, many of the classic films noir were referred to as "melodramas". Whether film noir qualifies as a distinct genre or whether it is more of a filmmaking style is a matter of ongoing and heavy debate among scholars. Film noir encompasses a range of plots: the central figure may be a private investigator, a plainclothes police officer, an aging boxer, a hapless grifter, a law-abiding citizen lured into a life of crime, a femme fatale (Gilda) or simply a victim of circumstance (D.O.A.). Although film noir was originally associated with American productions, the term has been used to describe films from around the world. Many films released from the 1960s onward share attributes with films noir of the classical period, and often treat its conventions self-referentially. Some refer to such latter-day works as neo-noir. The clichés of film noir have inspired parody since the mid-1940s. | 2001-05-24T21:01:35Z | 2023-12-29T18:15:06Z | [
"Template:ISBN",
"Template:Short description",
"Template:Inflation",
"Template:Clear",
"Template:'",
"Template:Reflist",
"Template:Quote box",
"Template:See also",
"Template:Cite book",
"Template:Cite magazine",
"Template:Citation",
"Template:Crime fiction",
"Template:--",
"Template:POV-inline",
"Template:Refend",
"Template:Webarchive",
"Template:IPAc-en",
"Template:Refbegin",
"Template:Cite web",
"Template:Cite news",
"Template:Spoken Wikipedia",
"Template:Film genres",
"Template:Authority control",
"Template:Redirect",
"Template:Infobox film movement",
"Template:Flatlist",
"Template:Listen",
"Template:Citation needed",
"Template:IPA-fr",
"Template:Ref label",
"Template:Note label",
"Template:Cite journal",
"Template:Commonscatinline"
]
| https://en.wikipedia.org/wiki/Film_noir |
10,803 | Finno-Ugric languages | Finno-Ugric (/ˌfɪnoʊˈjuːɡrɪk/ or /ˌfɪnoʊˈuːɡrɪk/; Fenno-Ugric) or Finno-Ugrian (Fenno-Ugrian) is a traditional grouping of all languages in the Uralic language family except the Samoyedic languages. Its formerly commonly accepted status as a subfamily of Uralic is based on criteria formulated in the 19th century and is criticized by some contemporary linguists such as Tapani Salminen and Ante Aikio as inaccurate and misleading. The three most-spoken Uralic languages, Hungarian, Finnish, and Estonian, are all included in Finno-Ugric, although linguistic roots common to both branches of the traditional Finno-Ugric language tree (Finno-Permic and Ugric) are distant.
The term Finno-Ugric, which originally referred to the entire family, is sometimes used as a synonym for the term Uralic, which includes the Samoyedic languages, as commonly happens when a language family is expanded with further discoveries.
The validity of Finno-Ugric as a phylogenic grouping is under challenge, with some linguists maintaining that the Finno-Permic languages are as distinct from the Ugric languages as they are from the Samoyedic languages spoken in Siberia, or even that none of the Finno-Ugric, Finno-Permic, or Ugric branches has been established. Received opinion is that the easternmost (and last-discovered) Samoyed had separated first and the branching into Ugric and Finno-Permic took place later, but this reconstruction does not have strong support in the linguistic data.
Attempts at reconstructing a Proto-Finno-Ugric proto-language, a common ancestor of all Uralic languages except for the Samoyedic languages, are largely indistinguishable from Proto-Uralic, suggesting that Finno-Ugric might not be a historical grouping but a geographical one, with Samoyedic being distinct by lexical borrowing rather than actually being historically divergent. It has been proposed that the area in which Proto-Finno-Ugric was spoken reached between the Baltic Sea and the Ural Mountains.
Traditionally, the main set of evidence for the genetic proposal of Proto-Finno-Ugric has come from vocabulary. A large amount of vocabulary (e.g. the numerals "one", "three", "four" and "six"; the body-part terms "hand", "head") is only reconstructed up to the Proto-Finno-Ugric level, and only words with a Samoyedic equivalent have been reconstructed for Proto-Uralic. That methodology has been criticised, as no coherent explanation other than inheritance has been presented for the origin of most of the Finno-Ugric vocabulary (though a small number has been explained as old loanwords from Proto-Indo-European or its immediate successors).
The Samoyedic group has undergone a longer period of independent development, and its divergent vocabulary could be caused by mechanisms of replacement such as language contact. (The Finno-Ugric group is usually dated to approximately 4,000 years ago, the Samoyedic a little over 2,000.) Proponents of the traditional binary division note, however, that the invocation of extensive contact influence on vocabulary is at odds with the grammatical conservatism of Samoyedic.
The consonant *š (voiceless postalveolar fricative, [ʃ]) has not been conclusively shown to occur in the traditional Proto-Uralic lexicon, but it is attested in some of the Proto-Finno-Ugric material. Another feature attested in the Finno-Ugric vocabulary is that *i now behaves as a neutral vowel with respect to front-back vowel harmony, and thus there are roots such as *niwa- "to remove the hair from hides".
Regular sound changes proposed for this stage are few and remain open to interpretation. Sammallahti (1988) proposes five, following Janhunen's (1981) reconstruction of Proto-Finno-Permic:
Sammallahti (1988) further reconstructs sound changes *oo, *ee → *a, *ä (merging with original *a, *ä) for the development from Proto-Finno-Ugric to Proto-Ugric. Similar sound laws are required for other languages as well. Thus, the origin and raising of long vowels may actually belong at a later stage, and the development of these words from Proto-Uralic to Proto-Ugric can be summarized as simple loss of *x (if it existed in the first place at all; vowel length only surfaces consistently in the Baltic-Finnic languages.) The proposed raising of *o has been alternatively interpreted instead as a lowering *u → *o in Samoyedic (PU *lumi → *lomə → Proto-Samoyedic *jom).
Janhunen (2007, 2009) notes a number of derivational innovations in Finno-Ugric, including *ńoma "hare" → *ńoma-la, (vs. Samoyedic *ńomå), *pexli "side" → *peel-ka → *pelka "thumb", though involving Proto-Uralic derivational elements.
The Finno-Ugric group is not typologically distinct from Uralic as a whole: the most widespread structural features among the group all extend to the Samoyedic languages as well.
Modern linguistic research has shown that the Volgaic languages form a geographical grouping rather than a linguistic one, because the Mordvinic languages are more closely related to the Finno-Lappic languages than to the Mari languages.
The relation of the Finno-Permic and the Ugric groups is adjudged remote by some scholars. On the other hand, with a projected time depth of only 3,000 to 4,000 years, the traditionally accepted Finno-Ugric grouping would be far younger than many major families such as Indo-European or Semitic, and would be about the same age as, for instance, the Eastern subfamily of Nilotic. But the grouping is far from transparent or securely established. The absence of early records is a major obstacle. As for the Finno-Ugric Urheimat, most of what has been said about it is speculation.
Some linguists criticizing the Finno-Ugric genetic proposal also question the validity of the entire Uralic family, instead proposing a Ural–Altaic hypothesis, within which they believe Finno-Permic may be as distant from Ugric as from Turkic. However, this approach has been rejected by nearly all other specialists in Uralic linguistics.
One argument in favor of the Finno-Ugric grouping has come from loanwords. Several loans from the Indo-European languages are present in most or all of the Finno-Ugric languages, while being absent from Samoyedic.
According to Häkkinen (1983) the alleged Proto-Finno-Ugric loanwords are disproportionally well-represented in Hungarian and the Permic languages, and disproportionally poorly represented in the Ob-Ugric languages; hence it is possible that such words have been acquired by the languages only after the initial dissolution of the Uralic family into individual dialects, and that the scarcity of loanwords in Samoyedic results from its peripheric location.
The number systems among the Finno-Ugric languages are particularly distinct from the Samoyedic languages: only the numerals "2", "5", and "7" have cognates in Samoyedic, while the numerals "1", "3", "4", "6", and "10" are shared by all or most Finno-Ugric languages.
Below are the numbers 1 to 10 in several Finno-Ugric languages. Forms in italic do not descend from the reconstructed forms.
The number '2' descends in Ugric from a front-vocalic variant *kektä.
The numbers '9' and '8' in Finnic through Mari are considered to be derived from the numbers '1' and '2' as '10–1' and '10–2'. One reconstruction is *yk+teksa and *kak+teksa, respectively, where *teksa (cf. deka) is an Indo-European loan; the difference between /t/ and /d/ is not phonemic, unlike in Indo-European. Another analysis is *ykt-e-ksa, *kakt-e-ksa, with *e being the negative verb.
100-word Swadesh lists for certain Finno-Ugric languages can be compared and contrasted at the Rosetta Project website: Finnish, Estonian, Hungarian, and Erzya.
The four largest ethnic groups that speak Finno-Ugric languages are the Hungarians (14.5 million), Finns (6.5 million), Estonians (1.1 million), and Mordvins (0.85 million). Majorities of three (the Hungarians, Finns, and Estonians) inhabit their respective nation states in Europe, i.e. Hungary, Finland, and Estonia, while a large minority of Mordvins inhabit the federal Mordovian Republic within Russia (Russian Federation).
The indigenous area of the Sámi people is known as Sápmi and it consists of the northern parts of the Fennoscandian Peninsula. Some other peoples that speak Finno-Ugric languages have been assigned autonomous republics within Russia. These are the Karelians (Republic of Karelia), Komi (Komi Republic), Udmurts (Udmurt Republic) and Mari (Mari El Republic). The Khanty-Mansi Autonomous Okrug was set up for the Khanty and Mansi of Russia. A once-autonomous Komi-Permyak Okrug was set up for a region of high Komi habitation outside the Komi Republic.
Some of the ethnicities speaking Finno-Ugric languages are:
In the Finno-Ugric countries of Finland, Estonia and Hungary that find themselves surrounded by speakers of unrelated tongues, language origins and language history have long been relevant to national identity. In 1992, the 1st World Congress of Finno-Ugric Peoples was organized in Syktyvkar in the Komi Republic in Russia, the 2nd World Congress in 1996 in Budapest in Hungary, the 3rd Congress in 2000 in Helsinki in Finland, the 4th Congress in 2004 in Tallinn in Estonia, the 5th Congress in 2008 in Khanty-Mansiysk in Russia, the 6th Congress in 2012 in Siófok in Hungary, the 7th Congress in 2016 in Lahti in Finland, and the 8th Congress in 2021 in Tartu in Estonia. The members of the Finno-Ugric Peoples' Consultative Committee include: the Erzyas, Estonians, Finns, Hungarians, Ingrian Finns, Ingrians, Karelians, Khants, Komis, Mansis, Maris, Mokshas, Nenetses, Permian Komis, Saamis, Tver Karelians, Udmurts, Vepsians; Observers: Livonians, Setos.
In 2007, the 1st Festival of the Finno-Ugric Peoples was hosted by President Vladimir Putin of Russia, and visited by Finnish President, Tarja Halonen, and Hungarian Prime Minister, Ferenc Gyurcsány.
The International Finno-Ugric Students' Conference (IFUSCO) is organised annually by students of Finno-Ugric languages to bring together people from all over the world who are interested in the languages and cultures. The first conference was held in 1984 in Göttingen in Germany. IFUSCO features presentations and workshops on topics such as linguistics, ethnography, history and more.
The linguistic reconstruction of the Finno-Ugric language family has led to the postulation that the ancient Proto-Finno-Ugric people were ethnically related, and that even the modern Finno-Ugric-speaking peoples are ethnically related. Such hypotheses are based on the assumption that heredity can be traced through linguistic relatedness, although it must be kept in mind that language shift and ethnic admixture, both relatively frequent occurrences in recorded history and most likely also in prehistory, confuse the picture, and that there is no straightforward relationship, if any, between linguistic and genetic affiliation. Still, the premise that the speakers of the ancient proto-language were ethnically homogeneous is generally accepted.
Modern genetic studies have shown that the Y-chromosome haplogroup N3, and sometimes N2, is almost specific though certainly not restricted to Uralic- or Finno-Ugric-speaking populations, especially as high frequency or primary paternal haplogroup. These haplogroups branched from haplogroup N, which probably spread north, then west and east from Northern China about 12,000–14,000 years before present from father haplogroup NO (haplogroup O being the most common Y-chromosome haplogroup in Southeast Asia).
A study of the Finno-Ugric-speaking peoples of northern Eurasia (i.e., excluding the Hungarians), carried out between 2002 and 2008 in the Department of Forensic Medicine at the University of Helsinki, showed that the Finno-Ugric-speaking populations do not retain genetic evidence of a common founder. Most possess an amalgamation of West and East Eurasian gene pools that may have been present in central Asia, with subsequent genetic drift and recurrent founder effects among speakers of various branches of Finno-Ugric. Not all branches show evidence of a single founder effect. North Eurasian Finno-Ugric-speaking populations were found to be genetically a heterogeneous group showing lower haplotype diversities compared to more southern populations. North Eurasian Finno-Ugric-speaking populations possess unique genetic features due to complex genetic changes shaped by molecular and population genetics and adaptation to the areas of Boreal and Arctic North Eurasia.
Notes
Further reading | [
{
"paragraph_id": 0,
"text": "Finno-Ugric (/ˌfɪnoʊˈjuːɡrɪk/ or /ˌfɪnoʊˈuːɡrɪk/; Fenno-Ugric) or Finno-Ugrian (Fenno-Ugrian) is a traditional grouping of all languages in the Uralic language family except the Samoyedic languages. Its formerly commonly accepted status as a subfamily of Uralic is based on criteria formulated in the 19th century and is criticized by some contemporary linguists such as Tapani Salminen and Ante Aikio as inaccurate and misleading. The three most-spoken Uralic languages, Hungarian, Finnish, and Estonian, are all included in Finno-Ugric, although linguistic roots common to both branches of the traditional Finno-Ugric language tree (Finno-Permic and Ugric) are distant.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The term Finno-Ugric, which originally referred to the entire family, is sometimes used as a synonym for the term Uralic, which includes the Samoyedic languages, as commonly happens when a language family is expanded with further discoveries.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The validity of Finno-Ugric as a phylogenic grouping is under challenge, with some linguists maintaining that the Finno-Permic languages are as distinct from the Ugric languages as they are from the Samoyedic languages spoken in Siberia, or even that none of the Finno-Ugric, Finno-Permic, or Ugric branches has been established. Received opinion is that the easternmost (and last-discovered) Samoyed had separated first and the branching into Ugric and Finno-Permic took place later, but this reconstruction does not have strong support in the linguistic data.",
"title": "Status"
},
{
"paragraph_id": 3,
"text": "Attempts at reconstructing a Proto-Finno-Ugric proto-language, a common ancestor of all Uralic languages except for the Samoyedic languages, are largely indistinguishable from Proto-Uralic, suggesting that Finno-Ugric might not be a historical grouping but a geographical one, with Samoyedic being distinct by lexical borrowing rather than actually being historically divergent. It has been proposed that the area in which Proto-Finno-Ugric was spoken reached between the Baltic Sea and the Ural Mountains.",
"title": "Origins"
},
{
"paragraph_id": 4,
"text": "Traditionally, the main set of evidence for the genetic proposal of Proto-Finno-Ugric has come from vocabulary. A large amount of vocabulary (e.g. the numerals \"one\", \"three\", \"four\" and \"six\"; the body-part terms \"hand\", \"head\") is only reconstructed up to the Proto-Finno-Ugric level, and only words with a Samoyedic equivalent have been reconstructed for Proto-Uralic. That methodology has been criticised, as no coherent explanation other than inheritance has been presented for the origin of most of the Finno-Ugric vocabulary (though a small number has been explained as old loanwords from Proto-Indo-European or its immediate successors).",
"title": "Origins"
},
{
"paragraph_id": 5,
"text": "The Samoyedic group has undergone a longer period of independent development, and its divergent vocabulary could be caused by mechanisms of replacement such as language contact. (The Finno-Ugric group is usually dated to approximately 4,000 years ago, the Samoyedic a little over 2,000.) Proponents of the traditional binary division note, however, that the invocation of extensive contact influence on vocabulary is at odds with the grammatical conservatism of Samoyedic.",
"title": "Origins"
},
{
"paragraph_id": 6,
"text": "The consonant *š (voiceless postalveolar fricative, [ʃ]) has not been conclusively shown to occur in the traditional Proto-Uralic lexicon, but it is attested in some of the Proto-Finno-Ugric material. Another feature attested in the Finno-Ugric vocabulary is that *i now behaves as a neutral vowel with respect to front-back vowel harmony, and thus there are roots such as *niwa- \"to remove the hair from hides\".",
"title": "Origins"
},
{
"paragraph_id": 7,
"text": "Regular sound changes proposed for this stage are few and remain open to interpretation. Sammallahti (1988) proposes five, following Janhunen's (1981) reconstruction of Proto-Finno-Permic:",
"title": "Origins"
},
{
"paragraph_id": 8,
"text": "Sammallahti (1988) further reconstructs sound changes *oo, *ee → *a, *ä (merging with original *a, *ä) for the development from Proto-Finno-Ugric to Proto-Ugric. Similar sound laws are required for other languages as well. Thus, the origin and raising of long vowels may actually belong at a later stage, and the development of these words from Proto-Uralic to Proto-Ugric can be summarized as simple loss of *x (if it existed in the first place at all; vowel length only surfaces consistently in the Baltic-Finnic languages.) The proposed raising of *o has been alternatively interpreted instead as a lowering *u → *o in Samoyedic (PU *lumi → *lomə → Proto-Samoyedic *jom).",
"title": "Origins"
},
{
"paragraph_id": 9,
"text": "Janhunen (2007, 2009) notes a number of derivational innovations in Finno-Ugric, including *ńoma \"hare\" → *ńoma-la, (vs. Samoyedic *ńomå), *pexli \"side\" → *peel-ka → *pelka \"thumb\", though involving Proto-Uralic derivational elements.",
"title": "Origins"
},
{
"paragraph_id": 10,
"text": "The Finno-Ugric group is not typologically distinct from Uralic as a whole: the most widespread structural features among the group all extend to the Samoyedic languages as well.",
"title": "Structural features"
},
{
"paragraph_id": 11,
"text": "Modern linguistic research has shown that Volgaic languages is a geographical classification rather than a linguistic one, because the Mordvinic languages are more closely related to the Finno-Lappic languages than the Mari languages.",
"title": "Classification models"
},
{
"paragraph_id": 12,
"text": "The relation of the Finno-Permic and the Ugric groups is adjudged remote by some scholars. On the other hand, with a projected time depth of only 3,000 to 4,000 years, the traditionally accepted Finno-Ugric grouping would be far younger than many major families such as Indo-European or Semitic, and would be about the same age as, for instance, the Eastern subfamily of Nilotic. But the grouping is far from transparent or securely established. The absence of early records is a major obstacle. As for the Finno-Ugric Urheimat, most of what has been said about it is speculation.",
"title": "Classification models"
},
{
"paragraph_id": 13,
"text": "Some linguists criticizing the Finno-Ugric genetic proposal also question the validity of the entire Uralic family, instead proposing a Ural–Altaic hypothesis, within which they believe Finno-Permic may be as distant from Ugric as from Turkic. However, this approach has been rejected by nearly all other specialists in Uralic linguistics.",
"title": "Classification models"
},
{
"paragraph_id": 14,
"text": "One argument in favor of the Finno-Ugric grouping has come from loanwords. Several loans from the Indo-European languages are present in most or all of the Finno-Ugric languages, while being absent from Samoyedic.",
"title": "Common vocabulary"
},
{
"paragraph_id": 15,
"text": "According to Häkkinen (1983) the alleged Proto-Finno-Ugric loanwords are disproportionally well-represented in Hungarian and the Permic languages, and disproportionally poorly represented in the Ob-Ugric languages; hence it is possible that such words have been acquired by the languages only after the initial dissolution of the Uralic family into individual dialects, and that the scarcity of loanwords in Samoyedic results from its peripheric location.",
"title": "Common vocabulary"
},
{
"paragraph_id": 16,
"text": "The number systems among the Finno-Ugric languages are particularly distinct from the Samoyedic languages: only the numerals \"2\", \"5\", and \"7\" have cognates in Samoyedic, while also the numerals, \"1\", \"3\", \"4\", \"6\", \"10\" are shared by all or most Finno-Ugric languages.",
"title": "Common vocabulary"
},
{
"paragraph_id": 17,
"text": "Below are the numbers 1 to 10 in several Finno-Ugric languages. Forms in italic do not descend from the reconstructed forms.",
"title": "Common vocabulary"
},
{
"paragraph_id": 18,
"text": "The number '2' descends in Ugric from a front-vocalic variant *kektä.",
"title": "Common vocabulary"
},
{
"paragraph_id": 19,
"text": "The numbers '9' and '8' in Finnic through Mari are considered to be derived from the numbers '1' and '2' as '10–1' and '10–2'. One reconstruction is *yk+teksa and *kak+teksa, respectively, where *teksa cf. deka is an Indo-European loan; the difference between /t/ and /d/ is not phonemic, unlike in Indo-European. Another analysis is *ykt-e-ksa, *kakt-e-ksa, with *e being the negative verb.",
"title": "Common vocabulary"
},
{
"paragraph_id": 20,
"text": "100-word Swadesh lists for certain Finno-Ugric languages can be compared and contrasted at the Rosetta Project website: Finnish, Estonian, Hungarian, and Erzya.",
"title": "Common vocabulary"
},
{
"paragraph_id": 21,
"text": "The four largest ethnic groups that speak Finno-Ugric languages are the Hungarians (14.5 million), Finns (6.5 million), Estonians (1.1 million), and Mordvins (0.85 million). Majorities of three (the Hungarians, Finns, and Estonians) inhabit their respective nation states in Europe, i.e. Hungary, Finland, and Estonia, while a large minority of Mordvins inhabit the federal Mordovian Republic within Russia (Russian Federation).",
"title": "Speakers"
},
{
"paragraph_id": 22,
"text": "The indigenous area of the Sámi people is known as Sápmi and it consists of the northern parts of the Fennoscandian Peninsula. Some other peoples that speak Finno-Ugric languages have been assigned autonomous republics within Russia. These are the Karelians (Republic of Karelia), Komi (Komi Republic), Udmurts (Udmurt Republic) and Mari (Mari El Republic). The Khanty-Mansi Autonomous Okrug was set up for the Khanty and Mansi of Russia. A once-autonomous Komi-Permyak Okrug was set up for a region of high Komi habitation outside the Komi Republic.",
"title": "Speakers"
},
{
"paragraph_id": 23,
"text": "Some of the ethnicities speaking Finno-Ugric languages are:",
"title": "Speakers"
},
{
"paragraph_id": 24,
"text": "In the Finno-Ugric countries of Finland, Estonia and Hungary that find themselves surrounded by speakers of unrelated tongues, language origins and language history have long been relevant to national identity. In 1992, the 1st World Congress of Finno-Ugric Peoples was organized in Syktyvkar in the Komi Republic in Russia, the 2nd World Congress in 1996 in Budapest in Hungary, the 3rd Congress in 2000 in Helsinki in Finland, the 4th Congress in 2004 in Tallinn in Estonia, the 5th Congress in 2008 in Khanty-Mansiysk in Russia, the 6th Congress in 2012 in Siófok in Hungary, the 7th Congress in 2016 in Lahti in Finland, and the 8th Congress in 2021 in Tartu in Estonia. The members of the Finno-Ugric Peoples' Consultative Committee include: the Erzyas, Estonians, Finns, Hungarians, Ingrian Finns, Ingrians, Karelians, Khants, Komis, Mansis, Maris, Mokshas, Nenetses, Permian Komis, Saamis, Tver Karelians, Udmurts, Vepsians; Observers: Livonians, Setos.",
"title": "Speakers"
},
{
"paragraph_id": 25,
"text": "In 2007, the 1st Festival of the Finno-Ugric Peoples was hosted by President Vladimir Putin of Russia, and visited by Finnish President, Tarja Halonen, and Hungarian Prime Minister, Ferenc Gyurcsány.",
"title": "Speakers"
},
{
"paragraph_id": 26,
"text": "The International Finno-Ugric Students' Conference (IFUSCO) is organised annually by students of Finno-Ugric languages to bring together people from all over the world who are interested in the languages and cultures. The first conference was held in 1984 in Göttingen in Germany. IFUSCO features presentations and workshops on topics such as linguistics, ethnography, history and more.",
"title": "Speakers"
},
{
"paragraph_id": 27,
"text": "The linguistic reconstruction of the Finno-Ugric language family has led to the postulation that the ancient Proto-Finno-Ugric people were ethnically related, and that even the modern Finno-Ugric-speaking peoples are ethnically related. Such hypotheses are based on the assumption that heredity can be traced through linguistic relatedness, although it must be kept in mind that language shift and ethnic admixture, a relatively frequent and common occurrence both in recorded history and most likely also in prehistory, confuses the picture and there is no straightforward relationship, if at all, between linguistic and genetic affiliation. Still, the premise that the speakers of the ancient proto-language were ethnically homogeneous is generally accepted.",
"title": "Speakers"
},
{
"paragraph_id": 28,
"text": "Modern genetic studies have shown that the Y-chromosome haplogroup N3, and sometimes N2, is almost specific though certainly not restricted to Uralic- or Finno-Ugric-speaking populations, especially as high frequency or primary paternal haplogroup. These haplogroups branched from haplogroup N, which probably spread north, then west and east from Northern China about 12,000–14,000 years before present from father haplogroup NO (haplogroup O being the most common Y-chromosome haplogroup in Southeast Asia).",
"title": "Speakers"
},
{
"paragraph_id": 29,
"text": "A study of the Finno-Ugric-speaking peoples of northern Eurasia (i.e., excluding the Hungarians), carried out between 2002 and 2008 in the Department of Forensic Medicine at the University of Helsinki, showed that the Finno-Ugric-speaking populations do not retain genetic evidence of a common founder. Most possess an amalgamation of West and East Eurasian gene pools that may have been present in central Asia, with subsequent genetic drift and recurrent founder effects among speakers of various branches of Finno-Ugric. Not all branches show evidence of a single founder effect. North Eurasian Finno-Ugric-speaking populations were found to be genetically a heterogeneous group showing lower haplotype diversities compared to more southern populations. North Eurasian Finno-Ugric-speaking populations possess unique genetic features due to complex genetic changes shaped by molecular and population genetics and adaptation to the areas of Boreal and Arctic North Eurasia.",
"title": "Speakers"
},
{
"paragraph_id": 30,
"text": "Notes",
"title": "References"
},
{
"paragraph_id": 31,
"text": "Further reading",
"title": "References"
}
]
| Finno-Ugric or Finno-Ugrian (Fenno-Ugrian) is a traditional grouping of all languages in the Uralic language family except the Samoyedic languages. Its formerly commonly accepted status as a subfamily of Uralic is based on criteria formulated in the 19th century and is criticized by some contemporary linguists such as Tapani Salminen and Ante Aikio as inaccurate and misleading. The three most-spoken Uralic languages, Hungarian, Finnish, and Estonian, are all included in Finno-Ugric, although linguistic roots common to both branches of the traditional Finno-Ugric language tree are distant. The term Finno-Ugric, which originally referred to the entire family, is sometimes used as a synonym for the term Uralic, which includes the Samoyedic languages, as commonly happens when a language family is expanded with further discoveries. | 2001-05-13T21:15:55Z | 2023-12-30T13:43:20Z | [
"Template:IPAc-en",
"Template:Cite journal",
"Template:Anchor",
"Template:Cite encyclopedia",
"Template:Cite thesis",
"Template:Cite book",
"Template:Uralic languages",
"Template:Authority control",
"Template:Use dmy dates",
"Template:IPA",
"Template:Webarchive",
"Template:Columns-list",
"Template:Colend",
"Template:Short description",
"Template:Main",
"Template:Transliteration",
"Template:Reflist",
"Template:Cite news",
"Template:Col div",
"Template:Anli",
"Template:In lang",
"Template:Cite EB1911",
"Template:Infobox language family",
"Template:Citation needed",
"Template:More citations needed",
"Template:Citation",
"Template:Lang",
"Template:Cn",
"Template:Portal",
"Template:See also",
"Template:Cite web",
"Template:ISBN"
]
| https://en.wikipedia.org/wiki/Finno-Ugric_languages |
10,804 | Finnish | Finnish may refer to: | [
{
"paragraph_id": 0,
"text": "Finnish may refer to:",
"title": ""
}
]
| Finnish may refer to: Something or someone from, or related to Finland
Culture of Finland
Finnish people or Finns, the primary ethnic group in Finland
Finnish language, the national language of the Finnish people
Finnish cuisine | 2022-08-16T20:59:19Z | [
"Template:Wiktionary",
"Template:Look from",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Finnish |
|
10,808 | Freestyle music | Freestyle, or Latin freestyle (initially called Latin hip hop) is a form of electronic dance music that emerged in the New York metropolitan area, Philadelphia, and Miami, primarily among Hispanic Americans and Italian Americans in the 1980s. It experienced its greatest popularity from the late 1980s until the early 1990s. A common theme of freestyle lyricism originated as heartbreak in an urban environment typified by New York City.
An important precursor to freestyle is 1982's "Planet Rock" by Afrika Bambaataa & Soul Sonic Force. Shannon's 1983 hit "Let the Music Play" is often considered the first freestyle song and the first major song recorded by a Latin American artist is "Please Don't Go" by Nayobe from 1984. From there, freestyle gained a large presence in American clubs, especially in New York and Miami. Radio airplay followed in the mid 1980s.
Performers such as Exposé, Lisa Lisa and Cult Jam, Stevie B and Sweet Sensation gained mainstream chart success with the genre in the late 1980s and early 1990s, but its popularity largely faded by the end of the decade. Both classic and newer freestyle output remain popular as a niche genre in Brazil (where it is an influence on funk carioca), Germany and Canada.
Freestyle music developed in the early 1980s, primarily simultaneously in the Hispanic (mainly Puerto Rican) communities of Upper Manhattan and The Bronx and in the Italian-American communities in Brooklyn, the Bronx, and other boroughs of New York City, New Jersey, Westchester County and Long Island. It initially was a fusion of synthetic instrumentation and syncopated percussion of 1980s electro, as favored by fans of breakdancing. Sampling, as found in synth-pop music and hip-hop, was incorporated. Key influences include Afrika Bambaataa & Soul Sonic Force's "Planet Rock" (1982) and Shannon's "Let the Music Play" (1983), the latter was a top-ten Billboard Hot 100 hit. In 1984, a Latin presence was established when the first song recorded in the genre by a Latin American artist, "Please Don't Go", by newcomer Nayobe (a singer from Brooklyn and of Afro-Cuban descent) was recorded and released. The song became a success, reaching No. 23 on the Billboard Hot Dance Music/Club Play chart. In 1985, a Spanish version of the song was released with the title "No Te Vayas". By 1987, freestyle began getting more airplay on American pop radio stations. Songs such as "Come Go with Me" by Exposé, "Show Me" by the Cover Girls, "Fascinated" by Company B, "Silent Morning" by Noel, and "Catch Me (I'm Falling)" by Pretty Poison, brought freestyle into the mainstream. House music, based partly on disco rhythms, was by 1992 challenging the relatively upbeat, syncopated freestyle sound. Pitchfork considers the Miami Mix of ABC's single "When Smokey Sings" to be proto-freestyle, despite that version being released in 1987. Many early or popular freestyle artists and DJs, such as Jellybean, Tony Torres, Raul Soto, Roman Ricardo, Lil Suzy, and Nocera were of Hispanic or Italian ancestry, which was one reason for the style's popularity among Hispanic Americans and Italian Americans in the New York City area and Philadelphia.
Freestyle's Top 40 Radio airplay started to really take off by 1987, and it began to disappear from the airwaves in the early 1990s as radio stations moved to Top 40-only formats. Artists such as George Lamond, Exposé, Sweet Sensation, and Stevie B were still heard on mainstream radio, but other notable freestyle artists did not fare as well. Carlos Berrios and Platinum producer Frankie Cutlass used a freestyle production on "Temptation" by Corina and "Together Forever" by Lisette Melendez. The songs were released in 1991, almost simultaneously, and caused a resurgence in the style when they were embraced by Top 40 radio. "Temptation" reached the number 6 spot on the Billboard Hot 100 Chart. These hits were followed by the success of Lisa Lisa and Cult Jam, who had been one of the earliest freestyle acts. Their records were produced by Full Force, who had also worked with UTFO and James Brown.
Several primarily freestyle artists released ballads during the 1980s and early 1990s that crossed over to the pop charts and charted higher than their previous work. These include "Seasons Change" by Exposé, "Thinking of You" by Sa-Fire, "One More Try" by Timmy T, "Because I Love You (The Postman Song)" by Stevie B, and "If Wishes Came True" by Sweet Sensation. Brenda K. Starr reached the Hot 100 with her ballad "I Still Believe". Freestyle shortly thereafter gave way to mainstream pop artists such as MC Hammer, Paula Abdul, Bobby Brown, New Kids on the Block, and Milli Vanilli (with some artists utilizing elements of freestyle beginning in the 1980s) using hip hop beats and electro samples in a mainstream form with slicker production and MTV-friendly videos. These artists were successful on crossover stations as well as R&B stations, and freestyle was replaced as an underground genre by newer styles such as new jack swing, trance and Eurodance. Despite this, some freestyle acts managed to garner hits well into the 1990s, with acts such as Cynthia and Rockell scoring minor hits on the Billboard Hot 100 as late as 1998.
Freestyle remained a largely underground genre with a sizable following in New York, but has recently seen a comeback in the cities where the music originally experienced its greatest success. New York City impresario Steve Sylvester and producer Sal Abbetiello of Fever Records launched Stevie Sly's Freestyle Party show at the Manhattan live music venue, Coda on April 1, 2004. The show featured Judy Torres, Cynthia, and the Cover Girls and was attended by several celebrity guests. The Coda show was successful, and was followed by a summer 2006 Madison Square Garden concert that showcased freestyle's most successful performers. New freestyle releases are popular with enthusiasts and newcomers alike. Miami rapper Pitbull collaborated with Miami freestyle artist Stevie B to create an updated version of Stevie B's hit, "Spring Love".
Currently, freestyle music continues to have a thriving fanbase in certain parts of the country, with New York City Italian-American DJs such as Bad Boy Joe and Louie DeVito helping to maintain an active freestyle scene in the NYC metro area. In cities like New York, Miami, and Los Angeles, recent concerts by freestyle artists have been extremely successful, with many events selling out.
As Latin freestyle in the late 1980s and early 1990s gradually became superseded with house music, dance-pop, and regular hip hop on one front and Spanish-language pop music with marginal Latin freestyle influences on another, "harder strain" of house music originating in New York City was known to incorporate elements of Latin freestyle and the old school hip hop sound. Principal architects of the genre were Todd Terry (early instances include "Alright Alright," and "Dum Dum Cry") and Nitro Deluxe. Deluxe's "This Brutal House," fusing Latin percussion and the New York electro sound of Man Parrish with brash house music, proved to have an impact on the United Kingdom's club music scene, presaging the early 1990s British rave scene.
The genre was recognized as a subgenre of hip-hop in the mid-1980s. It was dominated by "hard" electro beats of the type used primarily at the time in hip-hop music. Freestyle was more appreciated in larger cities.
"Let the Music Play" by Shannon, is often named as the genre's first hit, and its sound, called "The Shannon Sound", as the foundation of the genre, although also known as the beginnings of the electro genre which then gave birth to techno. Afrika Bambaataa's "Planet Rock" was arguably the first freestyle song produced. "Let the Music Play" eventually became freestyle's biggest hit, and still receives frequent airplay. Its producers Chris Barbosa and Mark Liggett changed and redefined the electro funk sound with the addition of Latin-American rhythms and a syncopated drum-machine sound.
In March 2013, Radio City Music Hall hosted a freestyle concert. Top freestyle artists included in the line-up were TKA, Safire, Judy Torres, Cynthia, Cover Girls, Lisa Lisa, Shannon, Noel, and Lisette Melendez. Originally scheduled as a one-night event, a second night was added shortly after the first night was sold out in a matter of days.
Radio stations nationwide began to play hits by artists like TKA, Sweet Sensation, Exposé, and Sa-Fire on the same playlists as Michael Jackson and Madonna. "(You Are My) All and All" by Joyce Sims became the first freestyle record to cross over into the R&B market, and was one of the first to reach the European market. Radio station WPOW/Power 96 was noted for exposing freestyle to South Florida in the mid-'80s through the early '90s, as well as mixing in some local Miami bass into its playlist.
'Pretty Tony' Butler produced several hits on Miami's Jam-Packed Records, including Debbie Deb's "When I Hear Music" and "Lookout Weekend", and Trinere's "I'll Be All You'll Ever Need" and "They're Playing Our Song". Company B, Stevie B, Paris by Air, Linear, Will to Power and Exposé's later hits defined Miami freestyle. Tolga Katas is credited as one of the first persons to create a hit record entirely on a computer, and produced Stevie B's "Party Your Body", "In My Eyes" and "Dreamin' of Love". Katas' record label Futura Records was an incubator for artists such as Linear, who achieved international success after a move from Futura to Atlantic Records.
The groundbreaking "Nightime" by Pretty Poison featuring red headed diva Jade Starling in 1984 initially put Philadelphia on the freestyle map. Their follow-up "Catch Me I'm Falling" was a worldwide hit and brought freestyle to American Bandstand, Soul Train, Solid Gold and the Arsenio Hall Show. "Catch Me I'm Falling" broke on the street during the summer of 1987 and was the #1 single at WCAU (98 Hot Hits) and #2 at WUSL (Power 99) during the first two weeks of July. Virgin Records was quick to sign Pretty Poison helping to usher in the avalanche of other major label signings from the expanding freestyle scene.
Several freestyle acts followed on the heels of Pretty Poison emerging from the metropolitan Philadelphia, PA area in the early 1990s, benefiting from both the clubs and the overnight success of then-Dance friendly Rhythmic Top 40 WIOQ. Artists such as T.P.E. (The Philadelphia Experiment) enjoyed regional success.
Freestyle had a notable following in California, especially Los Angeles, the Central Valley, San Francisco Bay, and San Diego. California's large Latino community enjoyed the sounds of America's East Coast club scene, and a number of California artists became popular with East Coast freestyle enthusiasts. In Northern California, primarily San Francisco and San Jose, audiences leaned toward dance rhythms similar to hi-NRG, so most Californian freestyle emerged from the southern regions of the Bay Area and Los Angeles.
Timmy T, Bernadette, Caleb-B, SF Spanish Fly, Angelina, One Voice, M:G, Stephanie Fastro and The S Factor were from the Bay Area, and from San Diego were Gustavo Campain, Alex Campain, Jose (Jojo) Santos, Robert Romo of the group Internal Affairs, F. Felix, Leticia and Frankie J.
The Filipino American community in California also embraced freestyle music during the late 1980s and early 1990s. Jaya was one of the first Filipino-American freestyle singers, reaching number 44 in 1990 with "If You Leave Me Now". Later Filipino-American freestyle artists include Jocelyn Enriquez, Buffy, Korell, Damien Bautista, One Voice, Kuya, Sharyn Maceren, and others.
Freestyle's popularity spread outward from the Greater Toronto Area's Italian, Hispanic/Latino and Greek populations in the late 1980s and early 1990s. It was showcased alongside house music in various Toronto nightclubs, but by the mid-1990s was replaced almost entirely by house music.
Lil' Suzy released several 12-inch singles and performed live on the Canadian live dance music television program Electric Circus. Montreal singer Nancy Martinez's 1986 single "For Tonight" would become the first Canadian freestyle single to reach the top 40 on the Billboard Hot 100 chart, while the Montreal girl group 11:30 reached the Canadian chart with "Ole Ole" in 2000.
Performers and producers associated with the style also came from around the world, including Turkish-American Murat Konar (the writer of Information Society's "Running"), Paul Lekakis from Greece, Asian artist Leonard (Leon Youngboy) who released the song "Youngboys", and British musicians including Freeez, Paul Hardcastle, Samantha Fox, and even Robin Gibb of the Bee Gees, who also adopted the freestyle sound in his 1984 album Secret Agent, having worked with producer Chris Barbosa. Several British new wave and synthpop bands also teamed up with freestyle producers or were influenced by the genre, and released freestyle songs or remixes. These include Duran Duran whose song "Notorious" was remixed by the Latin Rascals, and whose album Big Thing contained several freestyle inspired songs such as "All She Wants Is"; New Order who teamed up with Arthur Baker, producing and co-writing the track "Confusion"; Erasure and the Der Deutsche mixes of their song "Blue Savannah"; and the Pet Shop Boys, whose song "Domino Dancing" was produced by Miami-based freestyle producer Lewis Martineé. Australian act I'm Talking utilized freestyle elements into their singles "Trust Me" and "Do You Wanna Be?", both becoming top ten hits in their native Australia. | [
{
"paragraph_id": 0,
"text": "Freestyle, or Latin freestyle (initially called Latin hip hop) is a form of electronic dance music that emerged in the New York metropolitan area, Philadelphia, and Miami, primarily among Hispanic Americans and Italian Americans in the 1980s. It experienced its greatest popularity from the late 1980s until the early 1990s. A common theme of freestyle lyricism originated as heartbreak in an urban environment typified by New York City.",
"title": ""
},
{
"paragraph_id": 1,
"text": "An important precursor to freestyle is 1982's \"Planet Rock\" by Afrika Bambaataa & Soul Sonic Force. Shannon's 1983 hit \"Let the Music Play\" is often considered the first freestyle song and the first major song recorded by a Latin American artist is \"Please Don't Go\" by Nayobe from 1984. From there, freestyle gained a large presence in American clubs, especially in New York and Miami. Radio airplay followed in the mid 1980s.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Performers such as Exposé, Lisa Lisa and Cult Jam, Stevie B and Sweet Sensation gained mainstream chart success with the genre in the late 1980s and early 1990s, but its popularity largely faded by the end of the decade. Both classic and newer freestyle output remain popular as a niche genre in Brazil (where it is an influence on funk carioca), Germany and Canada.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Freestyle music developed in the early 1980s, primarily simultaneously in the Hispanic (mainly Puerto Rican) communities of Upper Manhattan and The Bronx and in the Italian-American communities in Brooklyn, the Bronx, and other boroughs of New York City, New Jersey, Westchester County and Long Island. It initially was a fusion of synthetic instrumentation and syncopated percussion of 1980s electro, as favored by fans of breakdancing. Sampling, as found in synth-pop music and hip-hop, was incorporated. Key influences include Afrika Bambaataa & Soul Sonic Force's \"Planet Rock\" (1982) and Shannon's \"Let the Music Play\" (1983), the latter was a top-ten Billboard Hot 100 hit. In 1984, a Latin presence was established when the first song recorded in the genre by a Latin American artist, \"Please Don't Go\", by newcomer Nayobe (a singer from Brooklyn and of Afro-Cuban descent) was recorded and released. The song became a success, reaching No. 23 on the Billboard Hot Dance Music/Club Play chart. In 1985, a Spanish version of the song was released with the title \"No Te Vayas\". By 1987, freestyle began getting more airplay on American pop radio stations. Songs such as \"Come Go with Me\" by Exposé, \"Show Me\" by the Cover Girls, \"Fascinated\" by Company B, \"Silent Morning\" by Noel, and \"Catch Me (I'm Falling)\" by Pretty Poison, brought freestyle into the mainstream. House music, based partly on disco rhythms, was by 1992 challenging the relatively upbeat, syncopated freestyle sound. Pitchfork considers the Miami Mix of ABC's single \"When Smokey Sings\" to be proto-freestyle, despite that version being released in 1987. Many early or popular freestyle artists and DJs, such as Jellybean, Tony Torres, Raul Soto, Roman Ricardo, Lil Suzy, and Nocera were of Hispanic or Italian ancestry, which was one reason for the style's popularity among Hispanic Americans and Italian Americans in the New York City area and Philadelphia.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Freestyle's Top 40 Radio airplay started to really take off by 1987, and it began to disappear from the airwaves in the early 1990s as radio stations moved to Top 40-only formats. Artists such as George Lamond, Exposé, Sweet Sensation, and Stevie B were still heard on mainstream radio, but other notable freestyle artists did not fare as well. Carlos Berrios and Platinum producer Frankie Cutlass used a freestyle production on \"Temptation\" by Corina and \"Together Forever\" by Lisette Melendez. The songs were released in 1991, almost simultaneously, and caused a resurgence in the style when they were embraced by Top 40 radio. \"Temptation\" reached the number 6 spot on the Billboard Hot 100 Chart. These hits were followed by the success of Lisa Lisa and Cult Jam, who had been one of the earliest freestyle acts. Their records were produced by Full Force, who had also worked with UTFO and James Brown.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Several primarily freestyle artists released ballads during the 1980s and early 1990s that crossed over to the pop charts and charted higher than their previous work. These include \"Seasons Change\" by Exposé, \"Thinking of You\" by Sa-Fire, \"One More Try\" by Timmy T, \"Because I Love You (The Postman Song)\" by Stevie B, and \"If Wishes Came True\" by Sweet Sensation. Brenda K. Starr reached the Hot 100 with her ballad \"I Still Believe\". Freestyle shortly thereafter gave way to mainstream pop artists such as MC Hammer, Paula Abdul, Bobby Brown, New Kids on the Block, and Milli Vanilli (with some artists utilizing elements of freestyle beginning in the 1980s) using hip hop beats and electro samples in a mainstream form with slicker production and MTV-friendly videos. These artists were successful on crossover stations as well as R&B stations, and freestyle was replaced as an underground genre by newer styles such as new jack swing, trance and Eurodance. Despite this, some freestyle acts managed to garner hits well into the 1990s, with acts such as Cynthia and Rockell scoring minor hits on the Billboard Hot 100 as late as 1998.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Freestyle remained a largely underground genre with a sizable following in New York, but has recently seen a comeback in the cities where the music originally experienced its greatest success. New York City impresario Steve Sylvester and producer Sal Abbetiello of Fever Records launched Stevie Sly's Freestyle Party show at the Manhattan live music venue, Coda on April 1, 2004. The show featured Judy Torres, Cynthia, and the Cover Girls and was attended by several celebrity guests. The Coda show was successful, and was followed by a summer 2006 Madison Square Garden concert that showcased freestyle's most successful performers. New freestyle releases are popular with enthusiasts and newcomers alike. Miami rapper Pitbull collaborated with Miami freestyle artist Stevie B to create an updated version of Stevie B's hit, \"Spring Love\".",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Currently, freestyle music continues to have a thriving fanbase in certain parts of the country, with New York City Italian-American DJs such as Bad Boy Joe and Louie DeVito helping to maintain an active freestyle scene in the NYC metro area. In cities like New York, Miami, and Los Angeles, recent concerts by freestyle artists have been extremely successful, with many events selling out.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "As Latin freestyle in the late 1980s and early 1990s gradually became superseded with house music, dance-pop, and regular hip hop on one front and Spanish-language pop music with marginal Latin freestyle influences on another, \"harder strain\" of house music originating in New York City was known to incorporate elements of Latin freestyle and the old school hip hop sound. Principal architects of the genre were Todd Terry (early instances include \"Alright Alright,\" and \"Dum Dum Cry\") and Nitro Deluxe. Deluxe's \"This Brutal House,\" fusing Latin percussion and the New York electro sound of Man Parrish with brash house music, proved to have an impact on the United Kingdom's club music scene, presaging the early 1990s British rave scene.",
"title": "Influence on other genres"
},
{
"paragraph_id": 9,
"text": "The genre was recognized as a subgenre of hip-hop in the mid-1980s. It was dominated by \"hard\" electro beats of the type used primarily at the time in hip-hop music. Freestyle was more appreciated in larger cities.",
"title": "Influence on other genres"
},
{
"paragraph_id": 10,
"text": "\"Let the Music Play\" by Shannon, is often named as the genre's first hit, and its sound, called \"The Shannon Sound\", as the foundation of the genre, although also known as the beginnings of the electro genre which then gave birth to techno. Afrika Bambaataa's \"Planet Rock\" was arguably the first freestyle song produced. \"Let the Music Play\" eventually became freestyle's biggest hit, and still receives frequent airplay. Its producers Chris Barbosa and Mark Liggett changed and redefined the electro funk sound with the addition of Latin-American rhythms and a syncopated drum-machine sound.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 11,
"text": "In March 2013, Radio City Music Hall hosted a freestyle concert. Top freestyle artists included in the line-up were TKA, Safire, Judy Torres, Cynthia, Cover Girls, Lisa Lisa, Shannon, Noel, and Lisette Melendez. Originally scheduled as a one-night event, a second night was added shortly after the first night was sold out in a matter of days.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 12,
"text": "Radio stations nationwide began to play hits by artists like TKA, Sweet Sensation, Exposé, and Sa-Fire on the same playlists as Michael Jackson and Madonna. \"(You Are My) All and All\" by Joyce Sims became the first freestyle record to cross over into the R&B market, and was one of the first to reach the European market. Radio station WPOW/Power 96 was noted for exposing freestyle to South Florida in the mid-'80s through the early '90s, as well as mixing in some local Miami bass into its playlist.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 13,
"text": "'Pretty Tony' Butler produced several hits on Miami's Jam-Packed Records, including Debbie Deb's \"When I Hear Music\" and \"Lookout Weekend\", and Trinere's \"I'll Be All You'll Ever Need\" and \"They're Playing Our Song\". Company B, Stevie B, Paris by Air, Linear, Will to Power and Exposé's later hits defined Miami freestyle. Tolga Katas is credited as one of the first persons to create a hit record entirely on a computer, and produced Stevie B's \"Party Your Body\", \"In My Eyes\" and \"Dreamin' of Love\". Katas' record label Futura Records was an incubator for artists such as Linear, who achieved international success after a move from Futura to Atlantic Records.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 14,
"text": "The groundbreaking \"Nightime\" by Pretty Poison featuring red headed diva Jade Starling in 1984 initially put Philadelphia on the freestyle map. Their follow-up \"Catch Me I'm Falling\" was a worldwide hit and brought freestyle to American Bandstand, Soul Train, Solid Gold and the Arsenio Hall Show. \"Catch Me I'm Falling\" broke on the street during the summer of 1987 and was the #1 single at WCAU (98 Hot Hits) and #2 at WUSL (Power 99) during the first two weeks of July. Virgin Records was quick to sign Pretty Poison helping to usher in the avalanche of other major label signings from the expanding freestyle scene.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 15,
"text": "Several freestyle acts followed on the heels of Pretty Poison emerging from the metropolitan Philadelphia, PA area in the early 1990s, benefiting from both the clubs and the overnight success of then-Dance friendly Rhythmic Top 40 WIOQ. Artists such as T.P.E. (The Philadelphia Experiment) enjoyed regional success.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 16,
"text": "Freestyle had a notable following in California, especially Los Angeles, the Central Valley, San Francisco Bay, and San Diego. California's large Latino community enjoyed the sounds of America's East Coast club scene, and a number of California artists became popular with East Coast freestyle enthusiasts. In Northern California, primarily San Francisco and San Jose, they leaned toward a similar rhythm dance to hi-NRG, so most of the Californian freestyle emerged from the southern regions of the Bay Area and Los Angeles.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 17,
"text": "Timmy T, Bernadette, Caleb-B, SF Spanish Fly, Angelina, One Voice, M:G, Stephanie Fastro and The S Factor were from the Bay Area, and from San Diego were Gustavo Campain, Alex Campain, Jose (Jojo) Santos, Robert Romo of the group Internal Affairs, F. Felix, Leticia and Frankie J.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 18,
"text": "The Filipino American community in California also embraced freestyle music during the late 1980s and early 1990s. Jaya was one of the first Filipino-American freestyle singers, reaching number 44 in 1990 with \"If You Leave Me Now\". Later Filipino-American freestyle artists include Jocelyn Enriquez, Buffy, Korell, Damien Bautista, One Voice, Kuya, Sharyn Maceren, and others.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 19,
"text": "Freestyle's popularity spread outward from the Greater Toronto Area's Italian, Hispanic/Latino and Greek populations in the late 1980s and early 1990s. It was showcased alongside house music in various Toronto nightclubs, but by the mid-1990s was replaced almost entirely by house music.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 20,
"text": "Lil' Suzy released several 12-inch singles and performed live on the Canadian live dance music television program Electric Circus. Montreal singer Nancy Martinez's 1986 single \"For Tonight\" would become the first Canadian freestyle single to reach the top 40 on the Billboard Hot 100 chart, while the Montreal girl group 11:30 reached the Canadian chart with \"Ole Ole\" in 2000.",
"title": "Freestyle scenes"
},
{
"paragraph_id": 21,
"text": "Performers and producers associated with the style also came from around the world, including Turkish-American Murat Konar (the writer of Information Society's \"Running\"), Paul Lekakis from Greece, Asian artist Leonard (Leon Youngboy) who released the song \"Youngboys\", and British musicians including Freeez, Paul Hardcastle, Samantha Fox, and even Robin Gibb of the Bee Gees, who also adopted the freestyle sound in his 1984 album Secret Agent, having worked with producer Chris Barbosa. Several British new wave and synthpop bands also teamed up with freestyle producers or were influenced by the genre, and released freestyle songs or remixes. These include Duran Duran whose song \"Notorious\" was remixed by the Latin Rascals, and whose album Big Thing contained several freestyle inspired songs such as \"All She Wants Is\"; New Order who teamed up with Arthur Baker, producing and co-writing the track \"Confusion\"; Erasure and the Der Deutsche mixes of their song \"Blue Savannah\"; and the Pet Shop Boys, whose song \"Domino Dancing\" was produced by Miami-based freestyle producer Lewis Martineé. Australian act I'm Talking utilized freestyle elements into their singles \"Trust Me\" and \"Do You Wanna Be?\", both becoming top ten hits in their native Australia.",
"title": "Freestyle scenes"
}
]
| Freestyle, or Latin freestyle is a form of electronic dance music that emerged in the New York metropolitan area, Philadelphia, and Miami, primarily among Hispanic Americans and Italian Americans in the 1980s. It experienced its greatest popularity from the late 1980s until the early 1990s. A common theme of freestyle lyricism originated as heartbreak in an urban environment typified by New York City. An important precursor to freestyle is 1982's "Planet Rock" by Afrika Bambaataa & Soul Sonic Force. Shannon's 1983 hit "Let the Music Play" is often considered the first freestyle song and the first major song recorded by a Latin American artist is "Please Don't Go" by Nayobe from 1984. From there, freestyle gained a large presence in American clubs, especially in New York and Miami. Radio airplay followed in the mid 1980s. Performers such as Exposé, Lisa Lisa and Cult Jam, Stevie B and Sweet Sensation gained mainstream chart success with the genre in the late 1980s and early 1990s, but its popularity largely faded by the end of the decade. Both classic and newer freestyle output remain popular as a niche genre in Brazil, Germany and Canada. | 2001-05-27T21:21:41Z | 2023-12-23T01:54:13Z | [
"Template:Short description",
"Template:About",
"Template:Unreferenced section",
"Template:Dead link",
"Template:Hiphop",
"Template:Distinguish",
"Template:Cite news",
"Template:Synth pop-footer",
"Template:Electronica",
"Template:Pop music",
"Template:Reflist",
"Template:Webarchive",
"Template:Cite magazine",
"Template:Amerisalsa",
"Template:Post-disco",
"Template:Infobox music genre",
"Template:More citations needed",
"Template:ISBN",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Freestyle_music |
10,810 | Fantasy (psychology) | In psychology, fantasy is a broad range of mental experiences, mediated by the faculty of imagination in the human brain, and marked by an expression of certain desires through vivid mental imagery. Fantasies are generally associated with scenarios that are impossible or unlikely to happen.
In everyday life, individuals often find their thoughts "pursue a series of fantasies concerning things they wish they could do or wish they had done ... fantasies of control or of sovereign choice ... daydreams."
George Eman Vaillant in his study of defence mechanisms took as a central example of "an immature defence ... fantasy — living in a 'Walter Mitty' dream world where you imagine you are successful and popular, instead of making real efforts to make friends and succeed at a job."
Other researchers and theorists find that fantasy has beneficial elements — providing "small regressions and compensatory wish fulfilments which are recuperative in effect." Research by Deirdre Barrett reports that people differ radically in the vividness, as well as frequency of fantasy, and that those who have the most elaborately developed fantasy life are often the people who make productive use of their imaginations in art, literature, or by being especially creative and innovative in more traditional professions.
According to Sigmund Freud, a fantasy is constructed around multiple, often repressed wishes, and employs disguise to mask and mark the very defensive processes by which desire is enacted. The subject's desire to maintain distance from the repressed wish and simultaneously experience it opens up a type of third person syntax allowing for multiple entry into the fantasy. Therefore, in fantasy, vision is multiplied—it becomes possible to see from more than one position at the same time, to see oneself and to see oneself seeing oneself, to divide vision and dislocate subjectivity. This radical omission of the “I” position creates space for all those processes that depend upon such a center, including not only identification but also the field and organization of vision itself.
For Freud, sexuality is linked from the very beginning to an object of fantasy. However, “the object to be rediscovered is not the lost object, but its substitute by displacement; the lost object is the object of self-preservation, of hunger, and the object one seeks to re-find in sexuality is an object displaced in relation to that first object.” This initial scene of fantasy is created out of the frustrated infants' deflection away from the instinctual need for milk and nourishment towards a phantasmization of the mothers' breast, which is in close proximity to the instinctual need. Now bodily pleasure is derived from the sucking of the mother's breast itself. The mouth that was the original source of nourishment is now the mouth that takes pleasure in its own sucking. This substitution of the breast for milk and the breast for a phantasmic scene represents a further level of mediation which is increasingly psychic. The child cannot experience the pleasure of milk without the psychic re-inscription of the scene in the mind. “The finding of an object is in fact a re-finding of it.” It is in the movement and constant restaging away from the instinct that desire is constituted and mobilized.
A similarly positive view of fantasy was taken by Sigmund Freud who considered fantasy (German: Fantasie) a defence mechanism. He considered that men and women "cannot subsist on the scanty satisfaction which they can extort from reality. 'We simply cannot do without auxiliary constructions,' as Theodor Fontane once said ... [without] dwelling on imaginary wish fulfillments." As childhood adaptation to the reality principle developed, so too "one species of thought activity was split off; it was kept free from reality-testing and remained subordinated to the pleasure principle alone. This activity is fantasying ... continued as day-dreaming." He compared such phantasising to the way a "nature reserve preserves its original state where everything ... including what is useless and even what is noxious, can grow and proliferate there as it pleases."
Daydreams for Freud were thus a valuable resource. "These day-dreams are cathected with a large amount of interest; they are carefully cherished by the subject and usually concealed with a great deal of sensitivity ... such phantasies may be unconscious just as well as conscious." He considered these fantasies to include a great deal of the true constitutional essence of a personality, and that the energetic man "is one who succeeds by his efforts in turning his wishful phantasies into reality," whereas the artist "can transform his phantasies into artistic creations instead of into symptoms ... the doom of neurosis."
Melanie Klein extended Freud's concept of fantasy to cover the developing child's relationship to a world of internal objects. In her thought, this kind of "play activity inside the person is known as 'unconscious fantasy'. And these phantasies are often very violent and aggressive. They are different from ordinary day-dreams or 'fantasies'."
The term "fantasy" became a central issue with the development of the Kleinian group as a distinctive strand within the British Psycho-Analytical Society, and was at the heart of the so-called controversial discussions of the wartime years. "A paper by Susan Isaacs (1952) on 'The nature and function of Phantasy' ... has been generally accepted by the Klein group in London as a fundamental statement of their position." As a defining feature, "Kleinian psychoanalysts regard the unconscious as made up of phantasies of relations with objects. These are thought of as primary and innate, and as the mental representations of instincts ... the psychological equivalents in the mind of defence mechanisms."
Isaacs considered that "unconscious phantasies exert a continuous influence throughout life, both in normal and neurotic people, the difference lying in the specific character of the dominant phantasies." Most schools of psychoanalytic thought would now accept that both in analysis and life, we perceive reality through a veil of unconscious fantasy. Isaacs however claimed that "Freud's 'hallucinatory wish-fulfilment' and his 'introjection' and 'projection' are the basis of the fantasy life," and how far unconscious fantasy was a genuine development of Freud's ideas, how far it represented the formation of a new psychoanalytic paradigm, is perhaps the key question of the controversial discussions.
Lacan engaged from early on with "the phantasies revealed by Melanie Klein ... the imago of the mother ... this shadow of the bad internal objects" — with the Imaginary. Increasingly, however, it was Freud's idea of fantasy as a kind of "screen-memory, representing something of more importance with which it was in some way connected" that was for him of greater importance. Lacan came to believe that "the phantasy is never anything more than the screen that conceals something quite primary, something determinate in the function of repetition."
Phantasies thus both link to and block off the individual's unconscious, his kernel or real core: "subject and real are to be situated on either side of the split, in the resistance of the phantasy", which thus comes close to the centre of the individual's personality and its splits and conflicts. "The subject situates himself as determined by the phantasy ... whether in the dream or in any of the more or less well-developed forms of day-dreaming;" and as a rule "a subject's fantasies are close variations on a single theme ... the 'fundamental fantasy' ... minimizing the variations in meaning which might otherwise cause a problem for desire."
The goal of therapy thus became "la traversée du fantasme, the crossing over, traversal, or traversing of the fundamental fantasy." For Lacan, "The traversing of fantasy involves the subject's assumption of a new position with respect to the Other as language and the Other as desire ... a utopian moment beyond neurosis." The question he was left with was "What, then, does he who has passed through the experience ... who has traversed the radical phantasy ... become?."
The postmodern intersubjectivity of the 21st century has seen a new interest in fantasy as a form of interpersonal communication. Here, we are told, "We need to go beyond the pleasure principle, the reality principle, and repetition compulsion to ... the fantasy principle - not, as Freud did, reduce fantasies to wishes ... [but consider] all other imaginable emotions" and thus envisage emotional fantasies as a possible means of moving beyond stereotypes to more nuanced forms of personal and social relating.
Such a perspective "sees emotions as central to developing fantasies about each other that are not determined by collective 'typifications'."
Two characteristics of someone with narcissistic personality disorder are:
Fantasy is a common symptom in people suffering from schizophrenia. In fact, these people exhibit specific patterns of heightened neurological activity in the brain's default mode network, which may constitute a biomarker of these fantasies. Also, people suffering from schizophrenia who have committed contact sexual abuse against women report experiencing aggressive sexual fantasies. | [
{
"paragraph_id": 0,
"text": "In psychology, fantasy is a broad range of mental experiences, mediated by the faculty of imagination in the human brain, and marked by an expression of certain desires through vivid mental imagery. Fantasies are generally associated with scenarios that are impossible or unlikely to happen.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In everyday life, individuals often find their thoughts \"pursue a series of fantasies concerning things they wish they could do or wish they had done ... fantasies of control or of sovereign choice ... daydreams.\"",
"title": "Conscious fantasy"
},
{
"paragraph_id": 2,
"text": "George Eman Vaillant in his study of defence mechanisms took as a central example of \"an immature defence ... fantasy — living in a 'Walter Mitty' dream world where you imagine you are successful and popular, instead of making real efforts to make friends and succeed at a job.\"",
"title": "Conscious fantasy"
},
{
"paragraph_id": 3,
"text": "Other researchers and theorists find that fantasy has beneficial elements — providing \"small regressions and compensatory wish fulfilments which are recuperative in effect.\" Research by Deirdre Barrett reports that people differ radically in the vividness, as well as frequency of fantasy, and that those who have the most elaborately developed fantasy life are often the people who make productive use of their imaginations in art, literature, or by being especially creative and innovative in more traditional professions.",
"title": "Conscious fantasy"
},
{
"paragraph_id": 4,
"text": "According to Sigmund Freud, a fantasy is constructed around multiple, often repressed wishes, and employs disguise to mask and mark the very defensive processes by which desire is enacted. The subject's desire to maintain distance from the repressed wish and simultaneously experience it opens up a type of third person syntax allowing for multiple entry into the fantasy. Therefore, in fantasy, vision is multiplied—it becomes possible to see from more than one position at the same time, to see oneself and to see oneself seeing oneself, to divide vision and dislocate subjectivity. This radical omission of the “I” position creates space for all those processes that depend upon such a center, including not only identification but also the field and organization of vision itself.",
"title": "Freud and fantasy"
},
{
"paragraph_id": 5,
"text": "For Freud, sexuality is linked from the very beginning to an object of fantasy. However, “the object to be rediscovered is not the lost object, but its substitute by displacement; the lost object is the object of self-preservation, of hunger, and the object one seeks to re-find in sexuality is an object displaced in relation to that first object.” This initial scene of fantasy is created out of the frustrated infants' deflection away from the instinctual need for milk and nourishment towards a phantasmization of the mothers' breast, which is in close proximity to the instinctual need. Now bodily pleasure is derived from the sucking of the mother's breast itself. The mouth that was the original source of nourishment is now the mouth that takes pleasure in its own sucking. This substitution of the breast for milk and the breast for a phantasmic scene represents a further level of mediation which is increasingly psychic. The child cannot experience the pleasure of milk without the psychic re-inscription of the scene in the mind. “The finding of an object is in fact a re-finding of it.” It is in the movement and constant restaging away from the instinct that desire is constituted and mobilized.",
"title": "Freud and fantasy"
},
{
"paragraph_id": 6,
"text": "A similarly positive view of fantasy was taken by Sigmund Freud who considered fantasy (German: Fantasie) a defence mechanism. He considered that men and women \"cannot subsist on the scanty satisfaction which they can extort from reality. 'We simply cannot do without auxiliary constructions,' as Theodor Fontane once said ... [without] dwelling on imaginary wish fulfillments.\" As childhood adaptation to the reality principle developed, so too \"one species of thought activity was split off; it was kept free from reality-testing and remained subordinated to the pleasure principle alone. This activity is fantasying ... continued as day-dreaming.\" He compared such phantasising to the way a \"nature reserve preserves its original state where everything ... including what is useless and even what is noxious, can grow and proliferate there as it pleases.\"",
"title": "Freud and daydreams"
},
{
"paragraph_id": 7,
"text": "Daydreams for Freud were thus a valuable resource. \"These day-dreams are cathected with a large amount of interest; they are carefully cherished by the subject and usually concealed with a great deal of sensitivity ... such phantasies may be unconscious just as well as conscious.\" He considered these fantasies to include a great deal of the true constitutional essence of a personality, and that the energetic man \"is one who succeeds by his efforts in turning his wishful phantasies into reality,\" whereas the artist \"can transform his phantasies into artistic creations instead of into symptoms ... the doom of neurosis.\"",
"title": "Freud and daydreams"
},
{
"paragraph_id": 8,
"text": "Melanie Klein extended Freud's concept of fantasy to cover the developing child's relationship to a world of internal objects. In her thought, this kind of \"play activity inside the person is known as 'unconscious fantasy'. And these phantasies are often very violent and aggressive. They are different from ordinary day-dreams or 'fantasies'.\"",
"title": "Klein and unconscious fantasy"
},
{
"paragraph_id": 9,
"text": "The term \"fantasy\" became a central issue with the development of the Kleinian group as a distinctive strand within the British Psycho-Analytical Society, and was at the heart of the so-called controversial discussions of the wartime years. \"A paper by Susan Isaacs (1952) on 'The nature and function of Phantasy' ... has been generally accepted by the Klein group in London as a fundamental statement of their position.\" As a defining feature, \"Kleinian psychoanalysts regard the unconscious as made up of phantasies of relations with objects. These are thought of as primary and innate, and as the mental representations of instincts ... the psychological equivalents in the mind of defence mechanisms.\"",
"title": "Klein and unconscious fantasy"
},
{
"paragraph_id": 10,
"text": "Isaacs considered that \"unconscious phantasies exert a continuous influence throughout life, both in normal and neurotic people, the difference lying in the specific character of the dominant phantasies.\" Most schools of psychoanalytic thought would now accept that both in analysis and life, we perceive reality through a veil of unconscious fantasy. Isaacs however claimed that \"Freud's 'hallucinatory wish-fulfilment' and his 'introjection' and 'projection' are the basis of the fantasy life,\" and how far unconscious fantasy was a genuine development of Freud's ideas, how far it represented the formation of a new psychoanalytic paradigm, is perhaps the key question of the controversial discussions.",
"title": "Klein and unconscious fantasy"
},
{
"paragraph_id": 11,
"text": "Lacan engaged from early on with \"the phantasies revealed by Melanie Klein ... the imago of the mother ... this shadow of the bad internal objects\" — with the Imaginary. Increasingly, however, it was Freud's idea of fantasy as a kind of \"screen-memory, representing something of more importance with which it was in some way connected\" that was for him of greater importance. Lacan came to believe that \"the phantasy is never anything more than the screen that conceals something quite primary, something determinate in the function of repetition.\"",
"title": "Lacan, fantasy, and desire"
},
{
"paragraph_id": 12,
"text": "Phantasies thus both link to and block off the individual's unconscious, his kernel or real core: \"subject and real are to be situated on either side of the split, in the resistance of the phantasy\", which thus comes close to the centre of the individual's personality and its splits and conflicts. \"The subject situates himself as determined by the phantasy ... whether in the dream or in any of the more or less well-developed forms of day-dreaming;\" and as a rule \"a subject's fantasies are close variations on a single theme ... the 'fundamental fantasy' ... minimizing the variations in meaning which might otherwise cause a problem for desire.\"",
"title": "Lacan, fantasy, and desire"
},
{
"paragraph_id": 13,
"text": "The goal of therapy thus became \"la traversée du fantasme, the crossing over, traversal, or traversing of the fundamental fantasy.\" For Lacan, \"The traversing of fantasy involves the subject's assumption of a new position with respect to the Other as language and the Other as desire ... a utopian moment beyond neurosis.\" The question he was left with was \"What, then, does he who has passed through the experience ... who has traversed the radical phantasy ... become?.\"",
"title": "Lacan, fantasy, and desire"
},
{
"paragraph_id": 14,
"text": "The postmodern intersubjectivity of the 21st century has seen a new interest in fantasy as a form of interpersonal communication. Here, we are told, \"We need to go beyond the pleasure principle, the reality principle, and repetition compulsion to ... the fantasy principle - not, as Freud did, reduce fantasies to wishes ... [but consider] all other imaginable emotions\" and thus envisage emotional fantasies as a possible means of moving beyond stereotypes to more nuanced forms of personal and social relating.",
"title": "The fantasy principle"
},
{
"paragraph_id": 15,
"text": "Such a perspective \"sees emotions as central to developing fantasies about each other that are not determined by collective 'typifications'.\"",
"title": "The fantasy principle"
},
{
"paragraph_id": 16,
"text": "Two characteristics of someone with narcissistic personality disorder are:",
"title": "Narcissistic personality disorder"
},
{
"paragraph_id": 17,
"text": "Fantasy is a common symptom in people suffering from schizophrenia. In fact these people depict specific patterns of high-neurological activities in their brains' default mode network, which possibly constitute the biomarker of these fantasies. Also people suffering from schizophrenia who committed contact sexual abuses against women, report experiencing aggressive sexual fantasies.",
"title": "Schizophrenia"
}
]
| In psychology, fantasy is a broad range of mental experiences, mediated by the faculty of imagination in the human brain, and marked by an expression of certain desires through vivid mental imagery. Fantasies are generally associated with scenarios that are impossible or unlikely to happen. | 2001-06-15T15:51:20Z | 2023-12-22T21:11:55Z | [
"Template:Unreferenced section",
"Template:Webarchive",
"Template:Authority control",
"Template:Reflist",
"Template:Wiktionary-inline",
"Template:Defence mechanisms",
"Template:Short description",
"Template:For",
"Template:Specify",
"Template:Lang-de",
"Template:Columns-list",
"Template:Narcissism"
]
| https://en.wikipedia.org/wiki/Fantasy_(psychology) |
10,814 | Surnames by country | Surname conventions and laws vary around the world. This article gives an overview of surnames around the world.
In Argentina, normally only one family name, the father's paternal family name, is used and registered, as in English-speaking countries. However, it is possible to use both the paternal and maternal name. For example, if Ana Laura Melachenko and Emanuel Darío Guerrero had a daughter named Adabel Anahí, her full name could be Adabel Anahí Guerrero Melachenko. Women, however, do not change their family names upon marriage and continue to use their birth family names instead of their husband's family names. Traditionally, however, women followed, and some still choose to follow, the old Spanish custom of adjoining "de" and the husband's surname to their own name. For example, if Paula Segovia marries Felipe Cossia, she might keep her birth name or become Paula Segovia de Cossia or Paula Cossia.
There are some province offices where a married woman can use only her birth name, and some others where she has to use the complete name, for legal purposes. The Argentine Civil Code states both uses are correct, but police offices and passports are issued with the complete name. Today most women prefer to maintain their birth name given that "de" can be interpreted as meaning they belong to their husbands.
When Eva Duarte married Juan Domingo Perón, she could be addressed as Eva Duarte de Perón, but the preferred style was Eva Perón, or the familiar and affectionate Evita (little Eva).
Combined names come from old traditional families and are considered one last name, but are rare. Although Argentina is a Spanish-speaking country, it is also composed of other varied European influences, such as Italian, French, Russian, German, etc.
Children typically use their fathers' last names only. Some state offices have started to use both last names, in the traditional father then mother order, to reduce the risk of a person being mistaken for others using the same name combinations, e.g. if Eva Duarte and Juan Perón had a child named Juan, he might be misidentified if he were called Juan Perón, but not if he was known as Juan Perón Duarte.
In early 2008, new legislation was under consideration that would place the mother's last name ahead of the father's last name, as is done in Portuguese-speaking countries and only optionally in Spain, despite Argentina being a Spanish-speaking country.
In Chile, marriage has no effect at all on either of the spouses' names, so people keep their birth names for all their life, no matter how many times marital status, theirs or that of their parents, may change. However, in some upper-class circles or in older couples, even though considered to be old-fashioned, it is still customary for a wife to use her husband's name as reference, as in "Doña María Inés de Ramírez" (literally Lady María Inés (wife) of Ramírez).
Children will always bear the surname of the father followed by that of the mother, but if there is no known father and the mother is single, the children can bear either both of the mother's surnames or the mother's first surname followed by any of the surnames of the mother's parents or grandparents, or the child may bear the mother's first surname twice in a row.
France
Belgium
Canadian
There are about 1,000,000 different family names in German. German family names most often derive from given names, geographical names, occupational designations, bodily attributes or even traits of character. Hyphenations notwithstanding, they mostly consist of a single word; in those rare cases where the family name is linked to the given names by particles such as von or zu, they usually indicate noble ancestry. Not all noble families used these names (see Riedesel), while some farm families, particularly in Westphalia, used the particle von or zu followed by their farm or former farm's name as a family name (see Meyer zu Erpen).
Family names in German-speaking countries are usually positioned last, after all given names. There are exceptions, however: in parts of Austria and Bavaria and the Alemannic-speaking areas, the family name is regularly put in front of the first given name. Also in many – especially rural – parts of Germany, to emphasize family affiliation there is often an inversion in colloquial use, in which the family name becomes a possessive: Rüters Erich, for example, would be Erich of the Rüter family.
In Germany today, upon marriage, both partners can choose to keep their birth name or choose either partner's name as the common name. In the latter case the partner whose name was not chosen can keep their birth name hyphenated to the new name (e.g. Schmidt and Meyer choose to marry under the name Meyer. The former Schmidt can choose to be called Meyer, Schmidt-Meyer or Meyer-Schmidt), but any children will only get the single common name. In the case that both partners keep their birth name they must decide on one of the two family names for all their future children. (German name)
Changing one's family name for reasons other than marriage, divorce or adoption is possible only if the application is approved by the responsible government agency. In Germany, permission will usually be granted if:
Otherwise, name changes will normally not be granted.
The Netherlands and Belgium (Flanders)
In the Nordic countries, family names often, but certainly not always, originate from a patronymic. In Denmark and Norway, the corresponding ending is -sen, as in Karlsen. Names ending with dotter/datter (daughter), such as Olofsdotter, are rare but do occur, and apply only to women. Today, the patronymic names are passed on similarly to family names in other Western countries, and a person's father does not have to be called Karl if he or she has the surname Karlsson. However, in 2006 Denmark reinstated patronymic and matronymic surnames as an option. Thus, parents Karl Larsen and Anna Hansen can name a son Karlsen or Annasen and a daughter Karlsdotter or Annasdotter.
Before the 19th century there was the same system in Scandinavia as in Iceland today. Noble families, however, as a rule adopted a family name, which could refer to a presumed or real forefather (e.g. Earl Birger Magnusson Folkunge) or to the family's coat of arms (e.g. King Gustav Eriksson Vasa). In many surviving noble family names, such as Silfversparre ("silver chevron"; in modern spelling, Silver-) or Stiernhielm ("star-helmet"; in modernized spelling, stjärnhjälm), the spelling is obsolete, but since it applies to a name, remains unchanged. (Some names from relatively modern times also use archaic or otherwise aberrant spelling as a stylistic trait; e.g. -quist instead of standard -kvist "twig" or -grén instead of standard -gren, "branch".)
Later on, people from the Scandinavian middle classes, particularly artisans and town dwellers, adopted names in a similar fashion to that of the nobility. Family names joining two elements from nature such as the Swedish Bergman ("mountain man"), Holmberg ("island mountain"), Lindgren ("linden branch"), Sandström ("sand stream") and Åkerlund ("field meadow") were quite frequent and remain common today. The same is true for similar Norwegian and Danish names. Another common practice was to adopt one's place of origin as a middle or surname.
An even more important driver of change was the need, for administrative purposes, to develop a system under which each individual had a "stable" name from birth to death. In the old days, people would be known by their name, patronymic and the farm they lived at. This last element would change if a person got a new job, bought a new farm, or otherwise came to live somewhere else. (This is part of the origin, in this part of the world, of the custom of women changing their names upon marriage. Originally it indicated, basically, a change of address, and from older times, there are numerous examples of men doing the same thing). The many patronymic names may derive from the fact that people who moved from the country to the cities also gave up the name of the farm they came from. As a worker, you went by your father's name, and this name passed on to the next generation as a family name. Einar Gerhardsen, the Norwegian prime minister, used a true patronym, as his father was named Gerhard Olsen (Gerhard, the son of Ola). Gerhardsen passed his own patronym on to his children as a family name. This has been common in many working-class families. The tradition of keeping the farm name as a family name got stronger during the first half of the 20th century in Norway.
These names often indicated the place of residence of the family. For this reason, Denmark and Norway have a very high incidence of last names derived from those of farms, many signified by suffixes like -bø, -rud, -heim/-um, -land or -set (these being examples from Norway). In Denmark, the most common suffix is -gaard — the modern spelling is gård in Danish and can be either gård or gard in Norwegian, but as in Sweden, archaic spelling persists in surnames. The best-known example of this kind of surname is probably Kierkegaard (a combination of the words "kirke/kierke" (= church) and "gaard" (= farm), meaning "the farm located by the Church". It is, however, a common misunderstanding that the name relates to its direct translation: churchyard/cemetery), but many others could be cited. It should also be noted that, since the names in question are derived from the original owners' domiciles, the possession of this kind of name is no longer an indicator of affinity with others who bear it.
In many cases, names were taken from the nature around them. In Norway, for instance, there is an abundance of surnames based on coastal geography, with suffixes like -strand, -øy, -holm, -vik, -fjord or -nes. Like the names derived from farms, most of these family names reflected the family's place of residence at the time the family name was "fixed", however. A family name such as the Swedish Dahlgren is derived from "dahl" meaning valley and "gren" meaning branch; similarly, Upvall means "upper valley". The exact form depends on the country, language, and dialect.
Finland, including Karelia and Estonia, was the eastern part of the Kingdom of Sweden from its unification around 1100–1200 AD until 1809, when Finland was conquered by Russia. Following the Russian Revolution of 1917, Finland declared independence, and Sweden and many other European countries rapidly recognized the new nation. Finland has mainly Finnish (increasing) and Swedish (decreasing) surnames and first names. There are two predominant surname traditions among the Finns in Finland: the West Finnish and the East Finnish. The surname traditions of Swedish-speaking farmers, fishermen and craftsmen resemble the West Finnish tradition, while smaller populations of Sami and Romani people have traditions of their own. Finland saw very little immigration from Russia, so Russian names barely exist.
Until the mid-20th century, Finland was a predominantly agrarian society, and the names of West Finns were based on their association with a particular area, farm, or homestead, e.g. Jaakko Jussila ("Jaakko from the farm of Jussi"). On the other hand, the East Finnish surname tradition dates back to at least the 13th century. There, the Savonians pursued slash-and-burn agriculture which necessitated moving several times during a person's lifetime. This in turn required the families to have surnames, which were in wide use among the common folk as early as the 13th century. By the mid-16th century, the East Finnish surnames had become hereditary. Typically, the oldest East Finnish surnames were formed from the first names of the patriarchs of the families, e.g. Ikävalko, Termonen, Pentikäinen. In the 16th, 17th, and 18th centuries, new names were most often formed by adding the name of the former or current place of living (e.g. Puumalainen < Puumala). In the East Finnish tradition, the women carried the family name of their fathers in female form (e.g. Puumalatar < Puumalainen). By the 19th century, this practice fell into disuse due to the influence of the West-European surname tradition.
In Western Finland, agrarian names dominated, and the last name of the person was usually given according to the farm or holding they lived on. In 1921, surnames became compulsory for all Finns. At this point, the agrarian names were usually adopted as surnames. A typical feature of such names is the addition of the prefixes Ala- (Sub-) or Ylä- (Up-), giving the location of the holding along a waterway in relation to the main holding (e.g. Yli-Ojanperä, Ala-Verronen). The Swedish-speaking farmers along the coast of Österbotten usually used two surnames – one which pointed out the father's name (e.g. Eriksson, Andersson, Johansson) and one which related to the farm or the land their family or bigger family owned or had some connection to (e.g. Holm, Fant, Westergård, Kloo). So a full name could be Johan Karlsson Kvist, and for his daughter Elvira Johansdotter Kvist; when she married a man with the Ahlskog farm, Elvira kept the first surname Johansdotter but changed the second surname to her husband's (e.g. Elvira Johansdotter Ahlskog). During the 20th century they started to drop the -son surname while they kept the second. So in Western Finland the Swedish-speaking population had names like Johan Varg, Karl Viskas, Sebastian Byskata and Elin Loo, while the Swedes in Sweden on the other side of the Baltic Sea kept surnames ending with -son (e.g. Johan Eriksson, Thor Andersson, Anna-Karin Johansson).
A third tradition of surnames was introduced in south Finland by the Swedish-speaking upper and middle classes, which used typical German and Swedish surnames. By custom, all Finnish-speaking persons who were able to get a position of some status in urban or learned society discarded their Finnish name, adopting a Swedish, German or (in the case of clergy) Latin surname. In the case of enlisted soldiers, the new name was given regardless of the wishes of the individual.
In the late 19th and early 20th century, the overall modernization process, and especially the political movement of fennicization, caused a movement for adoption of Finnish surnames. At that time, many persons with a Swedish or otherwise foreign surname changed their family name to a Finnish one. The features of nature with endings -o/ö, -nen (Meriö < Meri "sea", Nieminen < Niemi "point") are typical of the names of this era, as well as more or less direct translations of Swedish names (Paasivirta < Hällström).
In 21st-century Finland, the use of surnames follows the German model. Every person is legally obligated to have a first and last name. At most, three first names are allowed. A Finnish married couple may adopt the name of either spouse, or either spouse (or both spouses) may decide to use a double name. The parents may choose either surname or the double surname for their children, but all siblings must share the same surname. All persons have the right to change their surname once without any specific reason. A surname that is un-Finnish, contrary to the usages of the Swedish or Finnish languages, or is in use by any person residing in Finland cannot be accepted as the new name, unless valid family reasons or religious or national customs give a reason for waiving this requirement. However, persons may change their surname to any surname that has ever been used by their ancestors if they can prove such a claim. Some immigrants have had difficulty naming their children, as they must choose from an approved list based on the family's household language.
In the Finnish language, both the root of the surname and the first name can be modified by consonant gradation regularly when inflected to a case.
In Iceland, most people have no family name; a person's last name is most commonly a patronymic, i.e. derived from the father's first name. For example, when a man called Karl has a daughter called Anna and a son called Magnús, their full names will typically be Anna Karlsdóttir ("Karl's daughter") and Magnús Karlsson ("Karl's son"). The name is not changed upon marriage.
Slavic countries are noted for having masculine and feminine versions for many (but not all) of their names. In most countries the use of a feminine form is obligatory in official documents as well as in other communication, except for foreigners. In some countries only the male form figures in official use (Bosnia and Herzegovina, Croatia, Montenegro, Serbia, Slovenia), but in communication (speech, print) a feminine form is often used.
If the name has no suffix, it may or may not have a feminine version. Sometimes it has the ending changed (such as the addition of -a). In the Czech Republic and Slovakia, suffixless names, such as those of German origin, are feminized by adding -ová (for example, Schusterová).
Bulgarian names usually consist of three components – given name, patronymic (based on father's name), family name.
Given names have many variations, but the most common names have Christian/Greek (e.g. Maria, Ivan, Christo, Peter, Pavel), Slavic (Ognyan, Miroslav, Tihomir) or Protobulgarian (Krum, Asparukh) (pre-Christian) origin. Father's names normally consist of the father's first name and the "-ov" (male) or "-ova" (female) or "-ovi" (plural) suffix.
Family names usually also end with the "-ov", "-ev" (male) or "-ova", "-eva" (female) or "-ovi", "-evi" (plural) suffix.
In many cases (depending on the name root) the suffixes can be also "-ski" (male and plural) or "-ska" (female); "-ovski", "-evski" (male and plural) or "-ovska", "-evska" (female); "-in" (male) or "-ina" (female) or "-ini" (plural); etc.
The meaning of the suffixes is similar to the English word "of", expressing membership in/belonging to a family. For example, the family name Ivanova means a person belonging to the Ivanovi family.
A father's name Petrov means son of Peter.
Regarding the different meaning of the suffixes, "-ov", "-ev"/"-ova", "-eva" are used for expressing relationship to the father and "-in"/"-ina" for relationship to the mother (often for orphans whose father is dead).
Names of Czech people consist of given name (křestní jméno) and surname (příjmení). Usage of the second or middle name is not common. Feminine names are usually derived from masculine ones by a suffix -ová (Nováková) or -á for names being originally adjectives (Veselá), sometimes with a little change of original name's ending (Sedláčková from Sedláček or Svobodová from Svoboda). Women usually change their family names when they get married. The family names are usually nouns (Svoboda, Král, Růžička, Dvořák, Beneš), adjectives (Novotný, Černý, Veselý) or past participles of verbs (Pospíšil). There are also a couple of names with more complicated origin which are actually complete sentences (Skočdopole, Hrejsemnou or Vítámvás). The most common Czech family name is Novák / Nováková.
In addition, many Czechs and some Slovaks have German surnames due to mixing between the ethnic groups over the past thousand years. Deriving women's names from German and other foreign names is often problematic since foreign names do not suit Czech language rules, although most commonly -ová is simply added (Schmidtová; umlauts are often, but not always, dropped, e.g. Müllerová), or the German name is respelled with Czech spelling (Šmitová). Hungarian names, which can be found fairly commonly among Slovaks, can also be either left unchanged (Hungarian Nagy, fem. Nagyová) or respelled according to Czech/Slovak orthography (masc. Naď, fem. Naďová).
In Poland and most of the former Polish–Lithuanian Commonwealth, surnames first appeared during the late Middle Ages. They initially denoted the differences between various people living in the same town or village and bearing the same name. The conventions were similar to those of English surnames, using occupations, patronymic descent, geographic origins, or personal characteristics. Thus, early surnames indicating occupation include Karczmarz ("innkeeper"), Kowal ("blacksmith"), Złotnik ("goldsmith") and Bednarczyk ("young cooper"), while those indicating patronymic descent include Szczepaniak ("Son of Szczepan"), Józefowicz ("Son of Józef"), and Kaźmirkiewicz ("Son of Kazimierz"). Similarly, early surnames like Mazur ("the one from Mazury") indicated geographic origin, while ones like Nowak ("the new one"), Biały ("the pale one"), and Wielgus ("the big one") indicated personal characteristics.
In the early 16th century, (the Polish Renaissance), toponymic names became common, especially among the nobility. Initially, the surnames were in a form of "[first name] z ("de", "of") [location]". Later, most surnames were changed to adjective forms, e.g. Jakub Wiślicki ("James of Wiślica") and Zbigniew Oleśnicki ("Zbigniew of Oleśnica"), with masculine suffixes -ski, -cki, -dzki and -icz or respective feminine suffixes -ska, -cka, -dzka and -icz on the east of Polish–Lithuanian Commonwealth. Names formed this way are adjectives grammatically, and therefore change their form depending on sex; for example, Jan Kowalski and Maria Kowalska collectively use the plural Kowalscy.
Names with masculine suffixes -ski, -cki, and -dzki, and corresponding feminine suffixes -ska, -cka, and -dzka became associated with noble origin. Many people from lower classes successively changed their surnames to fit this pattern. This produced many Kowalskis, Bednarskis, Kaczmarskis and so on.
A separate class of surnames derive from the names of noble clans. These are used either as separate names or the first part of a double-barrelled name. Thus, persons named Jan Nieczuja and Krzysztof Nieczuja-Machocki might be related. Similarly, after World War I and World War II, many members of Polish underground organizations adopted their war-time pseudonyms as the first part of their surnames. Edward Rydz thus became Marshal of Poland Edward Śmigły-Rydz and Zdzisław Jeziorański became Jan Nowak-Jeziorański.
A full Russian name consists of personal (given) name, patronymic, and family name (surname).
Most Russian family names originated from patronymics, that is, from the father's name, usually formed by adding the adjective suffix -ov(a) or -ev(a). Contemporary patronymics, however, have a substantive suffix -ich for masculine and the adjective suffix -na for feminine.
For example, the proverbial triad of most common Russian surnames follows:
Feminine forms of these surnames have the ending -a:
Such a pattern of name formation is not unique to Russia or even to the Eastern and Southern Slavs in general; quite common are also names derived from professions, places of origin, and personal characteristics, with various suffixes (e.g. -in(a) and -sky (-skaya)).
Professions:
Places of origin:
Personal characteristics:
A considerable number of "artificial" names exists, for example, those given to seminary graduates; such names were based on Great Feasts of the Orthodox Church or Christian virtues.
Great Orthodox Feasts:
Christian virtues:
Many freed serfs were given surnames after those of their former owners. For example, a serf of the Demidov family might be named Demidovsky, which translates roughly as "belonging to Demidov" or "one of Demidov's bunch".
Grammatically, Russian family names follow the same rules as other nouns or adjectives (names ending with -oy, -aya are grammatically adjectives), with exceptions: some names do not change in different cases and have the same form in both genders (for example, Sedykh, Lata).
Ukrainian and Belarusian names evolved from the same Old East Slavic and Ruthenian language (western Rus') origins. Ukrainian and Belarusian names share many characteristics with family names from other Slavic cultures. Most prominent are the shared root words and suffixes. For example, the root koval (blacksmith) compares to the Polish kowal, and the root bab (woman) is shared with Polish, Slovakian, and Czech. The suffix -vych (son of) corresponds to the South Slavic -vic, the Russian -vich, and the Polish -wicz, while -sky, -ski, and -ska are shared with both Polish and Russian, and -ak with Polish.
However some suffixes are more uniquely characteristic to Ukrainian and Belarusian names, especially: -chuk (Western Ukraine), -enko (all other Ukraine) (both son of), -ko (little [masculine]), -ka (little [feminine]), -shyn, and -uk. See, for example, Mihalko, Ukrainian Presidents Leonid Kravchuk, and Viktor Yushchenko, Belarusian President Alexander Lukashenko, or former Soviet diplomat Andrei Gromyko. Such Ukrainian and Belarusian names can also be found in Russia, Poland, or even other Slavic countries (e.g. Croatian general Zvonimir Červenko), but are due to importation by Ukrainian, Belarusian, or Rusyn ancestors.
Surnames of some South Slavic groups such as Serbs, Croats, Montenegrins, and Bosniaks traditionally end with the suffixes "-ić" and "-vić" (often transliterated to English and other western languages as "ic", "ich", "vic" or "vich". The v is added in the case of a name to which "-ić" is appended would otherwise end with a vowel, to avoid double vowels with the "i" in "-ić".) These are a diminutive indicating descent i.e. "son of". In some cases the family name was derived from a profession (e.g. blacksmith – "Kovač" → "Kovačević").
An analogous ending is also common in Slovenia. As the Slovenian language does not have the softer consonant "ć", in Slovene words and names only "č" is used. So that people from the former Yugoslavia need not change their names, in official documents "ć" is also allowed (as well as "Đ / đ"). Thus, one may have two surname variants, e.g.: Božič, Tomšič (Slovenian origin or assimilated) and Božić, Tomšić (roots from the Serbo-Croat language continuum area). Slovene names ending in -ič do not necessarily have a patronymic origin.
In general family names in all of these countries follow this pattern with some family names being typically Serbian, some typically Croat and yet others being common throughout the whole linguistic region.
Children usually inherit their fathers' family name. In an older naming convention which was common in Serbia up until the mid-19th century, a person's name would consist of three distinct parts: the person's given name, the patronymic derived from the father's personal name, and the family name, as seen, for example, in the name of the language reformer Vuk Stefanović Karadžić.
Official family names do not have distinct male or female forms, except in North Macedonia, though a somewhat archaic unofficial form of adding suffixes to family names to form female form persists, with -eva, implying "daughter of" or "female descendant of" or -ka, implying "wife of" or "married to". In Slovenia the feminine form of a surname ("-eva" or "-ova") is regularly used in non-official communication (speech, print), but not for official IDs or other legal documents.
Bosniak Muslim names follow the same formation pattern but are usually derived from proper names of Islamic origin, often combining archaic Islamic or feudal Turkish titles i.e. Mulaomerović, Šabanović, Hadžihafizbegović, etc. Also related to Islamic influence is the prefix Hadži- found in some family names. Regardless of religion, this prefix was derived from the honorary title which a distinguished ancestor earned by making a pilgrimage to either Christian or Islamic holy places; Hadžibegić, being a Bosniak Muslim example, and Hadžiantić an Orthodox Christian one.
In Croatia where tribal affiliations persisted longer, Lika, Herzegovina etc., originally a family name, came to signify practically all people living in one area, clan land or holding of the nobles. The Šubić family owned land around the Zrin River in the Central Croatian region of Banovina. The surname became Šubić Zrinski, the most famous being Nikola Šubić Zrinski.
In Montenegro and Herzegovina, family names came to signify all people living within one clan or bratstvo. As there exists a strong tradition of inheriting personal names from grandparents to grandchildren, an additional patronymic usually using suffix -ov had to be introduced to make distinctions between two persons bearing the same personal name and the same family name and living within same area. A noted example is Marko Miljanov Popović, i.e. Marko, son of Miljan, from Popović family.
Due to discriminatory laws in the Austro-Hungarian Empire, some Serb families of Vojvodina discarded the suffix -ić in an attempt to mask their ethnicity and avoid heavy taxation.
The prefix Pop- in Serbian names indicates descent from a priest, for example Gordana Pop Lazić, i.e. descendant of Pop Laza.
Some Serbian family names include prefixes of Turkish origin, such as Uzun- meaning tall, or Kara-, black. Such names were derived from nicknames of family ancestors. A famous example is Karađorđević, descendants of Đorđe Petrović, known as Karađorđe or Black Đorđe.
Among the Bulgarians, another South Slavic people, the typical surname suffix is "-ov" (Ivanov, Kovachev), although other popular suffixes also exist.
In North Macedonia, the most popular suffix today is "-ski".
Slovenes have a great variety of surnames, most of them differentiated according to region. Surnames ending in -ič are by far less frequent than among Croats and Serbs. There are typically Slovenian surnames ending in -ič, such as Blažič, Stanič, Marušič. Many Slovenian surnames, especially in the Slovenian Littoral, end in -čič (Gregorčič, Kocijančič, Miklavčič, etc.), which is uncommon for other South Slavic peoples (except the neighboring Croats, e.g. Kovačić, Jelačić, Kranjčić, etc.). On the other hand, surname endings in -ski and -ov are rare; they can denote a noble origin (especially for the -ski, if it completes a toponym) or a foreign (mostly Czech) origin. One of the most typical Slovene surname endings is -nik (Rupnik, Pučnik, Plečnik, Pogačnik, Podobnik), and other common surname endings are -lin (Pavlin, Mehlin, Ahlin, Ferlin), -ar (Mlakar, Ravnikar, Smrekar, Tisnikar) and -lj (Rugelj, Pucelj, Bagatelj, Bricelj). Many Slovenian surnames are linked to medieval rural settlement patterns. Surnames like Novak (literally, "the new one") or Hribar (from hrib, hill) were given to the peasants settled in newly established farms, usually in high mountains. Peasant families were also named according to the owner of the land which they cultivated: thus, the surname Kralj (King) or Cesar (Emperor) was given to those working on royal estates, Škof (Bishop) or Vidmar to those working on ecclesiastical lands, etc. Many Slovenian surnames are named after animals (Medved – bear, Volk, Vovk or Vouk – wolf, Golob – pigeon, Strnad – yellowhammer, Orel – eagle, Lisjak – fox, or Zajec – rabbit, etc.) or plants (Pšenica – wheat, Slak – bindweed, Hrast – oak, etc.). Many are named after neighbouring peoples: Horvat, Hrovat, or Hrovatin (Croat), Furlan (Friulian), Nemec (German), Lah (Italian), Vogrin, Vogrič or Vogrinčič (Hungarian), Vošnjak (Bosnian), Čeh (Czech), Turk (Turk), or different Slovene regions: Kranjc, Kranjec or Krajnc (from Carniola), Kraševec (from the Karst Plateau), Korošec (from Carinthia), Kočevar or Hočevar (from the Gottschee county).
In Slovenia, the last name of a female is the same as the male form in official use (identification documents, letters). In speech and descriptive writing (literature, newspapers) a female form of the last name is regularly used. Examples: Novak (m.) & Novakova (f.), Kralj (m.) & Kraljeva (f.), Mali (m.) & Malijeva (f.). Usually surnames ending in -ova are used together with the title/gender: gospa Novakova (Mrs. Novakova), gospa Kraljeva (Mrs. Kraljeva), gospodična Malijeva (Miss Malijeva, if unmarried), etc. or with the name. So we have Maja Novak on the ID card and Novakova Maja (extremely rarely Maja Novakova) in communication; Tjaša Mali and Malijeva Tjaša (rarely Tjaša Malijeva), respectively. Diminutive forms of last names for females are also available: Novakovka, Kraljevka. As for pronunciation, in Slovenian there is some leeway regarding accentuation. Depending on the region or local usage, you may have either Nóvak & Nóvakova or, more frequently, Novák & Novákova. Accent marks are normally not used.
The given name is always followed by the father's first name, then the father's family surname. Some surnames have a prefix of ibn- (ould- in Mauritania) meaning "son of". The surnames follow similar rules defining a relation to a clan, family, place, etc. Some Arab countries have differences due to historic rule by the Ottoman Empire or due to the traditions of particular minorities.
A large number of Arabic last names start with "Al-", which means "The".
Arab States of the Persian Gulf: Names mainly consist of the person's name followed by the father's first name connected by the word "ibn" or "bin" (meaning "son of"). The last name either refers to the name of the tribe the person belongs to, or to the region, city, or town he/she originates from. In exceptional cases, members of the royal families or ancient tribes mainly, the title (usually H.M./H.E., Prince, or Sheikh) is included in the beginning as a prefix, and the first name can be followed by four names, his father, his grandfather, and great-grandfather, as a representation of the purity of blood and to show the pride one has for his ancestry.
In Arabic-speaking Levantine countries (Jordan, Lebanon, Palestine, Syria) it's common to have family names associated with a certain profession or craft, such as "Al-Haddad"/"Haddad" which means "Blacksmith" or "Al-Najjar"/"Najjar" which means "Carpenter".
In India, surnames are placed as last names or before first names, which often denote: village of origin, caste, clan, office of authority their ancestors held, or trades of their ancestors. The use of surnames is a relatively new convention, introduced during British colonisation. Typically, parts of northern India follow English-speaking Western naming conventions by having a given name followed by a surname. This is not necessarily the case in southern India, where people may adopt a surname out of necessity when migrating or travelling abroad.
The largest variety of surnames is found in the states of Maharashtra and Goa, which together number more than those of the rest of India combined. Here surnames are placed last, the order being: the given name, followed by the father's name, followed by the family name. The majority of surnames are derived from the place where the family lived, with the 'kar' (Marathi and Konkani) suffix, for example, Mumbaikar, Punekar, Aurangabadkar, Tendulkar, Parrikar, Mangeshkar, Mahendrakar. Another common variety found in Maharashtra and Goa are the ones ending in 'e'. These are usually more archaic than the 'Kar's and usually denote medieval clans or professions like Rane, Salunkhe, Gupte, Bhonsle, Ranadive, Rahane, Hazare, Apte, Satpute, Shinde, Sathe, Londhe, Salve, Kale, Gore, Godbole, etc.
In Andhra Pradesh and Telangana, surnames usually denote family names. It is easy to track family history and the caste they belonged to using a surname.
In Odisha and West Bengal, surnames denote the caste to which the bearer belongs. There are also several local surnames like Das, Patnaik, Mohanty, Jena, etc.
In Kerala, surnames denote the caste to which the bearer belongs. There are also several local surnames like Nair, Menon, Panikkar, etc.
It is common in Kerala, Tamil Nadu, and some other parts of South India for the wife to adopt her husband's first name instead of his family name or surname after marriage.
In Rajasthan, the community name and sometimes the gotra or clan name are used as surnames. Usage of community name as surname include: Charan, Jat, Meena, Rajput, etc. Sometimes, the faith name (for example: Jain) can also be used as a surname.
India is a country with numerous distinct cultural and linguistic groups. Thus, Indian surnames, where formalized, fall into seven general types.
Surnames are based on:
The convention is to write the first name followed by middle names and surname. It is common to use the father's first name as the middle name or last name even though it is not universal. In some Indian states like Maharashtra, official documents list the family name first, followed by a comma and the given names.
In modern times, in urban areas at least, this practice is not universal and some wives either suffix their husband's surname or do not alter their surnames at all. In some rural areas, particularly in North India, wives may also take a new first name after their nuptials. Children inherit their surnames from their father.
Jains generally use Jain, Shah, Firodia, Singhal or Gupta as their last names. Sikhs generally use the words Singh ("lion") and Kaur ("princess") as surnames added to the otherwise unisex first names of men and women, respectively. It is also common to use a different surname after Singh in which case Singh or Kaur are used as middle names (Montek Singh Ahluwalia, Surinder Kaur Badal). The tenth Guru of Sikhism ordered (Hukamnama) that any man who considered himself a Sikh must use Singh in his name and any woman who considered herself a Sikh must use Kaur in her name. Other middle names or honorifics that are sometimes used as surnames include Kumar, Dev, Lal, and Chand.
The modern-day spellings of names originated when families translated their surnames to English, with no standardization across the country. Variations are regional, based on how the name was translated from the local language to English in the 18th, 19th and 20th centuries during British rule. Therefore, it is understood in the local traditions that Baranwal and Barnwal represent the same name derived from Uttar Pradesh and Punjab respectively. Similarly, Tagore derives from Bengal while Thakur is from Hindi-speaking areas. The officially recorded spellings tended to become the standard for that family. In modern times, some states have attempted standardization, particularly where the surnames were corrupted because of the early British insistence on shortening them for convenience. Thus Bandopadhyay became Banerji, Mukhopadhyay became Mukherji, Chattopadhyay became Chatterji, etc. This coupled with various other spelling variations created several surnames based on the original surnames. The West Bengal Government now insists on re-converting all the variations to their original form when the child is enrolled in school.
Some parts of Sri Lanka, Thailand, Nepal, Myanmar, and Indonesia have similar patronymic customs to those of India.
Nepali surnames are divided into three origins: Indo-Aryan languages, Tibeto-Burman languages and indigenous origins. Surnames of the Khas community contain toponyms such as Ghimire, Dahal, Pokharel and Sapkota, from their respective villages, and occupational names such as Adhikari, Bhandari, Karki and Thapa. Many Khas surnames include suffixes such as -wal and -al, as in Katwal, Silwal, Khanal, Khulal and Rijal. Kshatriya titles such as Bista, Kunwar, Rana, Rawal, Shah, Thakuri and Chand were taken as surnames by various Kshetris and Thakuris. Khatri Kshetris share surnames with mainstream Pahari Bahuns. Other popular Chhetri surnames include Basnyat, Bogati, Budhathoki, Khadka, Mahat and Raut. Similarly, Brahmin surnames such as Acharya, Joshi, Pandit, Sharma and Upadhyay were taken by Pahari Bahuns. Some Bahuns bear distinct surnames such as Kattel, while sharing other surnames with mainstream Bahuns. Other Bahun surnames include Aryal, Bhattarai, Banskota, Chaulagain, Devkota, Dhakal, Gyawali, Koirala, Mainali, Pandey, Panta, Paudel, Regmi, Subedi, Lamsal, and Dhungel. Khas Dalit surnames include Kami, Bishwakarma or B.K., Damai, Mijar, Pariyar and Sarki. Newar groups of multiethnic background bear both Indo-Aryan surnames (like Shrestha and Pradhan) and indigenous surnames like Maharjan and Dangol. Magars bear surnames derived from Khas peoples, such as Baral, Budhathoki, Lamichhane and Thapa, as well as surnames of indigenous origin such as Gharti, Pun and Pulami. Other Himalayan castes bear Tibeto-Burman surnames like Gurung, Tamang, Thakali and Sherpa. Various Kiranti ethnic groups carry many Indo-Aryan surnames of Khas origin, which were awarded by the government of the Khas peoples; these surnames, such as Rai and Subba, depended upon the job and position held by their bearers. The Terai community has surnames of both Indo-Aryan and indigenous origin. Terai Brahmins bear surnames such as Jha. Nepalese Muslims bear Islamic surnames such as Ali, Ansari, Begum, Khan, Mohammad and Pathan. Other common Terai surnames include Kayastha.
Pakistani surnames are basically divided into three categories: Arab naming conventions, tribal or caste names, and ancestral names.
Family names indicating Arab ancestry, e.g. Shaikh, Siddiqui, Abbasi, Syed, Zaidi, Khawaja, Naqvi, Farooqi, Osmani, Alavi, Hassani, and Husseini.
People claiming Afghan ancestry include those with family names like Durrani, Gardezi, Suri, Yousafzai, Afridi, Mullagori, Mohmand, Khattak, Wazir, Mehsud, Niazi.
Family names indicating Turkic heritage include Mughal, Baig or Beg, Pasha, Barlas, and Seljuki. Family names indicating Turkish or Kurdish ancestry include Dogar.
People claiming Indic ancestry include those with family names Barelwi, Lakhnavi, Delhvi, Godharvi, Bilgrami, and Rajput. A large number of Muslim Rajputs have retained their surnames such as Chauhan, Rathore, Parmar, and Janjua.
People claiming Iranian ancestry include those with family names Agha, Bukhari, Firdausi, Ghazali, Gilani, Hamadani, Isfahani, Kashani, Kermani, Khorasani, Farooqui, Mir, Mirza, Montazeri, Nishapuri, Noorani, Kayani, Qizilbash, Saadi, Sabzvari, Shirazi, Sistani, Suhrawardi, Yazdani, Zahedi, and Zand.
Tribal names include Abro Afaqi, Afridi, Cheema, Khogyani (Khakwani), Amini, Ansari, Ashrafkhel, Awan, Bajwa, Baloch, Barakzai, Baranzai, Bhatti, Bhutto, Ranjha, Bijarani, Bizenjo, Brohi, Khetran, Bugti, Butt, Farooqui, Gabol, Ghaznavi, Ghilzai, Gichki, Gujjar, Jamali, Jamote, Janjua, Jatoi, Jutt Joyo, Junejo, Karmazkhel, Kayani, Khar, Khattak, Khuhro, Lakhani, Leghari, Lodhi, Magsi, Malik, Mandokhel, Mayo, Marwat, Mengal, Mughal, Palijo, Paracha, Panhwar, Phul, Popalzai, Qureshi & qusmani, Rabbani, Raisani, Rakhshani, Sahi, Swati, Soomro, Sulaimankhel, Talpur, Talwar, Thebo, Yousafzai, and Zamani.
In Pakistan, the official paperwork format regarding personal identity is as follows:
So and so, son of so and so, of such and such tribe or clan and religion and resident of such and such place. For example, Amir Khan s/o Fakeer Khan, tribe Mughal Kayani or Chauhan Rajput, Follower of religion Islam, resident of Village Anywhere, Tehsil Anywhere, District.
In modern Chinese, Japanese, Korean, Taiwanese, and Vietnamese, the family name is placed before the given names, although this order may not be observed in translation. Generally speaking, Chinese, Korean, and Vietnamese names do not alter their order in English (Mao Zedong, Kim Jong-il, Ho Chi Minh) and Japanese names do (Kenzaburō Ōe). However, numerous exceptions exist, particularly for people born in English-speaking countries such as Yo-Yo Ma. This is sometimes systematized: in all Olympic events, the athletes of the People's Republic of China list their names in the Chinese ordering, while Chinese athletes representing other countries, such as the United States, use the Western ordering. (In Vietnam, the system is further complicated by the cultural tradition of addressing people by their given name, usually with an honorific. For example, Phan Văn Khải is properly addressed as Mr. Khải, even though Phan is his family name.)
Chinese family names have many types of origins, some claiming dates as early as the legendary Yellow Emperor (2nd millennium BC):
In history, some changed their surnames due to a naming taboo (from Zhuang 莊 to Yan 嚴 during the era of Liu Zhuang 劉莊) or when the imperial surname was awarded by the Emperor (the imperial surname Li was often bestowed on senior officers during the Tang dynasty).
In modern times, some Chinese adopt an English name in addition to their native given names: e.g., 李柱銘 (Li Zhùmíng) adopted the English name Martin Lee. Particularly in Hong Kong and Singapore, the convention is to write both names together: Martin Lee Chu-ming. Owing to the confusion this can cause, a further convention is sometimes observed of capitalizing the surname: Martin LEE Chu-ming. Sometimes, however, the Chinese given name is forced into the Western system as a middle name ("Martin Chu-ming Lee"); less often, the English given name is forced into the Chinese system ("Lee Chu-ming Martin").
In Japan, the civil law requires a common surname for every married couple, except in cases of international marriage. In most cases, women surrender their surnames upon marriage, and use the surnames of their husbands. However, a convention that a man uses his wife's family name if the wife is an only child is sometimes observed. A similar tradition called ru zhui (入贅) is common among Chinese when the bride's family is wealthy and has no son but wants the heir to pass on their assets under the same family name. The Chinese character zhui (贅) carries a money radical (貝), which implies that this tradition was originally based on financial reasons. All their offspring carry the mother's family name. If the groom is the first-born with an obligation to carry his own ancestor's name, a compromise may be reached in that the first male child carries the mother's family name while subsequent offspring carry the father's family name. The tradition is still in use in many Chinese communities outside mainland China, but largely disused in China because of social changes from communism. Due to the economic reform in the past decade, accumulation and inheritance of personal wealth have made a comeback in Chinese society. It is unknown whether this financially motivated tradition will also return to mainland China.
In Chinese, Korean, Vietnamese and Singaporean cultures, women keep their own surnames, while the family as a whole is referred to by the surnames of the husbands.
In Hong Kong, some women would be known to the public with the surnames of their husbands preceding their own surnames, such as Anson Chan Fang On Sang. Anson is an English given name, On Sang is the given name in Chinese, Chan is the surname of Anson's husband, and Fang is her own surname. A name change on legal documents is not necessary. In Hong Kong's English publications, her family names would have been presented in small cap letters to resolve ambiguity, e.g. Anson CHAN FANG On Sang in full or simply Anson Chan in short form.
In Macau, some people have their names in Portuguese spelt with some Portuguese style, such as Carlos do Rosario Tchiang.
Chinese women in Canada, especially Hongkongers in Toronto, would preserve their maiden names before the surnames of their husbands when written in English, for instance, Rosa Chan Leung, where Chan is the maiden name, and Leung is the surname of the husband.
In Chinese, Korean, and Vietnamese, surnames are predominantly monosyllabic (written with one character), though a small number of common disyllabic (or written with two characters) surnames exists (e.g. the Chinese name Ouyang, the Korean name Jegal and the Vietnamese name Phan-Tran).
Many Chinese, Korean, and Vietnamese surnames are of the same origin, but simply pronounced differently and even transliterated differently overseas in Western nations. For example, the common Chinese surnames Chen, Chan, Chin, Cheng and Tan, the Korean surname Jin, as well as the Vietnamese surname Trần are often all the same exact character 陳. The common Korean surname Kim is also the common Chinese surname Jin, and written 金. The common Mandarin surnames Lin or Lim (林) is also one and the same as the common Cantonese or Vietnamese surname Lam and Korean family name Lim (written/pronounced as Im in South Korea). There are people with the surname of Hayashi (林) in Japan too. The common Chinese surname 李, translated to English as Lee, is, in Chinese, the same character but transliterated as Li according to pinyin convention. Lee is also a common surname of Koreans, and the character is identical.
Around 40% of all Vietnamese have the surname Nguyen. This may be because, when a new dynasty took power in Vietnam, it was customary to adopt that dynasty's surname. The last dynasty in Vietnam was the Nguyen dynasty, so, as a result, many people have this surname.
In Burundi and Rwanda, most, if not all, surnames have God in them, for example, Hakizimana (meaning God cures), Nshimirimana (I thank God) or Havyarimana/Habyarimana (God gives birth). But not all surnames end with the suffix -imana. Irakoze is one of these (technically meaning Thank God, though it is hard to translate it correctly in English or probably any other language). Surnames are often different among immediate family members, as parents frequently choose unique surnames for each child, and women keep their maiden names when married. Surnames are placed before given names and frequently written in capital letters, e.g. HAKIZIMANA Jacques.
In several Northeast Bantu languages such as Kamba, Taita and Kikuyu in Kenya the word "wa" (meaning "of") is inserted before the surname, for instance, Mugo wa Kibiru (Kikuyu) and Mekatilili wa Menza (Mijikenda).
The patronymic custom in most of the Horn of Africa gives children the father's first name as their surname. The family then gives the child its first name. Middle names are unknown. So, for example, a person's name might be Bereket Mekonen. In this case, Bereket is the first name and Mekonen is the surname, and also the first name of the father.
The paternal grandfather's name is often used if there is a requirement to identify a person further, for example, in school registration. Also, different cultures and tribes use the father's or grandfather's given name as the family's name. For example, some Oromos use Warra Ali to mean families of Ali, where Ali, is either the householder, a father or grandfather.
In Ethiopia, the customs surrounding the bestowal and use of family names is as varied and complex as the cultures to be found there. There are so many cultures, nations or tribes, that currently there can be no one formula whereby to demonstrate a clear pattern of Ethiopian family names. In general, however, Ethiopians use their father's name as a surname in most instances where identification is necessary, sometimes employing both father's and grandfather's names together where exigency dictates.
Many people in Eritrea have Italian surnames, but all of these are borne by Eritreans of Italian descent.
Libya's names and surnames have a strong Islamic/Arab nature, with some Turkish influence from Ottoman Empire rule of nearly 400 years. Amazigh, Touareg and other minorities also have their own name/surname traditions. Due to its location as a trade route and the different cultures that had their impact on Libya throughout history, one can find names that could have originated in neighboring countries, including clan names from the Arabian Peninsula, and Turkish names derived from military rank or status (Basha, Agha).
A full Albanian name consists of a given name (Albanian: emër), patronymic (Albanian: atësi) and family name (Albanian: mbiemër), for example Agron Mark Gjoni. The patronymic is simply the given name of the individual's father, with no suffix added. The family name is typically a noun in the definite form or at the very least ends with a vowel or -j (an approximant close to -i). Many traditional last names end with -aj (previously -anj), which is more prevalent in certain regions of Albania and Kosovo. For clarification, the "family name" is typically the father's father's name (grandfather).
Proper names in Albanian are fully declinable like any noun (e.g. Marinelda, genitive case i/e Marineldës "of Marinelda").
Armenian surnames almost always have the ending (Armenian: յան) transliterated into English as -yan or -ian (spelled -ean (եան) in Western Armenian and pre-Soviet Eastern Armenian, of Ancient Armenian or Iranian origin, presumably meaning "son of"), though names with that ending can also be found among Persians and a few other nationalities. Armenian surnames can derive from a geographic location, profession, noble rank, personal characteristic or personal name of an ancestor. Armenians in the diaspora sometimes adapt their surnames to help assimilation. In Russia, many have changed -yan to -ov (or -ova for women). In Turkey, many have changed the ending to -oğlu (also meaning "son of"). In English and French-speaking countries, many have shortened their name by removing the ending (for example Charles Aznavour). In ancient Armenia, many noble names ended with the locative -t'si (example, Khorenatsi) or -uni (Bagratuni). Several modern Armenian names also have a Turkish suffix which appears before -ian/-yan: -lian denotes a placename; -djian denotes a profession. Some Western Armenian names have a particle Der, while their Eastern counterparts have Ter. This particle indicates an ancestor who was a priest (Armenian priests can choose to marry or remain celibate, but married priests cannot become a bishop). Thus someone named Der Bedrosian (Western) or Ter Petrosian (Eastern) is a descendant of an Armenian priest. The convention is still in use today: the children of a priest named Hagop Sarkisian would be called Der Sarkisian. Other examples of Armenian surnames: Adonts, Sakunts, Vardanyants, Rshtuni.
It was common for Azerbaijani names to have three components: given name, father's name and family name. However, in recent years it has become increasingly popular to use only two components: first name and surname.
While under Soviet rule, it was mandatory for Azerbaijanis to register their names, but most people did not have surnames. This was normally circumvented by taking the individual's father's name and adding a Russian suffix such as "-yev"/"-ov" for men and "-yeva"/"-ova" for women (meaning "born of"). For example, from "Ali" we get "Aliyev" and "Aliyeva", and from "Husein" we get "Huseinov" and "Huseinova". However, as the Soviet era came to an end, many Azerbaijanis dropped these endings in an attempt to derussify. Some chose to replace them with traditional suffixes such as "-zade" (Persian for "born of"), "-li/-lu" (Turkish for "with" or "belonging to") or "-oglu/-oghlu" (Turkish for "son of"). Some chose to drop the suffixes entirely.
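Purely as an illustration of the Soviet-era pattern just described, the short Python sketch below derives such a surname from a father's given name. The vowel-based choice between -yev and -ov is a simplified heuristic consistent with the two examples above, not an authoritative orthographic rule, and the function name is invented for this example.

    # Illustrative sketch only: derives a Soviet-era style Azerbaijani surname
    # from a father's given name. The vowel test is a simplification, not the
    # full orthographic rule.
    def soviet_era_surname(father_name: str, female: bool = False) -> str:
        suffix = "yev" if father_name[-1].lower() in "aeiou" else "ov"
        if female:
            suffix += "a"   # feminine form, e.g. Aliyeva, Huseinova
        return father_name + suffix

    print(soviet_era_surname("Ali"))                  # Aliyev
    print(soviet_era_surname("Ali", female=True))     # Aliyeva
    print(soviet_era_surname("Husein"))               # Huseinov
    print(soviet_era_surname("Husein", female=True))  # Huseinova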
Most eastern Georgian surnames end with the suffix "-shvili" (e.g. Kartveli'shvili), Georgian for "child" or "offspring". Western Georgian surnames most commonly have the suffix "-dze" (e.g. Laba'dze), Georgian for "son". Megrelian surnames usually end in "-ia", "-ua" or "-ava". Other location-specific endings exist: in Svaneti "-iani", meaning "belonging to" or "hailing from", is common. In the eastern Georgian highlands common endings are "-uri" and "-uli". Some noble family names end in "-eli", meaning "of (someplace)". In Georgian, the surname is not normally used as the polite form of address; instead, the given name is used together with a title. For instance, Nikoloz Kartvelishvili is politely addressed as bat'ono Nikoloz, "My Lord Nikoloz".
Greek surnames are most commonly patronymics. Surnames based on occupation, personal characteristic, ethnic background or location/origin also occur; they are sometimes supplemented by nicknames.
Commonly, Greek male surnames end in -s, which is the common ending for Greek masculine proper nouns in the nominative case. Exceptionally, some end in -ou, indicating the genitive case of this proper noun for patronymic reasons.
Although surnames are static today, dynamic and changing patronym usage survives in middle names in Greece where the genitive of the father's first name is commonly the middle name.
Because of their codification in the Modern Greek state, surnames have Katharevousa forms even though Katharevousa is no longer the official standard. Thus, the Ancient Greek name Eleutherios forms the Modern Greek proper name Lefteris, and former vernacular practice (prefixing the surname to the proper name) was to call John Eleutherios Leftero-giannis.
Modern practice is to call the same person Giannis Eleftheriou: the proper name is vernacular (and not Ioannis), but the surname is an archaic genitive. However, children are almost always baptised with the archaic form of the name so in official matters, the child will be referred to as Ioannis Eleftheriou and not Giannis Eleftheriou.
Female surnames are most often in the Katharevousa genitive case of a male name. This is an innovation of the Modern Greek state; Byzantine practice was to form a feminine counterpart of the male surname (e.g. masculine Palaiologos, Byzantine feminine Palaiologina, Modern feminine Palaiologou).
In the past, women would change their surname when married to that of their husband (again in the genitive case) signifying the transfer of "dependence" from the father to the husband. In earlier Modern Greek society, women were named with -aina as a feminine suffix on the husband's first name: "Giorgaina", "Mrs George", "Wife of George". Nowadays, a woman's legal surname does not change upon marriage, though she can use the husband's surname socially. Children usually receive the paternal surname, though in rare cases, if the bride and groom have agreed before the marriage, the children can receive the maternal surname.
Some surnames are prefixed with Papa-, indicating ancestry from a priest, e.g. Papageorgiou, the "son of a priest named George". Others, like Archi- and Mastro- signify "boss" and "tradesman" respectively.
Prefixes such as Konto-, Makro-, and Chondro- describe body characteristics, such as "short", "tall/long" and "fat". Gero- and Palaio- signify "old" or "wise".
Other prefixes include Hadji- (Χαντζή- or Χαντζι-), an honorific deriving from the Arabic Hadj or pilgrimage, indicating that the person had made a pilgrimage (in the case of Christians, to Jerusalem), and Kara-, attributed to the Turkish word for "black" and deriving from the Ottoman Empire era. The Turkish suffix -oglou (derived from a patronym, -oğlu in Turkish) can also be found. Although they are of course more common among Greece's Muslim minority, they can still be found among the Christian majority, often Greeks or Karamanlides who were pressured to leave Turkey after the Turkish Republic was founded (since Turkish surnames only date to the founding of the Republic, when Atatürk made them compulsory).
Arvanitic surnames also exist; an example is Tzanavaras or Tzavaras, from the Arvanitic word çanavar or çavar meaning "brave" (pallikari in Greek).
Most Greek patronymic suffixes are diminutives, which vary by region. The most common Hellenic patronymic suffixes are:
Others, less common, are:
Either the surname or the given name may come first in different contexts; in newspapers and in informal uses, the order is given name + surname, while in official documents and forums (tax forms, registrations, military service, school forms), the surname is often listed or said first.
In Hungarian, like Asian languages but unlike most other European ones (see French and German above for exceptions), the family name is placed before the given names. This usage does not apply to non-Hungarian names, for example "Tony Blair" will remain "Tony Blair" when written in Hungarian texts.
Names of Hungarian individuals, however, appear in Western order in English writing.
Indonesians comprise more than 1,300 ethnic groups. Not all of these groups traditionally have surnames, and in populous Java surnames are not common at all – regardless of which of the six officially recognized religions the name carrier professes. For instance, a Christian Javanese woman named Agnes Mega Rosalin has three forenames and no surname. "Agnes" is her Christian name, but "Mega" can be the first name she uses and the name by which she is addressed. "Rosalin" is only a middle name. Nonetheless, Indonesians are well aware of the custom of family names, known as marga or fam, and such names have become a specific kind of identifier. People can tell what a person's heritage is by his or her family or clan name.
Javanese people are the majority in Indonesia, and most do not have any surname. There are some individuals, especially the old generation, who have only one name, such as "Suharto" and "Sukarno". These are not only common with the Javanese but also with other Indonesian ethnic groups who do not have the tradition of surnames. If, however, they are Muslims, they might opt to follow Arabic naming customs, but Indonesian Muslims do not automatically follow Arabic name traditions.
In conjunction with migration to Europe or America, Indonesians without surnames often adopt a surname based on a family name or middle name. The visa application forms used by many Western countries have a field for the last name which cannot be left blank by the applicant.
Most Chinese Indonesians replaced their Chinese surnames with Indonesian-sounding surnames due to political pressure from 1965 to 1998 under Suharto's regime.
Persian last names may be:
Suffixes include: -an (plural suffix), -i ("of"), -zad/-zadeh ("born of"), -pur ("son of"), -nejad ("from the race of"), -nia ("descendant of"), -mand ("having or pertaining to"), -vand ("succeeding"), -far ("holder of"), -doost ("-phile"), -khah ("seeking of"), -manesh ("having the manner of"), -ian/-yan, -gar and -chi ("whose vocation pertains").
An example is names of geographical locations plus "-i": Irani ("Iranian"), Gilani ("of Gilan province"), Tabrizi ("of the city of Tabriz").
Another example is last names that indicate relation to religious groups, such as Zoroastrian (e.g. Goshtaspi, Namiranian, Azargoshasp), Jewish (e.g. Yaghubian [Jacobean], Hayyem [Life], Shaul [Saul]) or Muslim (e.g. Alavi, Islamnia, Montazeri).
Last names are arbitrary; their holder need not have any relation to their meaning.
Traditionally in Iran, the wife does not take her husband's surname, although children take the surname of their father. Individual reactions notwithstanding, it is possible to call a married woman by her husband's surname. This is facilitated by the fact that the English words "Mrs.", "Miss", "Woman", "Lady" and "Wife (of)" in a polite context are all translated into "خانم" (Khaanom). Context, however, is important: "خانم گلدوست" (Khaanom Goldust) may, for instance, refer to the daughter of Mr. Goldust instead of his wife. When most Iranian surnames are used together with a given name, the given name takes the suffix -e or -ie ("of"), as in Hasan-e Roshan (Hasan is the given name and Roshan the surname), meaning "Hasan of Roshan", or Mosa-ie Saiidi ("Mosa of Saiidi"). The -e does not belong to the surname, and it is difficult to say whether it is part of the surname at all.
Italy has around 350,000 surnames. Most of them derive from the following sources: patronym or ilk (e.g. Francesco di Marco, "Francis, son of Mark" or Eduardo de Filippo, "Edward belonging to the family of Philip"), occupation (e.g. Enzo Ferrari, "Heinz (of the) Blacksmiths"), personal characteristic (e.g. nicknames or pet names like Dario Forte, "Darius the Strong"), geographic origin (e.g. Elisabetta Romano, "Elisabeth from Rome") and objects (e.g. Carlo Sacchi, "Charles Bags"). The two most common Italian family names, Russo and Rossi, mean the same thing, "Red", possibly referring to the hair color.
Both Western and Eastern orders are used for full names: the given name usually comes first, but the family name may come first in administrative settings; lists are usually indexed according to the last name.
Since 1975, women have kept their own surname when married, but until recently (2000) they could add the surname of the husband according to the civil code, although this was a very seldom-used practice. In recent years, the husband's surname cannot be used in any official situation. In some unofficial situations, sometimes both surnames are written (the proper one first), sometimes separated by in (e.g. Giuseppina Mauri in Crivelli) or, in the case of widows, ved. (vedova).
Latvian male surnames usually end in -s, -š or -is, whereas the female versions of the same names end in -a, -e or -s, in both unmarried and married women.
Before the emancipation from serfdom (1817 in Courland, 1819 in Vidzeme, 1861 in Latgale) only noblemen, free craftsmen or people living in towns had surnames. Therefore, the oldest Latvian surnames originate from German or Low German, reflecting the dominance of German as an official language in Latvia till the 19th century. Examples: Meijers/Meijere (German: Meier, farm administrator; akin to Mayor), Millers/Millere (German: Müller, miller), Šmits/Šmite (German: Schmidt, smith), Šulcs/Šulce, Šulca (German: Schultz or Schulz, constable), Ulmanis (German: Ullmann, a person from Ulm), Godmanis (a God-man), Pētersons (son of Peter). Some Latvian surnames, mainly from Latgale are of Polish or Belarusian origin by changing the final -ski/-cki to -skis/-ckis, -czyk to -čiks or -vich/-wicz to -vičs, such as Sokolovkis/Sokolovska, Baldunčiks/Baldunčika or Ratkevičs/Ratkeviča.
Most Latvian peasants received their surnames in 1826 (in Vidzeme), in 1835 (in Courland), and in 1866 (in Latgale). Diminutives were the most common form of family names. Examples: Kalniņš/Kalniņa (small hill), Bērziņš/Bērziņa (small birch).
Nowadays many Latvians of Slavic descent have surnames of Russian, Belarusian, or Ukrainian origin, for example Volkovs/Volkova or Antoņenko.
Lithuanian names follow the Baltic distinction between male and female suffixes of names, although the details differ. Male surnames usually end in -a, -as, -aitis, -ys, -ius or -us, whereas the female versions change these suffixes to -aitė, -ytė, -iūtė and -utė respectively (if unmarried), -ienė (if married) or -ė (not indicating marital status). Some Lithuanians have names of Polish or other Slavic origin, which are made to conform to Lithuanian by changing the final -ski to -skas, such as Sadauskas, with the female version being -skaitė (if unmarried), -skienė (if married) or -skė (not indicating marital status).
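As a rough illustration of the suffix correspondences described above, the following Python sketch derives female forms for the clearest cases (-as, -ys, -ius, -us). It is not a complete or authoritative rule set, and the function and variable names are invented for this example.

    # Illustrative sketch only: derives female forms of Lithuanian surnames for
    # the clearest suffix correspondences described above.
    FEMALE_UNMARRIED = {"as": "aitė", "ys": "ytė", "ius": "iūtė", "us": "utė"}

    def female_surname(male_surname: str, married: bool = False) -> str:
        # Try the longest male suffix first so "-ius" is matched before "-us".
        for male_suffix in sorted(FEMALE_UNMARRIED, key=len, reverse=True):
            if male_surname.endswith(male_suffix):
                stem = male_surname[: -len(male_suffix)]
                return stem + ("ienė" if married else FEMALE_UNMARRIED[male_suffix])
        return male_surname  # suffix not covered by this simplified sketch

    print(female_surname("Sadauskas"))                # Sadauskaitė
    print(female_surname("Sadauskas", married=True))  # Sadauskienė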
Different cultures have their impact on the demographics of the Maltese islands, and this is evident in the various surnames Maltese citizens bear nowadays. There are very few Maltese surnames per se: the few that originate from Maltese places of origin include Chircop (Kirkop), Lia (Lija), Balzan (Balzan), Valletta (Valletta), and Sciberras (Xebb ir-Ras Hill, on which Valletta was built). The village of Munxar, Gozo is characterised by the majority of its population having one of two surnames, either Curmi or de Brincat. In Gozo, the surnames Bajada and Farrugia are also common.
Sicilian and Italian surnames are common due to Sicily's close vicinity to Malta. Sicilians were the first to colonise the Maltese islands. Common examples include Azzopardi, Bonello, Cauchi, Farrugia, Gauci, Rizzo, Schembri, Tabone, Vassallo, Vella.
Surnames of French origin are also found; common examples include Depuis, Montfort, Monsenuier, Tafel.
English surnames exist for a number of reasons, but mainly due to migration as well as Malta forming a part of the British Empire in the 19th century and most of the 20th. Common examples include Bone, Harding, Atkins, Mattocks, Smith, Jones, Woods, Turner.
Arabic surnames occur in part due to the early presence of the Arabs in Malta. Common examples include Sammut, Camilleri, Zammit, and Xuereb.
Common surnames of Spanish origin include Abela, Galdes, Herrera, and Guzman.
Surnames from foreign countries from the Middle Ages include German, such as von Brockdorff, Hyzler, and Schranz.
Many of the earliest Maltese surnames are Sicilian Greek, e.g. Cilia, Calleia, Brincat, Cauchi. Much less common are recent surnames from Greece; examples include Dacoutros, and Trakosopoulos
The original Jewish community of Malta and Gozo has left no trace of its presence on the islands since it was expelled in January 1493.
In line with the practice in other Christian, European states, women generally assume their husband's surname after legal marriage, and this is passed on to any children the couple may bear. Some women opt to retain their old name, for professional/personal reasons, or combine their surname with that of their husband.
Mongolians do not use surnames in the way that most Westerners, Chinese or Japanese do. Since the socialist period, patronymics – then called ovog, now called etsgiin ner – are used instead of a surname. If the father's name is unknown, a matronymic is used. The patro- or matronymic is written before the given name. Therefore, if a man with given name Tsakhia has a son, and gives the son the name Elbegdorj, the son's full name is Tsakhia Elbegdorj. Very frequently, the patronymic is given in genitive case, i.e. Tsakhiagiin Elbegdorj. However, the patronymic is rather insignificant in everyday use and usually just given as an initial – Ts. Elbegdorj. People are normally just referred to and addressed by their given name (Elbegdorj guai – Mr. Elbegdorj), and if two people share a common given name, they are usually just kept apart by their initials, not by the full patronymic.
Since 2000, Mongolians have been officially using clan names – ovog, the same word that had been used for the patronymics before – on their IDs. Many people chose the names of the ancient clans and tribes such as Borjigin, Besud, Jalair, etc. Also many extended families chose the names of the native places of their ancestors. Some chose the names of their most ancient known ancestor. Some just decided to pass their own given names (or modifications of their given names) to their descendants as clan names. Some chose other attributes of their lives as surnames. Gürragchaa chose Sansar (Cosmos). Clan names precede the patronymics and given names, e.g. Besud Tsakhiagiin Elbegdorj. These clan names have a significance and are included in Mongolian passports.
People from Myanmar, or Burma, have no family names; to some, they are the only known Asian people with no family names at all. Some people from Myanmar or Burma who are familiar with European or American cultures have begun to give their younger generations a family name adopted from notable ancestors. For example, Ms. Aung San Suu Kyi is the daughter of the late Father of Independence General Aung San; Hayma Ne Win is the daughter of the famous actor Kawleikgyin Ne Win; etc.
Until the middle of the 19th century, there was no standardization of surnames in the Philippines. There were native Filipinos without surnames, others whose surnames deliberately did not match that of their families, as well as those who took certain surnames simply because they had a certain prestige, usually ones related to the Roman Catholic religion, such as de los Santos ("of the saints") and de la Cruz ("of the cross"), or of local nobility such as of rajahs or datus.
On 21 November 1849, the Spanish Governor-General of the Philippines, Narciso Clavería y Zaldúa, decreed an end to these arbitrary practices, the systematic distribution of surnames to Filipinos without prior surnames and the universal implementation of the Spanish naming system. This produced the Catálogo alfabético de apellidos ("Alphabetical Catalogue of Surnames"), which listed permitted surnames with origins in Spanish, Filipino, and Hispanized Chinese words, names, and numbers. Thus, many Spanish-sounding Filipino surnames are not surnames common to the rest of the Spanish-speaking world. The book contained many words coming from Spanish and the Philippine languages such as Tagalog, as well as many Basque and Catalan surnames.
The colonial authorities implemented this decree because many Christianized Filipinos assumed religious names. There soon were too many people surnamed de los Santos ("of the saints"), de la Cruz ("of the cross"), del Rosario ("of the Rosary") etc., which made it difficult for the Spanish colonists to control the Filipino people and, most importantly, to collect taxes. These extremely common names were also banned by the decree unless the name had been used by a family for at least four generations. This Spanish naming custom also countered the native custom before the Spanish period, wherein siblings assumed different surnames. Clavería's decree was enforced to different degrees in different parts of the colony.
Because of this implementation of Spanish naming customs, of the arrangement "given name + paternal surname + maternal surname", in the Philippines, a Spanish surname does not necessarily denote Spanish ancestry.
In practice, the application of this decree varied from municipality to municipality. Most municipalities received surnames starting with only one initial letter; in others, this was not well enforced. For example, the majority of residents of the island of Banton in the province of Romblon have surnames starting with F such as Fabicon, Fallarme, Fadrilan, and Ferran. Other examples are most cities and towns in Albay, Catanduanes, Ilocos Sur and Marinduque, where the majority of their residents have surnames beginning with a particular letter.
Thus, although perhaps a majority of Filipinos have Spanish surnames, such a surname does not indicate Spanish ancestry. In addition, most Filipinos currently do not use Spanish accented letters in their Spanish derived names. The lack of accents in Filipino Spanish has been attributed to the lack of accents on the predominantly American typewriters after the United States gained control of the Philippines.
The vast majority of Filipinos follow a naming system in the American order (i.e. given name + middle name + surname), which is the reverse of the Spanish naming order (i.e. given name + paternal surname + maternal surname). Children take the mother's surname as their middle name, followed by their father's as their surname; for example, a son of Juan de la Cruz and his wife María Agbayani may be David Agbayani de la Cruz. Women usually take the surnames of their husband upon marriage, and consequently lose their maiden middle names; so upon her marriage to David de la Cruz, the full name of Laura Yuchengco Macaraeg would become Laura Macaraeg de la Cruz. Their maiden last names automatically become their middle names upon marriage.
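The composition rule just described can be illustrated with a minimal sketch: given name, the mother's maiden surname as middle name, and the father's surname, with the maiden surname becoming the middle name upon marriage. The data structure and function names below are invented for illustration and are not part of any official convention.

    # Illustrative sketch only: models the Filipino naming pattern described above.
    from dataclasses import dataclass

    @dataclass
    class Person:
        given: str
        middle: str    # typically the mother's maiden surname
        surname: str   # typically the father's surname

    def child_name(child_given: str, father: Person, mother_maiden_surname: str) -> Person:
        # Child: given name + mother's maiden surname (middle) + father's surname.
        return Person(child_given, mother_maiden_surname, father.surname)

    def married_name(wife: Person, husband: Person) -> Person:
        # The wife's maiden surname becomes her middle name; she adopts the husband's surname.
        return Person(wife.given, wife.surname, husband.surname)

    juan = Person("Juan", "", "de la Cruz")
    david = child_name("David", juan, "Agbayani")   # David Agbayani de la Cruz
    laura = Person("Laura", "Yuchengco", "Macaraeg")
    print(married_name(laura, david))               # Laura Macaraeg de la Cruz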
There are other sources for surnames. Many Filipinos also have Chinese-derived surnames, which in some cases could indicate Chinese ancestry. Many Hispanized Chinese numerals and other Hispanized Chinese words, however, were also among the surnames in the Catálogo alfabético de apellidos. For those whose surname may indicate Chinese ancestry, analysis of the surname may help to pinpoint when those ancestors arrived in the Philippines. A Hispanized Chinese surname such as Cojuangco suggests an 18th-century arrival while a Chinese surname such as Lim suggests a relatively recent immigration. Some Chinese surnames such as Tiu-Laurel are composed of the immigrant Chinese ancestor's surname as well as the name of that ancestor's godparent on receiving Christian baptism.
In the predominantly Muslim areas of the southern Philippines, adoption of surnames was influenced by Islamic religious terms. As a result, surnames among Filipino Muslims are largely Arabic-based, and include such surnames as Hassan and Haradji.
There are also Filipinos who, to this day, have no surnames at all, particularly if they come from indigenous cultural communities.
Prior to the establishment of the Philippines as a US territory during the earlier part of the 20th century, Filipinos usually followed Iberian naming customs. However, upon the promulgation of the Family Code of 1987, Filipinos formalized the adoption of the American system of using their surnames.
A common Filipino name will consist of the given name (usually two given names are given), the initial letter of the mother's maiden name and finally the father's surname (e.g. Lucy Anne C. de Guzman). Also, women are allowed to retain their maiden name or use both their own and their husband's surnames as a double-barreled surname, separated by a dash. This is common in feminist circles or when the woman holds a prominent office (e.g. Gloria Macapagal Arroyo, Miriam Defensor Santiago). In more traditional circles, especially those who belong to the prominent families in the provinces, the custom of the woman being addressed as "Mrs. Husband's Full Name" is still common.
For widows, who chose to marry again, two norms are in existence. For those who were widowed before the Family Code, the full name of the woman remains while the surname of the deceased husband is attached. That is, Maria Andres, who was widowed by Ignacio Dimaculangan will have the name Maria Andres viuda de Dimaculangan. If she chooses to marry again, this name will still continue to exist while the surname of the new husband is attached. Thus, if Maria marries Rene de los Santos, her new name will be Maria Andres viuda de Dimaculangan de los Santos.
However, a new norm is also in existence. The woman may choose to use her husband's surname to be one of her middle names. Thus, Maria Andres viuda de Dimaculangan de los Santos may also be called Maria A.D. de los Santos.
Children will, however, automatically inherit their father's surname if they are considered legitimate. If the child is born out of wedlock, the mother will automatically pass her surname to the child, unless the father gives a written acknowledgment of paternity. The father may also choose to give the child both his parents' surnames if he wishes (that is, Gustavo Paredes, whose parents are Eulogio Paredes and Juliana Angeles, while having Maria Solis as a wife, may name his child Kevin S. Angeles-Paredes).
In some Tagalog regions, the norm of giving patronyms, or in some cases matronyms, is also accepted. These names are of course not official, since family names in the Philippines are inherited. It is not uncommon to refer to someone as Juan anak ni Pablo (John, the son of Paul) or Juan apo ni Teofilo (John, the grandson of Theophilus).
In Romania, like in most of Europe, it is customary for a child to take his father's family name, and a wife to take her husband's last name. However, this is not compulsory – spouses and parents are allowed to choose other options too, as the law is flexible (see Art. 282, Art. 449 Art. 450. of the Civil Code of Romania).
Until the 19th century, the names were primarily of the form "[given name] [father's name] [grandfather's name]". The few exceptions are usually famous people or the nobility (boyars). The name reform introduced around 1850 had the names changed to a western style, most likely imported from France, consisting of a given name followed by a family name.
As such, the name is called prenume (French prénom), while the family name is called nume or, when otherwise ambiguous, nume de familie ("family name"). Although not mandatory, middle names are common.
Historically, when the family name reform was introduced in the mid-19th century, the default was to use a patronym, or a matronym when the father was dead or unknown. A common convention was to append the suffix -escu to the father's name, e.g. Anghelescu ("Anghel's child") and Petrescu ("Petre's child"). (The -escu seems to come from Latin -iscum, thus being cognate with Italian -esco and French -esque.) Another common convention was to append the suffix -eanu to the name of the place of origin, e.g. Munteanu ("from the mountains") and Moldoveanu ("from Moldova"). These uniquely Romanian suffixes strongly identify ancestral nationality.
There are also descriptive family names derived from occupations, nicknames, and events, e.g. Botezatu ("baptised"), Barbu ("bushy bearded"), Prodan ("foster"), Bălan ("blond"), Fieraru ("smith"), Croitoru ("tailor"), "Păcuraru" ("shepherd").
Romanian family names remain the same regardless of the sex of the person.
Although given names appear before family names in most Romanian contexts, official documents invert the order, ostensibly for filing purposes. Correspondingly, Romanians occasionally introduce themselves with their family names first, e.g. a student signing a test paper in school.
Romanians bearing names of non-Romanian origin often adopt Romanianised versions of their ancestral surnames. For example, Jurovschi for Polish Żurowski, or Popovici for Serbian Popović ("son of a priest"), which preserves the original pronunciation of the surname through transliteration. In some cases, these changes were mandated by the state.
In Turkey, following the Surname Law imposed in 1934 in the context of Atatürk's Reforms, every family living in Turkey was given a family name. The surname was generally selected by the elderly people of the family and could be any Turkish word (or a permitted word for families belonging to official minority groups).
Some of the most common family names in Turkey are Yılmaz ('undaunted'), Doğan ('falcon'), Şahin ('hawk'), Yıldırım ('thunderbolt'), Şimşek ('lightning'), Öztürk ('purely Turkish').
Patronymic surnames do not necessarily refer to ancestry, or in most cases cannot be traced back historically. The most usual Turkish patronymic suffix is –oğlu; –ov(a), –yev(a) and –zade also occur in the surnames of Azeri or other Turkic descendants.
Official minorities like Armenians, Greeks, and Jews have surnames in their own mother languages. The Armenian families living in Turkey usually have Armenian surnames and generally have the suffix –yan, –ian, or, using Turkish spelling, -can. Greek descendants usually have Greek surnames which might have Greek suffixes like –ou, –aki(s), –poulos/poulou, –idis/idou, –iadis/iadou or prefixes like papa–. The Sephardic Jews who were expelled from Spain and settled in Turkey in 1492 have both Jewish/Hebrew surnames, and Spanish surnames, usually indicating their native regions, cities or villages back in Spain, like De Leon or Toledano.
{
"paragraph_id": 0,
"text": "Surname conventions and laws vary around the world. This article gives an overview of surnames around the world.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In Argentina, normally only one family name, the father's paternal family name, is used and registered, as in English-speaking countries. However, it is possible to use both the paternal and maternal name. For example, if Ana Laura Melachenko and Emanuel Darío Guerrero had a daughter named Adabel Anahí, her full name could be Adabel Anahí Guerrero Melachenko. Women, however, do not change their family names upon marriage and continue to use their birth family names instead of their husband's family names. However, women have traditionally, and some still choose to use the old Spanish custom of adjoining \"de\" and her husband's surname to her own name. For example, if Paula Segovia marries Felipe Cossia, she might keep her birth name or become Paula Segovia de Cossia or Paula Cossia.",
"title": "Spanish-speaking countries"
},
{
"paragraph_id": 2,
"text": "There are some province offices where a married woman can use only her birth name, and some others where she has to use the complete name, for legal purposes. The Argentine Civilian Code states both uses are correct, but police offices and passports are issued with the complete name. Today most women prefer to maintain their birth name given that \"de\" can be interpreted as meaning they belong to their husbands.",
"title": "Spanish-speaking countries"
},
{
"paragraph_id": 3,
"text": "When Eva Duarte married Juan Domingo Perón, she could be addressed as Eva Duarte de Perón, but the preferred style was Eva Perón, or the familiar and affectionate Evita (little Eva).",
"title": "Spanish-speaking countries"
},
{
"paragraph_id": 4,
"text": "Combined names come from old traditional families and are considered one last name, but are rare. Although Argentina is a Spanish-speaking country, it is also composed of other varied European influences, such as Italian, French, Russian, German, etc.",
"title": "Spanish-speaking countries"
},
{
"paragraph_id": 5,
"text": "Children typically use their fathers' last names only. Some state offices have started to use both last names, in the traditional father then mother order, to reduce the risk of a person being mistaken for others using the same name combinations, e.g. if Eva Duarte and Juan Perón had a child named Juan, he might be misidentified if he were called Juan Perón, but not if he was known as Juan Perón Duarte.",
"title": "Spanish-speaking countries"
},
{
"paragraph_id": 6,
"text": "In early 2008, some new legislation is under consideration that will place the mother's last name ahead the father's last name, as it is done in Portuguese-speaking countries and only optionally in Spain, despite Argentina being a Spanish-speaking country.",
"title": "Spanish-speaking countries"
},
{
"paragraph_id": 7,
"text": "In Chile, marriage has no effect at all on either of the spouses' names, so people keep their birth names for all their life, no matter how many times marital status, theirs or that of their parents, may change. However, in some upper-class circles or in older couples, even though considered to be old-fashioned, it is still customary for a wife to use her husband's name as reference, as in \"Doña María Inés de Ramírez\" (literally Lady María Inés (wife) of Ramírez).",
"title": "Spanish-speaking countries"
},
{
"paragraph_id": 8,
"text": "Children will always bear the surname of the father followed by that of the mother, but if there is no known father and the mother is single, the children can bear either both of her mother's surnames or the mother's first surname followed by any of the surnames of the mother's parents or grandparents, or the child may bear the mother's first surname twice in a row.",
"title": "Spanish-speaking countries"
},
{
"paragraph_id": 9,
"text": "France",
"title": "French-speaking countries"
},
{
"paragraph_id": 10,
"text": "Belgium",
"title": "French-speaking countries"
},
{
"paragraph_id": 11,
"text": "Canadian",
"title": "French-speaking countries"
},
{
"paragraph_id": 12,
"text": "There are about 1,000,000 different family names in German. German family names most often derive from given names, geographical names, occupational designations, bodily attributes or even traits of character. Hyphenations notwithstanding, they mostly consist of a single word; in those rare cases where the family name is linked to the given names by particles such as von or zu, they usually indicate noble ancestry. Not all noble families used these names (see Riedesel), while some farm families, particularly in Westphalia, used the particle von or zu followed by their farm or former farm's name as a family name (see Meyer zu Erpen).",
"title": "German-speaking countries"
},
{
"paragraph_id": 13,
"text": "Family names in German-speaking countries are usually positioned last, after all given names. There are exceptions, however: in parts of Austria and Bavaria and the Alemannic-speaking areas, the family name is regularly put in front of the first given name. Also in many – especially rural – parts of Germany, to emphasize family affiliation there is often an inversion in colloquial use, in which the family name becomes a possessive: Rüters Erich, for example, would be Erich of the Rüter family.",
"title": "German-speaking countries"
},
{
"paragraph_id": 14,
"text": "In Germany today, upon marriage, both partners can choose to keep their birth name or choose either partner's name as the common name. In the latter case the partner whose name was not chosen can keep their birth name hyphenated to the new name (e.g. Schmidt and Meyer choose to marry under the name Meyer. The former Schmidt can choose to be called Meyer, Schmidt-Meyer or Meyer-Schmidt), but any children will only get the single common name. In the case that both partners keep their birth name they must decide on one of the two family names for all their future children. (German name)",
"title": "German-speaking countries"
},
{
"paragraph_id": 15,
"text": "Changing one's family name for reasons other than marriage, divorce or adoption is possible only if the application is approved by the responsible government agency. In Germany, permission will usually be granted if:",
"title": "German-speaking countries"
},
{
"paragraph_id": 16,
"text": "Otherwise, name changes will normally not be granted.",
"title": "German-speaking countries"
},
{
"paragraph_id": 17,
"text": "The Netherlands and Belgium (Flanders)",
"title": "Dutch-speaking countries"
},
{
"paragraph_id": 18,
"text": "In the Nordic countries, family names often, but certainly not always, originate from a patronymic. In Denmark and Norway, the corresponding ending is -sen, as in Karlsen. Names ending with dotter/datter (daughter), such as Olofsdotter, are rare but occurring, and only apply to women. Today, the patronymic names are passed on similarly to family names in other Western countries, and a person's father does not have to be called Karl if he or she has the surname Karlsson. However, in 2006 Denmark reinstated patronymic and matronymic surnames as an option. Thus, parents Karl Larsen and Anna Hansen can name a son Karlsen or Annasen and a daughter Karlsdotter or Annasdotter.",
"title": "Nordic countries"
},
{
"paragraph_id": 19,
"text": "Before the 19th century there was the same system in Scandinavia as in Iceland today. Noble families, however, as a rule adopted a family name, which could refer to a presumed or real forefather (e.g. Earl Birger Magnusson Folkunge ) or to the family's coat of arms (e.g. King Gustav Eriksson Vasa). In many surviving family noble names, such as Silfversparre (\"silver chevron\"; in modern spelling, Silver-) or Stiernhielm (\"star-helmet\"; in modernized spelling, stjärnhjälm), the spelling is obsolete, but since it applies to a name, remains unchanged. (Some names from relatively modern times also use archaic or otherwise aberrant spelling as a stylistic trait; e.g. -quist instead of standard -kvist \"twig\" or -grén instead of standard -gren, \"branch\".)",
"title": "Nordic countries"
},
{
"paragraph_id": 20,
"text": "Later on, people from the Scandinavian middle classes, particularly artisans and town dwellers, adopted names in a similar fashion to that of the nobility. Family names joining two elements from nature such as the Swedish Bergman (\"mountain man\"), Holmberg (\"island mountain\"), Lindgren (\"linden branch\"), Sandström (\"sand stream\") and Åkerlund (\"field meadow\") were quite frequent and remain common today. The same is true for similar Norwegian and Danish names. Another common practice was to adopt one's place of origin as a middle or surname.",
"title": "Nordic countries"
},
{
"paragraph_id": 21,
"text": "Even more important a driver of change was the need, for administrative purposes, to develop a system under which each individual had a \"stable\" name from birth to death. In the old days, people would be known by their name, patronymic and the farm they lived at. This last element would change if a person got a new job, bought a new farm, or otherwise came to live somewhere else. (This is part of the origin, in this part of the world, of the custom of women changing their names upon marriage. Originally it indicated, basically, a change of address, and from older times, there are numerous examples of men doing the same thing). The many patronymic names may derive from the fact that people who moved from the country to the cities, also gave up the name of the farm they came from. As a worker, you passed by your father's name, and this name passed on to the next generation as a family name. Einar Gerhardsen, the Norwegian prime minister, used a true patronym, as his father was named Gerhard Olsen (Gerhard, the son of Ola). Gerhardsen passed his own patronym on to his children as a family name. This has been common in many working-class families. The tradition of keeping the farm name as a family name got stronger during the first half of the 20th century in Norway.",
"title": "Nordic countries"
},
{
"paragraph_id": 22,
"text": "These names often indicated the place of residence of the family. For this reason, Denmark and Norway have a very high incidence of last names derived from those of farms, many signified by the suffixes like -bø, -rud, -heim/-um, -land or -set (these being examples from Norway). In Denmark, the most common suffix is -gaard — the modern spelling is gård in Danish and can be either gård or gard in Norwegian, but as in Sweden, archaic spelling persists in surnames. The most well-known example of this kind of surname is probably Kierkegaard (combined by the words \"kirke/kierke\" (= church) and \"gaard\" (= farm) meaning \"the farm located by the Church\". It is, however, a common misunderstanding that the name relates to its direct translation: churchyard/cemetery), but many others could be cited. It should also be noted that, since the names in question are derived from the original owners' domiciles, the possession of this kind of name is no longer an indicator of affinity with others who bear it.",
"title": "Nordic countries"
},
{
"paragraph_id": 23,
"text": "In many cases, names were taken from the nature around them. In Norway, for instance, there is an abundance of surnames based on coastal geography, with suffixes like -strand, -øy, -holm, -vik, -fjord or -nes. Like the names derived from farms, most of these family names reflected the family's place of residence at the time the family name was \"fixed\", however. A family name such as Swedish Dahlgren is derived from \"dahl\" meaning valley and \"gren\" meaning branch; or similarly Upvall meaning \"upper-valley\"; It depends on the country, language, and dialect.",
"title": "Nordic countries"
},
{
"paragraph_id": 24,
"text": "In Scandinavia family names often, but certainly not always, originate from a patronymic. Later on, people from the Scandinavian middle classes, particularly artisans and town dwellers, adopted surnames in a similar fashion to that of the gentry. Family names joining two elements from nature such as the Swedish Bergman (\"mountain man\"), Holmberg (\"island mountain\"), Lindgren (\"linden branch\"), Sandström (\"sand stream\") and Åkerlund (\"field grove\") were quite frequent and remain common today.",
"title": "Nordic countries"
},
{
"paragraph_id": 25,
"text": "Finland including Karelia and Estonia was the eastern part of The Kingdom of Sweden from its unification around 1100–1200 AD until the year 1809 when Finland was conquered by Russia. During the Russian revolution 1917, Finland proclaimed the republic Finland and Sweden and many European countries rapidly acknowledged the new nation Finland. Finland has mainly Finnish (increasing) and Swedish (decreasing) surnames and first names. There are two predominant surname traditions among the Finnish in Finland: the West Finnish and the East Finnish. The surname traditions of Swedish-speaking farmers, fishermen and craftsmen resembles the West Finnish tradition, while smaller populations of Sami and Romani people have traditions of their own. Finland was exposed to a very small immigration from Russia, so Russian names barely exists.",
"title": "Nordic countries"
},
{
"paragraph_id": 26,
"text": "Until the mid-20th century, Finland was a predominantly agrarian society, and the names of West Finns were based on their association with a particular area, farm, or homestead, e.g. Jaakko Jussila (\"Jaakko from the farm of Jussi\"). On the other hand, the East Finnish surname tradition dates back to at least the 13th century. There, the Savonians pursued slash-and-burn agriculture which necessitated moving several times during a person's lifetime. This in turn required the families to have surnames, which were in wide use among the common folk as early as the 13th century. By the mid-16th century, the East Finnish surnames had become hereditary. Typically, the oldest East Finnish surnames were formed from the first names of the patriarchs of the families, e.g. Ikävalko, Termonen, Pentikäinen. In the 16th, 17th, and 18th centuries, new names were most often formed by adding the name of the former or current place of living (e.g. Puumalainen < Puumala). In the East Finnish tradition, the women carried the family name of their fathers in female form (e.g. Puumalatar < Puumalainen). By the 19th century, this practice fell into disuse due to the influence of the West-European surname tradition.",
"title": "Nordic countries"
},
{
"paragraph_id": 27,
"text": "In Western Finland, agrarian names dominated, and the last name of the person was usually given according to the farm or holding they lived on. In 1921, surnames became compulsory for all Finns. At this point, the agrarian names were usually adopted as surnames. A typical feature of such names is the addition of prefixes Ala- (Sub-) or Ylä- (Up-), giving the location of the holding along a waterway in relation of the main holding. (e.g. Yli-Ojanperä, Ala-Verronen). The Swedish speaking farmers along the coast of Österbotten usually used two surnames – one which pointed out the father's name (e.g. Eriksson, Andersson, Johansson) and one which related to the farm or the land their family or bigger family owned or had some connection to (e.g. Holm, Fant, Westergård, Kloo). So a full name could be Johan Karlsson Kvist, for his daughter Elvira Johansdotter Kvist, and when she married a man with the Ahlskog farm, Elvira kept the first surname Johansdotter but changed the second surname to her husbands (e.g. Elvira Johansdotter Ahlskog). During the 20th century they started to drop the -son surname while they kept the second. So in Western Finland the Swedish speaking had names like Johan Varg, Karl Viskas, Sebastian Byskata and Elin Loo, while the Swedes in Sweden at the other side of the Baltic Sea kept surnames ending with -son (e.g. Johan Eriksson, Thor Andersson, Anna-Karin Johansson).",
"title": "Nordic countries"
},
{
"paragraph_id": 28,
"text": "A third tradition of surnames was introduced in south Finland by the Swedish-speaking upper and middle classes, which used typical German and Swedish surnames. By custom, all Finnish-speaking persons who were able to get a position of some status in urban or learned society, discarded their Finnish name, adopting a Swedish, German or (in the case of clergy) Latin surname. In the case of enlisted soldiers, the new name was given regardless of the wishes of the individual.",
"title": "Nordic countries"
},
{
"paragraph_id": 29,
"text": "In the late 19th and early 20th century, the overall modernization process, and especially the political movement of fennicization, caused a movement for adoption of Finnish surnames. At that time, many persons with a Swedish or otherwise foreign surname changed their family name to a Finnish one. The features of nature with endings -o/ö, -nen (Meriö < Meri \"sea\", Nieminen < Niemi \"point\") are typical of the names of this era, as well as more or less direct translations of Swedish names (Paasivirta < Hällström).",
"title": "Nordic countries"
},
{
"paragraph_id": 30,
"text": "In 21st-century Finland, the use of surnames follows the German model. Every person is legally obligated to have a first and last name. At most, three first names are allowed. The Finnish married couple may adopt the name of either spouse, or either spouse (or both spouses) may decide to use a double name. The parents may choose either surname or the double surname for their children, but all siblings must share the same surname. All persons have the right to change their surname once without any specific reason. A surname that is un-Finnish, contrary to the usages of the Swedish or Finnish languages, or is in use by any person residing in Finland cannot be accepted as the new name, unless valid family reasons or religious or national customs give a reason for waiving this requirement. However, persons may change their surname to any surname that has ever been used by their ancestors if they can prove such claim. Some immigrants have had difficulty naming their children, as they must choose from an approved list based on the family's household language.",
"title": "Nordic countries"
},
{
"paragraph_id": 31,
"text": "In the Finnish language, both the root of the surname and the first name can be modified by consonant gradation regularly when inflected to a case.",
"title": "Nordic countries"
},
{
"paragraph_id": 32,
"text": "In Iceland, most people have no family name; a person's last name is most commonly a patronymic, i.e. derived from the father's first name. For example, when a man called Karl has a daughter called Anna and a son called Magnús, their full names will typically be Anna Karlsdóttir (\"Karl's daughter\") and Magnús Karlsson (\"Karl's son\"). The name is not changed upon marriage.",
"title": "Nordic countries"
},
{
"paragraph_id": 33,
"text": "Slavic countries are noted for having masculine and feminine versions for many (but not all) of their names. In most countries the use of a feminine form is obligatory in official documents as well as in other communication, except for foreigners. In some countries only the male form figures in official use (Bosnia and Herzegovina, Croatia, Montenegro, Serbia, Slovenia), but in communication (speech, print) a feminine form is often used.",
"title": "Slavic world"
},
{
"paragraph_id": 34,
"text": "In Slovenia the last name of a female is the same as the male form in official use (identification documents, letters). In speech and descriptive writing (literature, newspapers) a female form of the last name is regularly used.",
"title": "Slavic world"
},
{
"paragraph_id": 35,
"text": "If the name has no suffix, it may or may not have a feminine version. Sometimes it has the ending changed (such as the addition of -a). In the Czech Republic and Slovakia, suffixless names, such as those of German origin, are feminized by adding -ová (for example, Schusterová).",
"title": "Slavic world"
},
{
"paragraph_id": 36,
"text": "Bulgarian names usually consist of three components – given name, patronymic (based on father's name), family name.",
"title": "Slavic world"
},
{
"paragraph_id": 37,
"text": "Given names have many variations, but the most common names have Christian/Greek (e.g. Maria, Ivan, Christo, Peter, Pavel), Slavic (Ognyan, Miroslav, Tihomir) or Protobulgarian (Krum, Asparukh) (pre-Christian) origin. Father's names normally consist of the father's first name and the \"-ov\" (male) or \"-ova\" (female) or \"-ovi\" (plural) suffix.",
"title": "Slavic world"
},
{
"paragraph_id": 38,
"text": "Family names usually also end with the \"-ov\", \"-ev\" (male) or \"-ova\", \"-eva\" (female) or \"-ovi\", \"-evi\" (plural) suffix.",
"title": "Slavic world"
},
{
"paragraph_id": 39,
"text": "In many cases (depending on the name root) the suffixes can be also \"-ski\" (male and plural) or \"-ska\" (female); \"-ovski\", \"-evski\" (male and plural) or \"-ovska\", \"-evska\" (female); \"-in\" (male) or \"-ina\" (female) or \"-ini\" (plural); etc.",
"title": "Slavic world"
},
{
"paragraph_id": 40,
"text": "The meaning of the suffixes is similar to the English word \"of\", expressing membership in/belonging to a family. For example, the family name Ivanova means a person belonging to the Ivanovi family.",
"title": "Slavic world"
},
{
"paragraph_id": 41,
"text": "A father's name Petrov means son of Peter.",
"title": "Slavic world"
},
{
"paragraph_id": 42,
"text": "Regarding the different meaning of the suffixes, \"-ov\", \"-ev\"/\"-ova\", \"-eva\" are used for expressing relationship to the father and \"-in\"/\"-ina\" for relationship to the mother (often for orphans whose father is dead).",
"title": "Slavic world"
},
{
"paragraph_id": 43,
"text": "Names of Czech people consist of given name (křestní jméno) and surname (příjmení). Usage of the second or middle name is not common. Feminine names are usually derived from masculine ones by a suffix -ová (Nováková) or -á for names being originally adjectives (Veselá), sometimes with a little change of original name's ending (Sedláčková from Sedláček or Svobodová from Svoboda). Women usually change their family names when they get married. The family names are usually nouns (Svoboda, Král, Růžička, Dvořák, Beneš), adjectives (Novotný, Černý, Veselý) or past participles of verbs (Pospíšil). There are also a couple of names with more complicated origin which are actually complete sentences (Skočdopole, Hrejsemnou or Vítámvás). The most common Czech family name is Novák / Nováková.",
"title": "Slavic world"
},
{
"paragraph_id": 44,
"text": "In addition, many Czechs and some Slovaks have German surnames due to mixing between the ethnic groups over the past thousand years. Deriving women's names from German and other foreign names is often problematic since foreign names do not suit Czech language rules, although most commonly -ová is simply added (Schmidtová; umlauts are often, but not always, dropped, e.g. Müllerová), or the German name is respelled with Czech spelling (Šmitová). Hungarian names, which can be found fairly commonly among Slovaks, can also be either left unchanged (Hungarian Nagy, fem. Nagyová) or respelled according to Czech/Slovak orthography (masc. Naď, fem. Naďová).",
"title": "Slavic world"
},
{
"paragraph_id": 45,
"text": "In Poland and most of the former Polish–Lithuanian Commonwealth, surnames first appeared during the late Middle Ages. They initially denoted the differences between various people living in the same town or village and bearing the same name. The conventions were similar to those of English surnames, using occupations, patronymic descent, geographic origins, or personal characteristics. Thus, early surnames indicating occupation include Karczmarz (\"innkeeper\"), Kowal (\"blacksmith\"), \"Złotnik\" (\"gold smith\") and Bednarczyk (\"young cooper\"), while those indicating patronymic descent include Szczepaniak (\"Son of Szczepan), Józefowicz (\"Son of Józef), and Kaźmirkiewicz (\"Son of Kazimierz\"). Similarly, early surnames like Mazur (\"the one from Mazury\") indicated geographic origin, while ones like Nowak (\"the new one\"), Biały (\"the pale one\"), and Wielgus (\"the big one\") indicated personal characteristics.",
"title": "Slavic world"
},
{
"paragraph_id": 46,
"text": "In the early 16th century, (the Polish Renaissance), toponymic names became common, especially among the nobility. Initially, the surnames were in a form of \"[first name] z (\"de\", \"of\") [location]\". Later, most surnames were changed to adjective forms, e.g. Jakub Wiślicki (\"James of Wiślica\") and Zbigniew Oleśnicki (\"Zbigniew of Oleśnica\"), with masculine suffixes -ski, -cki, -dzki and -icz or respective feminine suffixes -ska, -cka, -dzka and -icz on the east of Polish–Lithuanian Commonwealth. Names formed this way are adjectives grammatically, and therefore change their form depending on sex; for example, Jan Kowalski and Maria Kowalska collectively use the plural Kowalscy.",
"title": "Slavic world"
},
{
"paragraph_id": 47,
"text": "Names with masculine suffixes -ski, -cki, and -dzki, and corresponding feminine suffixes -ska, -cka, and -dzka became associated with noble origin. Many people from lower classes successively changed their surnames to fit this pattern. This produced many Kowalskis, Bednarskis, Kaczmarskis and so on.",
"title": "Slavic world"
},
{
"paragraph_id": 48,
"text": "A separate class of surnames derive from the names of noble clans. These are used either as separate names or the first part of a double-barrelled name. Thus, persons named Jan Nieczuja and Krzysztof Nieczuja-Machocki might be related. Similarly, after World War I and World War II, many members of Polish underground organizations adopted their war-time pseudonyms as the first part of their surnames. Edward Rydz thus became Marshal of Poland Edward Śmigły-Rydz and Zdzisław Jeziorański became Jan Nowak-Jeziorański.",
"title": "Slavic world"
},
{
"paragraph_id": 49,
"text": "A full Russian name consists of personal (given) name, patronymic, and family name (surname).",
"title": "Slavic world"
},
{
"paragraph_id": 50,
"text": "Most Russian family names originated from patronymics, that is, father's name usually formed by adding the adjective suffix -ov(a) or -ev(a). Contemporary patronymics, however, have a substantive suffix -ich for masculine and the adjective suffix -na for feminine.",
"title": "Slavic world"
},
{
"paragraph_id": 51,
"text": "For example, the proverbial triad of most common Russian surnames follows:",
"title": "Slavic world"
},
{
"paragraph_id": 52,
"text": "Feminine forms of these surnames have the ending -a:",
"title": "Slavic world"
},
{
"paragraph_id": 53,
"text": "Such a pattern of name formation is not unique to Russia or even to the Eastern and Southern Slavs in general; quite common are also names derived from professions, places of origin, and personal characteristics, with various suffixes (e.g. -in(a) and -sky (-skaya)).",
"title": "Slavic world"
},
{
"paragraph_id": 54,
"text": "Professions:",
"title": "Slavic world"
},
{
"paragraph_id": 55,
"text": "Places of origin:",
"title": "Slavic world"
},
{
"paragraph_id": 56,
"text": "Personal characteristics:",
"title": "Slavic world"
},
{
"paragraph_id": 57,
"text": "A considerable number of \"artificial\" names exists, for example, those given to seminary graduates; such names were based on Great Feasts of the Orthodox Church or Christian virtues.",
"title": "Slavic world"
},
{
"paragraph_id": 58,
"text": "Great Orthodox Feasts:",
"title": "Slavic world"
},
{
"paragraph_id": 59,
"text": "Christian virtues:",
"title": "Slavic world"
},
{
"paragraph_id": 60,
"text": "Many freed serfs were given surnames after those of their former owners. For example, a serf of the Demidov family might be named Demidovsky, which translates roughly as \"belonging to Demidov\" or \"one of Demidov's bunch\".",
"title": "Slavic world"
},
{
"paragraph_id": 61,
"text": "Grammatically, Russian family names follow the same rules as other nouns or adjectives (names ending with -oy, -aya are grammatically adjectives), with exceptions: some names do not change in different cases and have the same form in both genders (for example, Sedykh, Lata).",
"title": "Slavic world"
},
{
"paragraph_id": 62,
"text": "Ukrainian and Belarusian names evolved from the same Old East Slavic and Ruthenian language (western Rus') origins. Ukrainian and Belarusian names share many characteristics with family names from other Slavic cultures. Most prominent are the shared root words and suffixes. For example, the root koval (blacksmith) compares to the Polish kowal, and the root bab (woman) is shared with Polish, Slovakian, and Czech. The suffix -vych (son of) corresponds to the South Slavic -vic, the Russian -vich, and the Polish -wicz, while -sky, -ski, and -ska are shared with both Polish and Russian, and -ak with Polish.",
"title": "Slavic world"
},
{
"paragraph_id": 63,
"text": "However some suffixes are more uniquely characteristic to Ukrainian and Belarusian names, especially: -chuk (Western Ukraine), -enko (all other Ukraine) (both son of), -ko (little [masculine]), -ka (little [feminine]), -shyn, and -uk. See, for example, Mihalko, Ukrainian Presidents Leonid Kravchuk, and Viktor Yushchenko, Belarusian President Alexander Lukashenko, or former Soviet diplomat Andrei Gromyko. Such Ukrainian and Belarusian names can also be found in Russia, Poland, or even other Slavic countries (e.g. Croatian general Zvonimir Červenko), but are due to importation by Ukrainian, Belarusian, or Rusyn ancestors.",
"title": "Slavic world"
},
{
"paragraph_id": 64,
"text": "Surnames of some South Slavic groups such as Serbs, Croats, Montenegrins, and Bosniaks traditionally end with the suffixes \"-ić\" and \"-vić\" (often transliterated to English and other western languages as \"ic\", \"ich\", \"vic\" or \"vich\". The v is added in the case of a name to which \"-ić\" is appended would otherwise end with a vowel, to avoid double vowels with the \"i\" in \"-ić\".) These are a diminutive indicating descent i.e. \"son of\". In some cases the family name was derived from a profession (e.g. blacksmith – \"Kovač\" → \"Kovačević\").",
"title": "Slavic world"
},
{
"paragraph_id": 65,
"text": "An analogous ending is also common in Slovenia. As the Slovenian language does not have the softer consonant \"ć\", in Slovene words and names only \"č\" is used. So that people from the former Yugoslavia need not change their names, in official documents \"ć\" is also allowed (as well as \"Đ / đ\"). Thus, one may have two surname variants, e.g.: Božič, Tomšič (Slovenian origin or assimilated) and Božić, Tomšić (roots from the Serbo-Croat language continuum area). Slovene names ending in -ič do not necessarily have a patrimonial origin.",
"title": "Slavic world"
},
{
"paragraph_id": 66,
"text": "In general family names in all of these countries follow this pattern with some family names being typically Serbian, some typically Croat and yet others being common throughout the whole linguistic region.",
"title": "Slavic world"
},
{
"paragraph_id": 67,
"text": "Children usually inherit their fathers' family name. In an older naming convention which was common in Serbia up until the mid-19th century, a person's name would consist of three distinct parts: the person's given name, the patronymic derived from the father's personal name, and the family name, as seen, for example, in the name of the language reformer Vuk Stefanović Karadžić.",
"title": "Slavic world"
},
{
"paragraph_id": 68,
"text": "Official family names do not have distinct male or female forms, except in North Macedonia, though a somewhat archaic unofficial form of adding suffixes to family names to form female form persists, with -eva, implying \"daughter of\" or \"female descendant of\" or -ka, implying \"wife of\" or \"married to\". In Slovenia the feminine form of a surname (\"-eva\" or \"-ova\") is regularly used in non-official communication (speech, print), but not for official IDs or other legal documents.",
"title": "Slavic world"
},
{
"paragraph_id": 69,
"text": "Bosniak Muslim names follow the same formation pattern but are usually derived from proper names of Islamic origin, often combining archaic Islamic or feudal Turkish titles i.e. Mulaomerović, Šabanović, Hadžihafizbegović, etc. Also related to Islamic influence is the prefix Hadži- found in some family names. Regardless of religion, this prefix was derived from the honorary title which a distinguished ancestor earned by making a pilgrimage to either Christian or Islamic holy places; Hadžibegić, being a Bosniak Muslim example, and Hadžiantić an Orthodox Christian one.",
"title": "Slavic world"
},
{
"paragraph_id": 70,
"text": "In Croatia where tribal affiliations persisted longer, Lika, Herzegovina etc., originally a family name, came to signify practically all people living in one area, clan land or holding of the nobles. The Šubić family owned land around the Zrin River in the Central Croatian region of Banovina. The surname became Šubić Zrinski, the most famous being Nikola Šubić Zrinski.",
"title": "Slavic world"
},
{
"paragraph_id": 71,
"text": "In Montenegro and Herzegovina, family names came to signify all people living within one clan or bratstvo. As there exists a strong tradition of inheriting personal names from grandparents to grandchildren, an additional patronymic usually using suffix -ov had to be introduced to make distinctions between two persons bearing the same personal name and the same family name and living within same area. A noted example is Marko Miljanov Popović, i.e. Marko, son of Miljan, from Popović family.",
"title": "Slavic world"
},
{
"paragraph_id": 72,
"text": "Due to discriminatory laws in the Austro-Hungarian Empire, some Serb families of Vojvodina discarded the suffix -ić in an attempt to mask their ethnicity and avoid heavy taxation.",
"title": "Slavic world"
},
{
"paragraph_id": 73,
"text": "The prefix Pop- in Serbian names indicates descent from a priest, for example Gordana Pop Lazić, i.e. descendant of Pop Laza.",
"title": "Slavic world"
},
{
"paragraph_id": 74,
"text": "Some Serbian family names include prefixes of Turkish origin, such as Uzun- meaning tall, or Kara-, black. Such names were derived from nicknames of family ancestors. A famous example is Karađorđević, descendants of Đorđe Petrović, known as Karađorđe or Black Đorđe.",
"title": "Slavic world"
},
{
"paragraph_id": 75,
"text": "Among the Bulgarians, another South Slavic people, the typical surname suffix is \"-ov\" (Ivanov, Kovachev), although other popular suffixes also exist.",
"title": "Slavic world"
},
{
"paragraph_id": 76,
"text": "In North Macedonia, the most popular suffix today is \"-ski\".",
"title": "Slavic world"
},
{
"paragraph_id": 77,
"text": "Slovenes have a great variety of surnames, most of them differentiated according to region. Surnames ending in -ič are by far less frequent than among Croats and Serbs. There are typically Slovenian surnames ending in -ič, such as Blažič, Stanič, Marušič. Many Slovenian surnames, especially in the Slovenian Littoral, end in -čič (Gregorčič, Kocijančič, Miklavčič, etc.), which is uncommon for other South Slavic peoples (except the neighboring Croats, e.g. Kovačić, Jelačić, Kranjčić, etc.). On the other hand, surname endings in -ski and -ov are rare, they can denote a noble origin (especially for the -ski, if it completes a toponym) or a foreign (mostly Czech) origin. One of the most typical Slovene surname endings is -nik (Rupnik, Pučnik, Plečnik, Pogačnik, Podobnik) and other used surname endings are -lin (Pavlin, Mehlin, Ahlin, Ferlin), -ar (Mlakar, Ravnikar, Smrekar Tisnikar) and -lj (Rugelj, Pucelj, Bagatelj, Bricelj). Many Slovenian surnames are linked to Medieval rural settlement patterns. Surnames like Novak (literally, \"the new one\") or Hribar (from hrib, hill) were given to the peasants settled in newly established farms, usually in high mountains. Peasant families were also named according to the owner of the land which they cultivated: thus, the surname Kralj (King) or Cesar (Emperor) was given to those working on royal estates, Škof (Bishop) or Vidmar to those working on ecclesiastical lands, etc. Many Slovenian surnames are named after animals (Medved – bear, Volk, Vovk or Vouk – wolf, Golob – pigeon, Strnad – yellowhammer, Orel – eagle, Lisjak – fox, or Zajec – rabbit, etc.) or plants Pšenica – wheat, Slak – bindweed, Hrast – oak, etc. Many are named after neighbouring peoples: Horvat, Hrovat, or Hrovatin (Croat), Furlan (Friulian), Nemec (German), Lah (Italian), Vogrin, Vogrič or Vogrinčič (Hungarian), Vošnjak (Bosnian), Čeh (Czech), Turk (Turk), or different Slovene regions: Kranjc, Kranjec or Krajnc (from Carniola), Kraševec (from the Karst Plateau), Korošec (from Carinthia), Kočevar or Hočevar (from the Gottschee county).",
"title": "Slavic world"
},
{
"paragraph_id": 78,
"text": "In Slovenia last name of a female is the same as the male form in official use (identification documents, letters). In speech and descriptive writing (literature, newspapers) a female form of the last name is regularly used. Examples: Novak (m.) & Novakova (f.), Kralj (m.) & Kraljeva (f.), Mali (m.) & Malijeva (f.). Usually surenames on -ova are used together with the title/gender: gospa Novakova (Mrs. Novakova), gospa Kraljeva (Mrs. Kraljeva), gospodična Malijeva (Miss Malijeva, if unmarried), etc. or with the name. So we have Maja Novak on the ID card and Novakova Maja (extremely rarely Maja Novakova) in communication; Tjaša Mali and Malijeva Tjaša (rarely Tjaša Malijeva); respectively. Diminutive forms of last names for females are also available: Novakovka, Kraljevka. As for pronunciation, in Slovenian there is some leeway regarding accentuation. Depending on the region or local usage, you may have either Nóvak & Nóvakova or, more frequently, Novák & Novákova. Accent marks are normally not used.",
"title": "Slavic world"
},
{
"paragraph_id": 79,
"text": "The given name is always followed by the father's first name, then the father's family surname. Some surnames have a prefix of ibn- (ould- in Mauritania) meaning \"son of\". The surnames follow similar rules defining a relation to a clan, family, place etc. Some Arab countries have differences due to historic rule by the Ottoman Empire or due to being a different minority.",
"title": "Arabic-speaking countries"
},
{
"paragraph_id": 80,
"text": "A large number of Arabic last names start with \"Al-\" which means \"The\"",
"title": "Arabic-speaking countries"
},
{
"paragraph_id": 81,
"text": "Arab States of the Persian Gulf: Names mainly consist of the person's name followed by the father's first name connected by the word \"ibn\" or \"bin\" (meaning \"son of\"). The last name either refers to the name of the tribe the person belongs to, or to the region, city, or town he/she originates from. In exceptional cases, members of the royal families or ancient tribes mainly, the title (usually H.M./H.E., Prince, or Sheikh) is included in the beginning as a prefix, and the first name can be followed by four names, his father, his grandfather, and great – grandfather, as a representation of the purity of blood and to show the pride one has for his ancestry.",
"title": "Arabic-speaking countries"
},
{
"paragraph_id": 82,
"text": "In Arabic-speaking Levantine countries (Jordan, Lebanon, Palestine, Syria) it's common to have family names associated with a certain profession or craft, such as \"Al-Haddad\"/\"Haddad\" which means \"Blacksmith\" or \"Al-Najjar\"/\"Najjar\" which means \"Carpenter\".",
"title": "Arabic-speaking countries"
},
{
"paragraph_id": 83,
"text": "In India, surnames are placed as last names or before first names, which often denote: village of origin, caste, clan, office of authority their ancestors held, or trades of their ancestors. The use of surnames is a relatively new convention, introduced during British colonisation. Typically, parts of northern India follow English-speaking Western naming conventions by having a given name followed by a surname. This is not necessarily the case in southern India, where people may adopt a surname out of necessity when migrating or travelling abroad.",
"title": "South Asia"
},
{
"paragraph_id": 84,
"text": "The largest variety of surnames is found in the states of Maharashtra and Goa, which numbers more than the rest of India together. Here surnames are placed last, the order being: the given name, followed by the father's name, followed by the family name. The majority of surnames are derived from the place where the family lived, with the 'kar' (Marathi and Konkani) suffix, for example, Mumbaikar, Punekar, Aurangabadkar, Tendulkar, Parrikar, Mangeshkar, Mahendrakar. Another common variety found in Maharashtra and Goa are the ones ending in 'e'. These are usually more archaic than the 'Kar's and usually denote medieval clans or professions like Rane, Salunkhe, Gupte, Bhonsle, Ranadive, Rahane, Hazare, Apte, Satpute, Shinde, Sathe, Londhe, Salve, Kale, Gore, Godbole, etc.",
"title": "South Asia"
},
{
"paragraph_id": 85,
"text": "In Andhra Pradesh and Telangana, surnames usually denote family names. It is easy to track family history and the caste they belonged to using a surname.",
"title": "South Asia"
},
{
"paragraph_id": 86,
"text": "In Odisha and West Bengal, surnames denote the caste they belong. There are also several local surnames like Das, Patnaik, Mohanty, Jena etc.",
"title": "South Asia"
},
{
"paragraph_id": 87,
"text": "In Kerala, surnames denote the caste they belong. There are also several local surnames like Nair, Menon , Panikkar etc.",
"title": "South Asia"
},
{
"paragraph_id": 88,
"text": "It is a common in Kerala, Tamil Nadu, and some other parts of South India that the spouse adopts her husband's first name instead of his family or surname name after marriage.",
"title": "South Asia"
},
{
"paragraph_id": 89,
"text": "In Rajasthan, the community name and sometimes the gotra or clan name are used as surnames. Usage of community name as surname include: Charan, Jat, Meena, Rajput, etc. Sometimes, the faith name (for example: Jain) can also be used as a surname.",
"title": "South Asia"
},
{
"paragraph_id": 90,
"text": "India is a country with numerous distinct cultural and linguistic groups. Thus, Indian surnames, where formalized, fall into seven general types.",
"title": "South Asia"
},
{
"paragraph_id": 91,
"text": "Surnames are based on:",
"title": "South Asia"
},
{
"paragraph_id": 92,
"text": "The convention is to write the first name followed by middle names and surname. It is common to use the father's first name as the middle name or last name even though it is not universal. In some Indian states like Maharashtra, official documents list the family name first, followed by a comma and the given names.",
"title": "South Asia"
},
{
"paragraph_id": 93,
"text": "In modern times, in urban areas at least, this practice is not universal and some wives either suffix their husband's surname or do not alter their surnames at all. In some rural areas, particularly in North India, wives may also take a new first name after their nuptials. Children inherit their surnames from their father.",
"title": "South Asia"
},
{
"paragraph_id": 94,
"text": "Jains generally use Jain, Shah, Firodia, Singhal or Gupta as their last names. Sikhs generally use the words Singh (\"lion\") and Kaur (\"princess\") as surnames added to the otherwise unisex first names of men and women, respectively. It is also common to use a different surname after Singh in which case Singh or Kaur are used as middle names (Montek Singh Ahluwalia, Surinder Kaur Badal). The tenth Guru of Sikhism ordered (Hukamnama) that any man who considered himself a Sikh must use Singh in his name and any woman who considered herself a Sikh must use Kaur in her name. Other middle names or honorifics that are sometimes used as surnames include Kumar, Dev, Lal, and Chand.",
"title": "South Asia"
},
{
"paragraph_id": 95,
"text": "The modern-day spellings of names originated when families translated their surnames to English, with no standardization across the country. Variations are regional, based on how the name was translated from the local language to English in the 18th, 19th and 20th centuries during British rule. Therefore, it is understood in the local traditions that Baranwal and Barnwal represent the same name derived from Uttar Pradesh and Punjab respectively. Similarly, Tagore derives from Bengal while Thakur is from Hindi-speaking areas. The officially recorded spellings tended to become the standard for that family. In the modern times, some states have attempted standardization, particularly where the surnames were corrupted because of the early British insistence of shortening them for convenience. Thus Bandopadhyay became Banerji, Mukhopadhay became Mukherji, Chattopadhyay became Chatterji, etc. This coupled with various other spelling variations created several surnames based on the original surnames. The West Bengal Government now insists on re-converting all the variations to their original form when the child is enrolled in school.",
"title": "South Asia"
},
{
"paragraph_id": 96,
"text": "Some parts of Sri Lanka, Thailand, Nepal, Myanmar, and Indonesia have similar patronymic customs to those of India.",
"title": "South Asia"
},
{
"paragraph_id": 97,
"text": "Nepali surnames are divided into three origins; Indo-Aryan languages, Tibeto-Burman languages and indigenous origins. Surnames of Khas community contains toponyms as Ghimire, Dahal, Pokharel, Sapkota from respective villages, occupational names as (Adhikari, Bhandari, Karki, Thapa). Many Khas surnames includes suffix as -wal, -al as in Katwal, Silwal, Khanal, Khulal, Rijal. Kshatriya titles such as Bista, Kunwar, Rana, Rawal, Shah, Thakuri, Chand, were taken as surnames by various Kshetri and Thakuris. Khatri Kshetris share surnames with mainstream Pahari Bahuns. Other popular Chhetri surnames include Basnyat, Bogati, Budhathoki, Khadka, Mahat, Raut. Similarly, Brahmin surnames such as Acharya, Joshi, Pandit, Sharma, Upadhyay were taken by Pahari Bahuns. Bahuns bear distinct surnames as Kattel, and share surnames with mainstream Bahuns. Other Bahun surnames include Aryal, Bhattarai, Banskota, Chaulagain, Devkota, Dhakal, Gyawali, Koirala, Mainali, Pandey, Panta, Paudel, Regmi, Subedi, Lamsal, and Dhungel. Khas-Dalits surnames include Kami, Bishwakarma or B.K., Damai, Mijar, Pariyar, Sarki. Newar groups of multiethnic background bears both Indo-Aryan surnames (like Shrestha, Pradhan) and indigenous surnames like Maharjan, Dangol. Magars bear surnames derived from Khas peoples such as Baral, Budhathoki, Lamichhane, Thapa and indigenous origins as Gharti, Pun, Pulami. Other Himalayan Mongoloid castes bears Tibeto-Burmese surnames like Gurung, Tamang, Thakali, Sherpa. Various Kiranti ethnic group contains many Indo-Aryan surnames of Khas origin which were awarded by the government of Khas peoples. These surnames are Rai, Subba depending upon job and position hold by them. Terai community consists both Indo-Aryan and Indigenous origin surnames. Terai Brahmins bears surnames as Jha. Nepalese Muslims bears Islamic surnames such as Ali, Ansari, Begum, Khan, Mohammad, Pathan. Other common Terai surnames are Kayastha.",
"title": "South Asia"
},
{
"paragraph_id": 98,
"text": "Pakistani surnames are basically divided in three categories: Arab naming convention, tribal or caste names and ancestral names.",
"title": "South Asia"
},
{
"paragraph_id": 99,
"text": "Family names indicating Arab ancestry, e.g. Shaikh, Siddiqui, Abbasi, Syed, Zaidi, Khawaja, Naqvi, Farooqi, Osmani, Alavi, Hassani, and Husseini.",
"title": "South Asia"
},
{
"paragraph_id": 100,
"text": "People claiming Afghan ancestry include those with family names like Durrani, Gardezi, Suri, Yousafzai, Afridi, Mullagori, Mohmand, Khattak, Wazir, Mehsud, Niazi.",
"title": "South Asia"
},
{
"paragraph_id": 101,
"text": "Family names indicating Turkic heritage include Mughal, Baig or Beg, Pasha, Barlas, and Seljuki. Family names indicating Turkish/Kurd ancestry, Dogar.",
"title": "South Asia"
},
{
"paragraph_id": 102,
"text": "People claiming Indic ancestry include those with family names Barelwi, Lakhnavi, Delhvi, Godharvi, Bilgrami, and Rajput. A large number of Muslim Rajputs have retained their surnames such as Chauhan, Rathore, Parmar, and Janjua.",
"title": "South Asia"
},
{
"paragraph_id": 103,
"text": "People claiming Iranian ancestry include those with family names Agha, Bukhari, Firdausi, Ghazali, Gilani, Hamadani, Isfahani, Kashani, Kermani, Khorasani, Farooqui, Mir, Mirza, Montazeri, Nishapuri, Noorani, Kayani, Qizilbash, Saadi, Sabzvari, Shirazi, Sistani, Suhrawardi, Yazdani, Zahedi, and Zand.",
"title": "South Asia"
},
{
"paragraph_id": 104,
"text": "Tribal names include Abro Afaqi, Afridi, Cheema, Khogyani (Khakwani), Amini, Ansari, Ashrafkhel, Awan, Bajwa, Baloch, Barakzai, Baranzai, Bhatti, Bhutto, Ranjha, Bijarani, Bizenjo, Brohi, Khetran, Bugti, Butt, Farooqui, Gabol, Ghaznavi, Ghilzai, Gichki, Gujjar, Jamali, Jamote, Janjua, Jatoi, Jutt Joyo, Junejo, Karmazkhel, Kayani, Khar, Khattak, Khuhro, Lakhani, Leghari, Lodhi, Magsi, Malik, Mandokhel, Mayo, Marwat, Mengal, Mughal, Palijo, Paracha, Panhwar, Phul, Popalzai, Qureshi & qusmani, Rabbani, Raisani, Rakhshani, Sahi, Swati, Soomro, Sulaimankhel, Talpur, Talwar, Thebo, Yousafzai, and Zamani.",
"title": "South Asia"
},
{
"paragraph_id": 105,
"text": "In Pakistan, the official paperwork format regarding personal identity is as follows:",
"title": "South Asia"
},
{
"paragraph_id": 106,
"text": "So and so, son of so and so, of such and such tribe or clan and religion and resident of such and such place. For example, Amir Khan s/o Fakeer Khan, tribe Mughal Kayani or Chauhan Rajput, Follower of religion Islam, resident of Village Anywhere, Tehsil Anywhere, District.",
"title": "South Asia"
},
{
"paragraph_id": 107,
"text": "In modern Chinese, Japanese, Korean, Taiwanese, and Vietnamese, the family name is placed before the given names, although this order may not be observed in translation. Generally speaking, Chinese, Korean, and Vietnamese names do not alter their order in English (Mao Zedong, Kim Jong-il, Ho Chi Minh) and Japanese names do (Kenzaburō Ōe). However, numerous exceptions exist, particularly for people born in English-speaking countries such as Yo-Yo Ma. This is sometimes systematized: in all Olympic events, the athletes of the People's Republic of China list their names in the Chinese ordering, while Chinese athletes representing other countries, such as the United States, use the Western ordering. (In Vietnam, the system is further complicated by the cultural tradition of addressing people by their given name, usually with an honorific. For example, Phan Văn Khải is properly addressed as Mr. Khải, even though Phan is his family name.)",
"title": "Sinosphere"
},
{
"paragraph_id": 108,
"text": "Chinese family names have many types of origins, some claiming dates as early as the legendary Yellow Emperor (2nd millennium BC):",
"title": "Sinosphere"
},
{
"paragraph_id": 109,
"text": "In history, some changed their surnames due to a naming taboo (from Zhuang 莊 to Yan 嚴 during the era of Liu Zhuang 劉莊) or when the imperial surname was awarded by the Emperor (the imperial surname Li was often bestowed on senior officers during the Tang dynasty).",
"title": "Sinosphere"
},
{
"paragraph_id": 110,
"text": "In modern times, some Chinese adopt an English name in addition to their native given names: e.g., 李柱銘 (Li Zhùmíng) adopted the English name Martin Lee. Particularly in Hong Kong and Singapore, the convention is to write both names together: Martin Lee Chu-ming. Owing to the confusion this can cause, a further convention is sometimes observed of capitalizing the surname: Martin LEE Chu-ming. Sometimes, however, the Chinese given name is forced into the Western system as a middle name (\"Martin Chu-ming Lee\"); less often, the English given name is forced into the Chinese system (\"Lee Chu-ming Martin\").",
"title": "Sinosphere"
},
{
"paragraph_id": 111,
"text": "In Japan, the civil law forces a common surname for every married couple, unless in a case of international marriage. In most cases, women surrender their surnames upon marriage, and use the surnames of their husbands. However, a convention that a man uses his wife's family name if the wife is an only child is sometimes observed. A similar tradition called ru zhui (入贅) is common among Chinese when the bride's family is wealthy and has no son but wants the heir to pass on their assets under the same family name. The Chinese character zhui (贅) carries a money radical (貝), which implies that this tradition was originally based on financial reasons. All their offspring carry the mother's family name. If the groom is the first born with an obligation to carry his own ancestor's name, a compromise may be reached in that the first male child carries the mother's family name while subsequent offspring carry the father's family name. The tradition is still in use in many Chinese communities outside mainland China, but largely disused in China because of social changes from communism. Due to the economic reform in the past decade, accumulation and inheritance of personal wealth made a comeback to the Chinese society. It is unknown if this financially motivated tradition would also come back to mainland China.",
"title": "Sinosphere"
},
{
"paragraph_id": 112,
"text": "In Chinese, Korean, Vietnamese and Singaporean cultures, women keep their own surnames, while the family as a whole is referred to by the surnames of the husbands.",
"title": "Sinosphere"
},
{
"paragraph_id": 113,
"text": "In Hong Kong, some women would be known to the public with the surnames of their husbands preceding their own surnames, such as Anson Chan Fang On Sang. Anson is an English given name, On Sang is the given name in Chinese, Chan is the surname of Anson's husband, and Fang is her own surname. A name change on legal documents is not necessary. In Hong Kong's English publications, her family names would have been presented in small cap letters to resolve ambiguity, e.g. Anson CHAN FANG On Sang in full or simply Anson Chan in short form.",
"title": "Sinosphere"
},
{
"paragraph_id": 114,
"text": "In Macau, some people have their names in Portuguese spelt with some Portuguese style, such as Carlos do Rosario Tchiang.",
"title": "Sinosphere"
},
{
"paragraph_id": 115,
"text": "Chinese women in Canada, especially Hongkongers in Toronto, would preserve their maiden names before the surnames of their husbands when written in English, for instance, Rosa Chan Leung, where Chan is the maiden name, and Leung is the surname of the husband.",
"title": "Sinosphere"
},
{
"paragraph_id": 116,
"text": "In Chinese, Korean, and Vietnamese, surnames are predominantly monosyllabic (written with one character), though a small number of common disyllabic (or written with two characters) surnames exists (e.g. the Chinese name Ouyang, the Korean name Jegal and the Vietnamese name Phan-Tran).",
"title": "Sinosphere"
},
{
"paragraph_id": 117,
"text": "Many Chinese, Korean, and Vietnamese surnames are of the same origin, but simply pronounced differently and even transliterated differently overseas in Western nations. For example, the common Chinese surnames Chen, Chan, Chin, Cheng and Tan, the Korean surname Jin, as well as the Vietnamese surname Trần are often all the same exact character 陳. The common Korean surname Kim is also the common Chinese surname Jin, and written 金. The common Mandarin surnames Lin or Lim (林) is also one and the same as the common Cantonese or Vietnamese surname Lam and Korean family name Lim (written/pronounced as Im in South Korea). There are people with the surname of Hayashi (林) in Japan too. The common Chinese surname 李, translated to English as Lee, is, in Chinese, the same character but transliterated as Li according to pinyin convention. Lee is also a common surname of Koreans, and the character is identical.",
"title": "Sinosphere"
},
{
"paragraph_id": 118,
"text": "40% of all Vietnamese have the surname Nguyen. This may be because when a new dynasty took power in Vietnam it was custom to adopt that dynasty's surname. The last dynasty in Vietnam was the Nguyen dynasty, so as a result, many people have this surname.",
"title": "Sinosphere"
},
{
"paragraph_id": 119,
"text": "In Burundi and Rwanda, most, if not all surnames have God in it, for example, Hakizimana (meaning God cures), Nshimirimana (I thank God) or Havyarimana/Habyarimana (God gives birth). But not all surnames end with the suffix -imana. Irakoze is one of these (technically meaning Thank God, though it is hard to translate it correctly in English or probably any other language). Surnames are often different among immediate family members, as parents frequently choose unique surnames for each child, and women keep their maiden names when married. Surnames are placed before given names and frequently written in capital letters, e.g. HAKIZIMANA Jacques.",
"title": "Africa"
},
{
"paragraph_id": 120,
"text": "In several Northeast Bantu languages such as Kamba, Taita and Kikuyu in Kenya the word \"wa\" (meaning \"of\") is inserted before the surname, for instance, Mugo wa Kibiru (Kikuyu) and Mekatilili wa Menza (Mijikenda).",
"title": "Africa"
},
{
"paragraph_id": 121,
"text": "The patronymic custom in most of the Horn of Africa gives children the father's first name as their surname. The family then gives the child its first name. Middle names are unknown. So, for example, a person's name might be Bereket Mekonen . In this case, Bereket is the first name and Mekonen is the surname, and also the first name of the father.",
"title": "Africa"
},
{
"paragraph_id": 122,
"text": "The paternal grandfather's name is often used if there is a requirement to identify a person further, for example, in school registration. Also, different cultures and tribes use the father's or grandfather's given name as the family's name. For example, some Oromos use Warra Ali to mean families of Ali, where Ali, is either the householder, a father or grandfather.",
"title": "Africa"
},
{
"paragraph_id": 123,
"text": "In Ethiopia, the customs surrounding the bestowal and use of family names is as varied and complex as the cultures to be found there. There are so many cultures, nations or tribes, that currently there can be no one formula whereby to demonstrate a clear pattern of Ethiopian family names. In general, however, Ethiopians use their father's name as a surname in most instances where identification is necessary, sometimes employing both father's and grandfather's names together where exigency dictates.",
"title": "Africa"
},
{
"paragraph_id": 124,
"text": "Many people in Eritrea have Italian surnames, but all of these are owned by Eritreans of Italian descent.",
"title": "Africa"
},
{
"paragraph_id": 125,
"text": "Libya's names and surnames have a strong Islamic/Arab nature, with some Turkish influence from Ottoman Empire rule of nearly 400 years. Amazigh, Touareg and other minorities also have their own name/surname traditions. Due to its location as a trade route and the different cultures that had their impact on Libya throughout history, one can find names that could have originated in neighboring countries, including clan names from the Arabian Peninsula, and Turkish names derived from military rank or status (Basha, Agha).",
"title": "Africa"
},
{
"paragraph_id": 126,
"text": "A full Albanian name consists of a given name (Albanian: emër), patronymic (Albanian: atësi) and family name (Albanian: mbiemër), for example Agron Mark Gjoni. The patronymic is simply the given name of the individual's father, with no suffix added. The family name is typically a noun in the definite form or at the very least ends with a vowel or -j (an approximant close to -i). Many traditional last names end with -aj (previously -anj), which is more prevalent in certain regions of Albania and Kosovo. For clarification, the \"family name\" is typically the father's father's name (grandfather).",
"title": "Other countries"
},
{
"paragraph_id": 127,
"text": "Proper names in Albanian are fully declinable like any noun (e.g. Marinelda, genitive case i/e Marineldës \"of Marinelda\").",
"title": "Other countries"
},
{
"paragraph_id": 128,
"text": "Armenian surnames almost always have the ending (Armenian: յան) transliterated into English as -yan or -ian (spelled -ean (եան) in Western Armenian and pre-Soviet Eastern Armenian, of Ancient Armenian or Iranian origin, presumably meaning \"son of\"), though names with that ending can also be found among Persians and a few other nationalities. Armenian surnames can derive from a geographic location, profession, noble rank, personal characteristic or personal name of an ancestor. Armenians in the diaspora sometimes adapt their surnames to help assimilation. In Russia, many have changed -yan to -ov (or -ova for women). In Turkey, many have changed the ending to -oğlu (also meaning \"son of\"). In English and French-speaking countries, many have shortened their name by removing the ending (for example Charles Aznavour). In ancient Armenia, many noble names ended with the locative -t'si (example, Khorenatsi) or -uni (Bagratuni). Several modern Armenian names also have a Turkish suffix which appears before -ian/-yan: -lian denotes a placename; -djian denotes a profession. Some Western Armenian names have a particle Der, while their Eastern counterparts have Ter. This particle indicates an ancestor who was a priest (Armenian priests can choose to marry or remain celibate, but married priests cannot become a bishop). Thus someone named Der Bedrosian (Western) or Ter Petrosian (Eastern) is a descendant of an Armenian priest. The convention is still in use today: the children of a priest named Hagop Sarkisian would be called Der Sarkisian. Other examples of Armenian surnames: Adonts, Sakunts, Vardanyants, Rshtuni.",
"title": "Other countries"
},
{
"paragraph_id": 129,
"text": "It was common for Azerbaijani names to have 3 components: given name, father's name and family name. However in recent years it is becoming increasingly popular to only have 2 components: first name and surname.",
"title": "Other countries"
},
{
"paragraph_id": 130,
"text": "While under Soviet rule, it was mandatory for Azerbaijanis to register their names, but most people did not have surnames. This was normally circumvented by taking the individual's father's name and adding a Russian suffixes such as \"-yev\"/\"-ov\" for men and \"-yeva/-ova\" for women (meaning \"born of\"). For example from \"Ali\" we get \"Aliyev\" and \"Aliyeva\" and from \"Husein\" we get \"Huseinov\" and \"Huseinova\". However as the Soviet era came to an end, many Azerbaijanis dropped these endings in an attempt to derussify. Some chose to replace these with traditional suffixes like \"-zade\" (Persian for \"born of\") and \"-li/-lu\" (Turkish for \"with\" or \"belonging to\"), \"-oglu/-oghlu\" (Turkish for \"son of\"). Some chose to drop the suffixes entirely.",
"title": "Other countries"
},
{
"paragraph_id": 131,
"text": "Most eastern Georgian surnames end with the suffix of \"-shvili\", (e.g. Kartveli'shvili) Georgian for \"child\" or \"offspring\". Western Georgian surnames most commonly have the suffix \"-dze\", (e.g. Laba'dze) Georgian for \"son\". Megrelian surnames usually end in \"-ia\", \"-ua\" or \"-ava\". Other location-specific endings exist: In Svaneti \"-iani\", meaning \"belonging to\", or \"hailing from\", is common. In the eastern Georgian highlands common endings are \"uri\" and \"uli\". Some noble family names end in \"eli\", meaning \"of (someplace)\". In Georgian, the surname is not normally used as the polite form of address; instead, the given name is used together with a title. For instance, Nikoloz Kartvelishvili is politely addressed as bat'ono Nikoloz \"My Lord. Nikoloz\".",
"title": "Other countries"
},
{
"paragraph_id": 132,
"text": "Greek surnames are most commonly patronymics. Occupation, characteristic, or ethnic background and location/origin-based surnames names also occur; they are sometimes supplemented by nicknames.",
"title": "Other countries"
},
{
"paragraph_id": 133,
"text": "Commonly, Greek male surnames end in -s, which is the common ending for Greek masculine proper nouns in the nominative case. Exceptionally, some end in -ou, indicating the genitive case of this proper noun for patronymic reasons.",
"title": "Other countries"
},
{
"paragraph_id": 134,
"text": "Although surnames are static today, dynamic and changing patronym usage survives in middle names in Greece where the genitive of the father's first name is commonly the middle name.",
"title": "Other countries"
},
{
"paragraph_id": 135,
"text": "Because of their codification in the Modern Greek state, surnames have Katharevousa forms even though Katharevousa is no longer the official standard. Thus, the Ancient Greek name Eleutherios forms the Modern Greek proper name Lefteris, and former vernacular practice (prefixing the surname to the proper name) was to call John Eleutherios Leftero-giannis.",
"title": "Other countries"
},
{
"paragraph_id": 136,
"text": "Modern practice is to call the same person Giannis Eleftheriou: the proper name is vernacular (and not Ioannis), but the surname is an archaic genitive. However, children are almost always baptised with the archaic form of the name so in official matters, the child will be referred to as Ioannis Eleftheriou and not Giannis Eleftheriou.",
"title": "Other countries"
},
{
"paragraph_id": 137,
"text": "Female surnames are most often in the Katharevousa genitive case of a male name. This is an innovation of the Modern Greek state; Byzantine practice was to form a feminine counterpart of the male surname (e.g. masculine Palaiologos, Byzantine feminine Palaiologina, Modern feminine Palaiologou).",
"title": "Other countries"
},
{
"paragraph_id": 138,
"text": "In the past, women would change their surname when married to that of their husband (again in the genitive case) signifying the transfer of \"dependence\" from the father to the husband. In earlier Modern Greek society, women were named with -aina as a feminine suffix on the husband's first name: \"Giorgaina\", \"Mrs George\", \"Wife of George\". Nowadays, a woman's legal surname does not change upon marriage, though she can use the husband's surname socially. Children usually receive the paternal surname, though in rare cases, if the bride and groom have agreed before the marriage, the children can receive the maternal surname.",
"title": "Other countries"
},
{
"paragraph_id": 139,
"text": "Some surnames are prefixed with Papa-, indicating ancestry from a priest, e.g. Papageorgiou, the \"son of a priest named George\". Others, like Archi- and Mastro- signify \"boss\" and \"tradesman\" respectively.",
"title": "Other countries"
},
{
"paragraph_id": 140,
"text": "Prefixes such as Konto-, Makro-, and Chondro- describe body characteristics, such as \"short\", \"tall/long\" and \"fat\". Gero- and Palaio- signify \"old\" or \"wise\".",
"title": "Other countries"
},
{
"paragraph_id": 141,
"text": "Other prefixes include Hadji- (Χαντζή- or Χαντζι-) which was an honorific deriving from the Arabic Hadj or pilgrimage, and indicate that the person had made a pilgrimage (in the case of Christians, to Jerusalem) and Kara- which is attributed to the Turkish word for \"black\" deriving from the Ottoman Empire era. The Turkish suffix -oglou (derived from a patronym, -oğlu in Turkish) can also be found. Although they are of course more common among Greece's Muslim minority, they still can be found among the Christian majority, often Greeks or Karamanlides who were pressured to leave Turkey after the Turkish Republic was founded (since Turkish surnames only date to the founding of the Republic, when Atatürk made them compulsory).",
"title": "Other countries"
},
{
"paragraph_id": 142,
"text": "Arvanitic surnames also exist; an example is Tzanavaras or Tzavaras, from the Arvanitic word çanavar or çavar meaning \"brave\" (pallikari in Greek).",
"title": "Other countries"
},
{
"paragraph_id": 143,
"text": "Most Greek patronymic suffixes are diminutives, which vary by region. The most common Hellenic patronymic suffixes are:",
"title": "Other countries"
},
{
"paragraph_id": 144,
"text": "Others, less common, are:",
"title": "Other countries"
},
{
"paragraph_id": 145,
"text": "Either the surname or the given name may come first in different contexts; in newspapers and in informal uses, the order is given name + surname, while in official documents and forums (tax forms, registrations, military service, school forms), the surname is often listed or said first.",
"title": "Other countries"
},
{
"paragraph_id": 146,
"text": "In Hungarian, like Asian languages but unlike most other European ones (see French and German above for exceptions), the family name is placed before the given names. This usage does not apply to non-Hungarian names, for example \"Tony Blair\" will remain \"Tony Blair\" when written in Hungarian texts.",
"title": "Other countries"
},
{
"paragraph_id": 147,
"text": "Names of Hungarian individuals, however, appear in Western order in English writing.",
"title": "Other countries"
},
{
"paragraph_id": 148,
"text": "Indonesians comprise more than 1,300 ethnic groups. Not all of these groups traditionally have surnames, and in the populous Java surnames are not common at all – regardless of which one of the six officially recognized religions the name carrier profess. For instance, a Christian Javanese woman named Agnes Mega Rosalin has three forenames and no surname. \"Agnes\" is her Christian name, but \"Mega\" can be the first name she uses and the name which she is addressed with. \"Rosalin\" is only a middle name. Nonetheless, Indonesians are well aware of the custom of family names, which is known as marga or fam, and such names have become a specific kind of identifier. People can tell what a person's heritage is by his or her family or clan name.",
"title": "Other countries"
},
{
"paragraph_id": 149,
"text": "Javanese people are the majority in Indonesia, and most do not have any surname. There are some individuals, especially the old generation, who have only one name, such as \"Suharto\" and \"Sukarno\". These are not only common with the Javanese but also with other Indonesian ethnic groups who do not have the tradition of surnames. If, however, they are Muslims, they might opt to follow Arabic naming customs, but Indonesian Muslims do not automatically follow Arabic name traditions.",
"title": "Other countries"
},
{
"paragraph_id": 150,
"text": "In conjunction with migration to Europe or America, Indonesians without surnames often adopt a surname based on some family name or middle name. The forms for visa application many Western countries use, has a square for writing the last name which cannot be left unfilled by the applicant.",
"title": "Other countries"
},
{
"paragraph_id": 151,
"text": "Most Chinese Indonesians substituted their Chinese surnames with Indonesian-sounding surnames due to political pressure from 1965 to 1998 under Suharto's regime.",
"title": "Other countries"
},
{
"paragraph_id": 152,
"text": "Persian last names may be:",
"title": "Other countries"
},
{
"paragraph_id": 153,
"text": "Suffixes include: -an (plural suffix), -i (\"of\"), -zad/-zadeh (\"born of\"), -pur (\"son of\"), -nejad (\"from the race of\"), -nia (\"descendant of\"), -mand (\"having or pertaining to\"), -vand (\"succeeding\"), -far (\"holder of\"), -doost (\"-phile\"), -khah (\"seeking of\"), -manesh (\"having the manner of\"), -ian/-yan, -gar and -chi (\"whose vocation pertains\").",
"title": "Other countries"
},
{
"paragraph_id": 154,
"text": "An example is names of geographical locations plus \"-i\": Irani (\"Iranian\"), Gilani (\"of Gilan province\"), Tabrizi (\"of the city of Tabriz\").",
"title": "Other countries"
},
{
"paragraph_id": 155,
"text": "Another example is last names that indicate relation to religious groups such as Zoroastrian (e.g. Goshtaspi, Namiranian, Azargoshasp), Jewish (e.g. Yaghubian [Jacobean], Hayyem [Life], Shaul [Saul]) or Muslim (e.g. Alavi, Islamnia, Montazeri)",
"title": "Other countries"
},
{
"paragraph_id": 156,
"text": "Last names are arbitrary; their holder need not to have any relation with their meaning.",
"title": "Other countries"
},
{
"paragraph_id": 157,
"text": "Traditionally in Iran, the wife does not take her husband's surname, although children take the surname of their father. Individual reactions notwithstanding, it is possible to call a married woman by her husband's surname. This is facilitated by the fact that English words \"Mrs.\", \"Miss\", \"Woman\", \"Lady\" and \"Wife (of)\" in a polite context are all translated into \"خانم\" (Khaanom). Context, however, is important: \"خانم گلدوست\" (Khaanom Goldust) may, for instance, refer to the daughter of Mr. Goldust instead of his wife. When most of Iranian surnames are used with a name, the name will be ended with a suffix _E or _ie (of) such as Hasan_e roshan (Hasan is name and roshan is surname) that means Hasan of Roshan or Mosa_ie saiidi (Muses of saiidi). The _e is not for surname and it is difficult to say it is a part of surname.",
"title": "Other countries"
},
{
"paragraph_id": 158,
"text": "Italy has around 350,000 surnames. Most of them derive from the following sources: patronym or ilk (e.g. Francesco di Marco, \"Francis, son of Mark\" or Eduardo de Filippo, \"Edward belonging to the family of Philip\"), occupation (e.g. Enzo Ferrari, \"Heinz (of the) Blacksmiths\"), personal characteristic (e.g. nicknames or pet names like Dario Forte, \"Darius the Strong\"), geographic origin (e.g. Elisabetta Romano, \"Elisabeth from Rome\") and objects (e.g. Carlo Sacchi, \"Charles Bags\"). The two most common Italian family names, Russo and Rossi, mean the same thing, \"Red\", possibly referring to the hair color.",
"title": "Other countries"
},
{
"paragraph_id": 159,
"text": "Both Western and Eastern orders are used for full names: the given name usually comes first, but the family name may come first in administrative settings; lists are usually indexed according to the last name.",
"title": "Other countries"
},
{
"paragraph_id": 160,
"text": "Since 1975, women have kept their own surname when married, but until recently (2000) they could have added the surname of the husband according to the civil code, although it was a very seldom-used practice. In recent years, the husband's surname cannot be used in any official situation. In some unofficial situations, sometimes both surnames are written (the proper first), sometimes separated by in (e.g. Giuseppina Mauri in Crivelli) or, in case of widows, ved. (vedova).",
"title": "Other countries"
},
{
"paragraph_id": 161,
"text": "Latvian male surnames usually end in -s, -š or -is whereas the female versions of the same names end in -a or -e or s in both unmarried and married women.",
"title": "Other countries"
},
{
"paragraph_id": 162,
"text": "Before the emancipation from serfdom (1817 in Courland, 1819 in Vidzeme, 1861 in Latgale) only noblemen, free craftsmen or people living in towns had surnames. Therefore, the oldest Latvian surnames originate from German or Low German, reflecting the dominance of German as an official language in Latvia till the 19th century. Examples: Meijers/Meijere (German: Meier, farm administrator; akin to Mayor), Millers/Millere (German: Müller, miller), Šmits/Šmite (German: Schmidt, smith), Šulcs/Šulce, Šulca (German: Schultz or Schulz, constable), Ulmanis (German: Ullmann, a person from Ulm), Godmanis (a God-man), Pētersons (son of Peter). Some Latvian surnames, mainly from Latgale are of Polish or Belarusian origin by changing the final -ski/-cki to -skis/-ckis, -czyk to -čiks or -vich/-wicz to -vičs, such as Sokolovkis/Sokolovska, Baldunčiks/Baldunčika or Ratkevičs/Ratkeviča.",
"title": "Other countries"
},
{
"paragraph_id": 163,
"text": "Most Latvian peasants received their surnames in 1826 (in Vidzeme), in 1835 (in Courland), and in 1866 (in Latgale). Diminutives were the most common form of family names. Examples: Kalniņš/Kalniņa (small hill), Bērziņš/Bērziņa (small birch).",
"title": "Other countries"
},
{
"paragraph_id": 164,
"text": "Nowadays many Latvians of Slavic descent have surnames of Russian, Belarusian, or Ukrainian origin, for example Volkovs/Volkova or Antoņenko.",
"title": "Other countries"
},
{
"paragraph_id": 165,
"text": "Lithuanian names follow the Baltic distinction between male and female suffixes of names, although the details are different. Male surnames usually end in -a, -as, -aitis, -ys, -ius, or -us, whereas the female versions change these suffixes to -aitė, -ytė, -iūtė, and -utė respectively (if unmarried), -ienė (if married), or -ė (not indicating the marital status). Some Lithuanians have names of Polish or another Slavic origin, which are made to conform to Lithuanian by changing the final -ski to -skas, such as Sadauskas, with the female version bein -skaitė (if unmarried), -skienė (if married), or -skė (not indicating the marital status).",
"title": "Other countries"
},
{
"paragraph_id": 166,
"text": "Different cultures have their impact on the demographics of the Maltese islands, and this is evident in the various surnames Maltese citizens bear nowadays. There are very few Maltese surnames per se: the few that originate from Maltese places of origin include Chircop (Kirkop), Lia (Lija), Balzan (Balzan), Valletta (Valletta), and Sciberras (Xebb ir-Ras Hill, on which Valletta was built). The village of Munxar, Gozo is characterised by the majority of its population having one of two surnames, either Curmi or de Brincat. In Gozo, the surnames Bajada and Farrugia are also common.",
"title": "Other countries"
},
{
"paragraph_id": 167,
"text": "Sicilian and Italian surnames are common due to the close vicinity to Malta. Sicilians were the first to colonise the Maltese islands. Common examples include Azzopardi, Bonello, Cauchi, Farrugia, Gauci, Rizzo, Schembri, Tabone, Vassallo, Vella.",
"title": "Other countries"
},
{
"paragraph_id": 168,
"text": "Common examples include Depuis, Montfort, Monsenuier, Tafel.",
"title": "Other countries"
},
{
"paragraph_id": 169,
"text": "English surnames exist for a number of reasons, but mainly due to migration as well as Malta forming a part of the British Empire in the 19th century and most of the 20th. Common examples include Bone, Harding, Atkins, Mattocks, Smith, Jones, Woods, Turner.",
"title": "Other countries"
},
{
"paragraph_id": 170,
"text": "Arabic surnames occur in part due to the early presence of the Arabs in Malta. Common examples include Sammut, Camilleri, Zammit, and Xuereb.",
"title": "Other countries"
},
{
"paragraph_id": 171,
"text": "Common surnames of Spanish origin include Abela, Galdes, Herrera, and Guzman.",
"title": "Other countries"
},
{
"paragraph_id": 172,
"text": "Surnames from foreign countries from the Middle Ages include German, such as von Brockdorff, Hyzler, and Schranz.",
"title": "Other countries"
},
{
"paragraph_id": 173,
"text": "Many of the earliest Maltese surnames are Sicilian Greek, e.g. Cilia, Calleia, Brincat, Cauchi. Much less common are recent surnames from Greece; examples include Dacoutros, and Trakosopoulos",
"title": "Other countries"
},
{
"paragraph_id": 174,
"text": "The original Jewish community of Malta and Gozo has left no trace of their presence on the islands since they were expelled in January 1493.",
"title": "Other countries"
},
{
"paragraph_id": 175,
"text": "In line with the practice in other Christian, European states, women generally assume their husband's surname after legal marriage, and this is passed on to any children the couple may bear. Some women opt to retain their old name, for professional/personal reasons, or combine their surname with that of their husband.",
"title": "Other countries"
},
{
"paragraph_id": 176,
"text": "Mongolians do not use surnames in the way that most Westerners, Chinese or Japanese do. Since the socialist period, patronymics – then called ovog, now called etsgiin ner – are used instead of a surname. If the father's name is unknown, a matronymic is used. The patro- or matronymic is written before the given name. Therefore, if a man with given name Tsakhia has a son, and gives the son the name Elbegdorj, the son's full name is Tsakhia Elbegdorj. Very frequently, the patronymic is given in genitive case, i.e. Tsakhiagiin Elbegdorj. However, the patronymic is rather insignificant in everyday use and usually just given as an initial – Ts. Elbegdorj. People are normally just referred to and addressed by their given name (Elbegdorj guai – Mr. Elbegdorj), and if two people share a common given name, they are usually just kept apart by their initials, not by the full patronymic.",
"title": "Other countries"
},
{
"paragraph_id": 177,
"text": "Since 2000, Mongolians have been officially using clan names – ovog, the same word that had been used for the patronymics before – on their IDs. Many people chose the names of the ancient clans and tribes such Borjigin, Besud, Jalair, etc. Also many extended families chose the names of the native places of their ancestors. Some chose the names of their most ancient known ancestor. Some just decided to pass their own given names (or modifications of their given names) to their descendants as clan names. Some chose other attributes of their lives as surnames. Gürragchaa chose Sansar (Cosmos). Clan names precede the patronymics and given names, e.g. Besud Tsakhiagiin Elbegdorj. These clan names have a significance and are included in Mongolian passports.",
"title": "Other countries"
},
{
"paragraph_id": 178,
"text": "People from Myanmar or Burmese, have no family names. This, to some, is the only known Asian people having no family names at all. Some of those from Myanmar or Burma, who are familiar with European or American cultures, began to put to their younger generations with a family name – adopted from the notable ancestors. For example, Ms. Aung San Suu Kyi is the daughter of the late Father of Independence General Aung San; Hayma Ne Win, is the daughter of the famous actor Kawleikgyin Ne Win etc.",
"title": "Other countries"
},
{
"paragraph_id": 179,
"text": "Until the middle of the 19th century, there was no standardization of surnames in the Philippines. There were native Filipinos without surnames, others whose surnames deliberately did not match that of their families, as well as those who took certain surnames simply because they had a certain prestige, usually ones related to the Roman Catholic religion, such as de los Santos (\"of the saints\") and de la Cruz (\"of the cross\"), or of local nobility such as of rajahs or datus.",
"title": "Other countries"
},
{
"paragraph_id": 180,
"text": "On 21 November 1849, the Spanish Governor-General of the Philippines, Narciso Clavería y Zaldúa, decreed an end to these arbitrary practices, the systematic distribution of surnames to Filipinos without prior surnames and the universal implementation of the Spanish naming system. This produced the Catálogo alfabético de apellidos (\"Alphabetical Catalogue of Surnames\"), which listed permitted surnames with origins in Spanish, Filipino, and Hispanized Chinese words, names, and numbers. Thus, many Spanish-sounding Filipino surnames are not surnames common to the rest of the Spanish-speaking world. The book contained many words coming from Spanish and the Philippine languages such as Tagalog, as well as many Basque and Catalan surnames.",
"title": "Other countries"
},
{
"paragraph_id": 181,
"text": "The colonial authorities implemented this decree because many Christianized Filipinos assumed religious names. There soon were too many people surnamed de los Santos (\"of the saints\"), de la Cruz (\"of the cross\"), del Rosario (\"of the Rosary\") etc., which made it difficult for the Spanish colonists to control the Filipino people, and most importantly, to collect taxes. These extremely common names were also banned by the decree unless the name has been used by a family for at least four generations. This Spanish naming custom also countered the native custom before the Spanish period, wherein siblings assumed different surnames. Clavería's decree was enforced to different degrees in different parts of the colony.",
"title": "Other countries"
},
{
"paragraph_id": 182,
"text": "Because of this implementation of Spanish naming customs, of the arrangement \"given name + paternal surname + maternal surname\", in the Philippines, a Spanish surname does not necessarily denote Spanish ancestry.",
"title": "Other countries"
},
{
"paragraph_id": 183,
"text": "In practice, the application of this decree varied from municipality to municipality. Most municipalities received surnames starting with only one initial letter; in others, this was not well enforced. For example, the majority of residents of the island of Banton in the province of Romblon have surnames starting with F such as Fabicon, Fallarme, Fadrilan, and Ferran. Other examples are most cities and towns in Albay, Catanduanes, Ilocos Sur and Marinduque, where the majority of their residents have surnames beginning with a particular letter.",
"title": "Other countries"
},
{
"paragraph_id": 184,
"text": "Thus, although perhaps a majority of Filipinos have Spanish surnames, such a surname does not indicate Spanish ancestry. In addition, most Filipinos currently do not use Spanish accented letters in their Spanish derived names. The lack of accents in Filipino Spanish has been attributed to the lack of accents on the predominantly American typewriters after the United States gained control of the Philippines.",
"title": "Other countries"
},
{
"paragraph_id": 185,
"text": "The vast majority of Filipinos follow a naming system in the American order (i.e. given name + middle name + surname), which is the reverse of the Spanish naming order (i.e. given name + paternal surname + maternal surname). Children take the mother's surname as their middle name, followed by their father's as their surname; for example, a son of Juan de la Cruz and his wife María Agbayani may be David Agbayani de la Cruz. Women usually take the surnames of their husband upon marriage, and consequently lose their maiden middle names; so upon her marriage to David de la Cruz, the full name of Laura Yuchengco Macaraeg would become Laura Macaraeg de la Cruz. Their maiden last names automatically become their middle names upon marriage.",
"title": "Other countries"
},
{
"paragraph_id": 186,
"text": "There are other sources for surnames. Many Filipinos also have Chinese-derived surnames, which in some cases could indicate Chinese ancestry. Many Hispanized Chinese numerals and other Hispanized Chinese words, however, were also among the surnames in the Catálogo alfabético de apellidos. For those whose surname may indicate Chinese ancestry, analysis of the surname may help to pinpoint when those ancestors arrived in the Philippines. A Hispanized Chinese surname such as Cojuangco suggests an 18th-century arrival while a Chinese surname such as Lim suggests a relatively recent immigration. Some Chinese surnames such as Tiu-Laurel are composed of the immigrant Chinese ancestor's surname as well as the name of that ancestor's godparent on receiving Christian baptism.",
"title": "Other countries"
},
{
"paragraph_id": 187,
"text": "In the predominantly Muslim areas of the southern Philippines, adoption of surnames was influenced by Islamic religious terms. As a result, surnames among Filipino Muslims are largely Arabic-based, and include such surnames as Hassan and Haradji.",
"title": "Other countries"
},
{
"paragraph_id": 188,
"text": "There are also Filipinos who, to this day, have no surnames at all, particularly if they come from indigenous cultural communities.",
"title": "Other countries"
},
{
"paragraph_id": 189,
"text": "Prior to the establishment of the Philippines as a US territory during the earlier part of the 20th century, Filipinos usually followed Iberian naming customs. However, upon the promulgation of the Family Code of 1987, Filipinos formalized adopting the American system of using their surnames.",
"title": "Other countries"
},
{
"paragraph_id": 190,
"text": "A common Filipino name will consist of the given name (mostly 2 given names are given), the initial letter of the mother's maiden name and finally the father's surname (i.e. Lucy Anne C. de Guzman). Also, women are allowed to retain their maiden name or use both her and her husband's surname as a double-barreled surname, separated by a dash. This is common in feminist circles or when the woman holds a prominent office (e.g. Gloria Macapagal Arroyo, Miriam Defensor Santiago). In more traditional circles, especially those who belong to the prominent families in the provinces, the custom of the woman being addressed as \"Mrs. Husband's Full Name\" is still common.",
"title": "Other countries"
},
{
"paragraph_id": 191,
"text": "For widows, who chose to marry again, two norms are in existence. For those who were widowed before the Family Code, the full name of the woman remains while the surname of the deceased husband is attached. That is, Maria Andres, who was widowed by Ignacio Dimaculangan will have the name Maria Andres viuda de Dimaculangan. If she chooses to marry again, this name will still continue to exist while the surname of the new husband is attached. Thus, if Maria marries Rene de los Santos, her new name will be Maria Andres viuda de Dimaculangan de los Santos.",
"title": "Other countries"
},
{
"paragraph_id": 192,
"text": "However, a new norm is also in existence. The woman may choose to use her husband's surname to be one of her middle names. Thus, Maria Andres viuda de Dimaculangan de los Santos may also be called Maria A.D. de los Santos.",
"title": "Other countries"
},
{
"paragraph_id": 193,
"text": "Children will however automatically inherit their father's surname if they are considered legitimate. If the child is born out of wedlock, the mother will automatically pass her surname to the child, unless the father gives a written acknowledgment of paternity. The father may also choose to give the child both his parents' surnames if he wishes (that is Gustavo Paredes, whose parents are Eulogio Paredes and Juliana Angeles, while having Maria Solis as a wife, may name his child Kevin S. Angeles-Paredes.",
"title": "Other countries"
},
{
"paragraph_id": 194,
"text": "In some Tagalog regions, the norm of giving patronyms, or in some cases matronyms, is also accepted. These names are of course not official, since family names in the Philippines are inherited. It is not uncommon to refer to someone as Juan anak ni Pablo (John, the son of Paul) or Juan apo ni Teofilo (John, the grandson of Theophilus).",
"title": "Other countries"
},
{
"paragraph_id": 195,
"text": "In Romania, like in most of Europe, it is customary for a child to take his father's family name, and a wife to take her husband's last name. However, this is not compulsory – spouses and parents are allowed to choose other options too, as the law is flexible (see Art. 282, Art. 449 Art. 450. of the Civil Code of Romania).",
"title": "Other countries"
},
{
"paragraph_id": 196,
"text": "Until the 19th century, the names were primarily of the form \"[given name] [father's name] [grandfather's name]\". The few exceptions are usually famous people or the nobility (boyars). The name reform introduced around 1850 had the names changed to a western style, most likely imported from France, consisting of a given name followed by a family name.",
"title": "Other countries"
},
{
"paragraph_id": 197,
"text": "As such, the name is called prenume (French prénom), while the family name is called nume or, when otherwise ambiguous, nume de familie (\"family name\"). Although not mandatory, middle names are common.",
"title": "Other countries"
},
{
"paragraph_id": 198,
"text": "Historically, when the family name reform was introduced in the mid-19th century, the default was to use a patronym, or a matronym when the father was dead or unknown. A common convention was to append the suffix -escu to the father's name, e.g. Anghelescu (\"Anghel's child\") and Petrescu (\"Petre's child\"). (The -escu seems to come from Latin -iscum, thus being cognate with Italian -esco and French -esque.) Another common convention was to append the suffix -eanu to the name of the place of origin, e.g. Munteanu (\"from the mountains\") and Moldoveanu (\"from Moldova\"). These uniquely Romanian suffixes strongly identify ancestral nationality.",
"title": "Other countries"
},
{
"paragraph_id": 199,
"text": "There are also descriptive family names derived from occupations, nicknames, and events, e.g. Botezatu (\"baptised\"), Barbu (\"bushy bearded\"), Prodan (\"foster\"), Bălan (\"blond\"), Fieraru (\"smith\"), Croitoru (\"tailor\"), \"Păcuraru\" (\"shepherd\").",
"title": "Other countries"
},
{
"paragraph_id": 200,
"text": "Romanian family names remain the same regardless of the sex of the person.",
"title": "Other countries"
},
{
"paragraph_id": 201,
"text": "Although given names appear before family names in most Romanian contexts, official documents invert the order, ostensibly for filing purposes. Correspondingly, Romanians occasionally introduce themselves with their family names first, e.g. a student signing a test paper in school.",
"title": "Other countries"
},
{
"paragraph_id": 202,
"text": "Romanians bearing names of non-Romanian origin often adopt Romanianised versions of their ancestral surnames. For example, Jurovschi for Polish Żurowski, or Popovici for Serbian Popović (\"son of a priest\"), which preserves the original pronunciation of the surname through transliteration. In some cases, these changes were mandated by the state.",
"title": "Other countries"
},
{
"paragraph_id": 203,
"text": "In Turkey, following the Surname Law imposed in 1934 in the context of Atatürk's Reforms, every family living in Turkey was given a family name. The surname was generally selected by the elderly people of the family and could be any Turkish word (or a permitted word for families belonging to official minority groups).",
"title": "Other countries"
},
{
"paragraph_id": 204,
"text": "Some of the most common family names in Turkey are Yılmaz ('undaunted'), Doğan ('falcon'), Şahin ('hawk'), Yıldırım ('thunderbolt'), Şimşek ('lightning'), Öztürk ('purely Turkish').",
"title": "Other countries"
},
{
"paragraph_id": 205,
"text": "Patronymic surnames do not necessarily refer to ancestry, or in most cases cannot be traced back historically. The most usual Turkish patronymic suffix is –oğlu; –ov(a), –yev(a) and –zade also occur in the surnames of Azeri or other Turkic descendants.",
"title": "Other countries"
},
{
"paragraph_id": 206,
"text": "Official minorities like Armenians, Greeks, and Jews have surnames in their own mother languages. The Armenian families living in Turkey usually have Armenian surnames and generally have the suffix –yan, –ian, or, using Turkish spelling, -can. Greek descendants usually have Greek surnames which might have Greek suffixes like –ou, –aki(s), –poulos/poulou, –idis/idou, –iadis/iadou or prefixes like papa–. The Sephardic Jews who were expelled from Spain and settled in Turkey in 1492 have both Jewish/Hebrew surnames, and Spanish surnames, usually indicating their native regions, cities or villages back in Spain, like De Leon or Toledano.",
"title": "Other countries"
}
]
| Surname conventions and laws vary around the world. This article gives an overview of surnames around the world. | 2001-09-15T16:59:52Z | 2023-12-29T10:05:32Z | [
"Template:Snd",
"Template:Cite book",
"Template:Nowrap",
"Template:TOC limit",
"Template:Reflist",
"Template:Short description",
"Template:ISBN",
"Template:Cite journal",
"Template:Authority control",
"Template:Webarchive",
"Template:More citations needed section",
"Template:Lang-sq",
"Template:Lang-hy",
"Template:In lang",
"Template:Cite conference",
"Template:Wiktionary",
"Template:Unreferenced section",
"Template:Personal names",
"Template:See also",
"Template:Dubious",
"Template:Original research",
"Template:Cite web",
"Template:Further",
"Template:Update inline",
"Template:Details",
"Template:Citation needed",
"Template:Sc",
"Template:Main article"
]
| https://en.wikipedia.org/wiki/Surnames_by_country |
10,815 | Franc | The franc is any of various units of currency. One franc is typically divided into 100 centimes. The name is said to derive from the Latin inscription francorum rex (King of the Franks) used on early French coins and until the 18th century, or from the French franc, meaning "frank" (and "free" in certain contexts, such as coup franc, "free kick").
The countries that use francs today include Switzerland, Liechtenstein, and most of Francophone Africa. The Swiss franc is a major world currency today due to the prominence of Swiss financial institutions.
Before the introduction of the euro in 1999, francs were also used in France, Belgium and Luxembourg, while Andorra and Monaco accepted the French franc as legal tender (Monégasque franc). The franc was also used in French colonies including Algeria and Cambodia. The franc is sometimes Italianised or Hispanicised as the franco, for instance in Luccan franco.
The franc was originally a French gold coin of 3.87 g minted in 1360 on the occasion of the release of King John II ("the Good"), held by the English since his capture at the Battle of Poitiers four years earlier. It was equivalent to one livre tournois (Tours pound).
The French franc was originally a gold coin issued in France from 1360 until 1380, then a silver coin issued between 1575 and 1641. The franc finally became the national currency from 1795 until 1999 (franc coins and notes were legal tender until 2002). Though abolished as a legal coin by King Louis XIII in 1641 in favor of the gold louis and silver écu, the term franc continued to be used in common parlance for the livre tournois. The franc was also minted for many of the former French colonies, such as Morocco, Algeria, French West Africa, and others. Today, after independence, many of these countries continue to use the franc as their standard denomination.
The value of the French franc was locked to the euro at 1 euro = 6.55957 FRF on 31 December 1998. After the introduction of euro notes and coins, the franc ceased to be legal tender after 28 February 2002, although franc notes and coins remained exchangeable at banks until 19 February 2012.
Fourteen African countries use the franc CFA (in west Africa, Communauté financière africaine; in equatorial Africa, Coopération financière en Afrique centrale), originally (1945) worth 1.7 French francs and then from 1948, 2 francs (from 1960: 0.02 new franc) but after January 1994 worth only 0.01 French franc. Therefore, from January 1999, 1 CFA franc is equivalent to €0.00152449. On 22 December 2019, it was announced that the CFA franc would be replaced in 2020 by an independent currency to be called Eco.
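As a rough illustration of the arithmetic behind these fixed rates, the following minimal Python sketch (the constant and function names are illustrative only) derives the CFA franc's euro value from the two figures quoted in this article: 1 CFA franc = 0.01 French franc and 1 euro = 6.55957 French francs.

    FRF_PER_EUR = 6.55957   # 1 euro = 6.55957 French francs (rate fixed 31 December 1998)
    FRF_PER_CFA = 0.01      # 1 CFA franc = 0.01 French franc (since January 1994)

    def cfa_to_eur(amount_cfa: float) -> float:
        """Convert CFA francs to euros via the fixed French franc rate."""
        return amount_cfa * FRF_PER_CFA / FRF_PER_EUR

    print(round(cfa_to_eur(1), 8))  # 0.00152449, matching the figure quoted above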
A separate currency, the CFP franc (franc CFP), circulates in France's Pacific territories, worth €0.0084 (formerly 0.055 French franc).
In 1981, The Comoros established an arrangement with the French government similar to that of the CFA franc. Originally, 50 Comorian francs were worth 1 French franc. In January 1994, the rate was changed to 75 Comorian francs to the French franc. Since 1999, the currency has been pegged to the euro.
The conquest of most of western Europe by Revolutionary and Napoleonic France led to the franc's wide circulation. Following independence from the Kingdom of the Netherlands, the new Kingdom of Belgium in 1832 adopted its own Belgian franc, equivalent to the French one, followed by Luxembourg adopting the Luxembourgish franc in 1848 and Switzerland in 1850. Newly unified Italy adopted the lira on a similar basis in 1862.
In 1865, France, Belgium, Switzerland and Italy created the Latin Monetary Union (to be joined by Spain and Greece in 1868): each would possess a national currency unit (franc, lira, peseta, drachma) worth 4.5 g of silver or 0.290322 g of gold (fine), all freely exchangeable at a rate of 1:1. In the 1870s the gold value was made the fixed standard, a situation which was to continue until 1914.
In 1926 Belgium as well as France experienced depreciation and an abrupt collapse of confidence, leading to the introduction of a new gold currency for international transactions, the belga of 5 francs, and the country's withdrawal from the monetary union, which ceased to exist at the end of the year. The 1921 monetary union of Belgium and Luxembourg survived, however, forming the basis for full economic union in 1932.
Like the French franc, the Belgian and Luxembourg francs ceased to exist on 1 January 1999, when they became fixed at 1 EUR = 40.3399 BEF/LUF, thus a franc was worth €0.024789. Old franc coins and notes lost their legal tender status on 28 February 2002.
One Luxembourg franc was equal to one Belgian franc. Belgian francs were legal tender inside Luxembourg, and Luxembourg francs were legal tender in the whole of Belgium. (In reality, Luxembourg francs were only accepted as means of payment by shops and businesses in the Belgian province of Luxembourg adjacent to the independent Grand Duchy of Luxembourg, this for historical reasons.)
The equivalent name of the Belgian franc in Dutch, Belgium's other official language, was frank. As mentioned before, in Luxembourg the franc was called Frang (plural Frangen) in Luxembourgish.
The Swiss franc (ISO code: CHF or 756; German: Franken; Italian: franco), which appreciated significantly against the new European currency from April to September 2000, remains one of the world's strongest currencies, worth as of August 2023 just over one euro. The Swiss franc is used in Switzerland and in Liechtenstein. Liechtenstein retains the ability to mint its own currency, the Liechtenstein franc, which it does from time to time for commemorative or emergency purposes.
The name of the country "Swiss Confederation" is found on some of the coins in Latin (Confoederatio Helvetica), as Switzerland has four official languages, all of which are used on the notes. The denomination is abbreviated "Fr." on the coins which is the abbreviation in all four languages.
The Saar franc, linked at par to the French franc, was introduced in the Saar Protectorate in 1948. On 1 January 1957, the territory joined the Federal Republic of Germany; nevertheless, in its new member state of Saarland, the Saar franc continued to be the currency until 6 July 1959.
The name of the Saar franc in German, the main official language in the Protectorate, was Franken. Coins displaying German inscriptions and the coat of arms of the Protectorate were circulated and used together with French francs. As banknotes, only French franc bills existed. | [
{
"paragraph_id": 0,
"text": "The franc is any of various units of currency. One franc is typically divided into 100 centimes. The name is said to derive from the Latin inscription francorum rex (King of the Franks) used on early French coins and until the 18th century, or from the French franc, meaning \"frank\" (and \"free\" in certain contexts, such as coup franc, \"free kick\").",
"title": ""
},
{
"paragraph_id": 1,
"text": "The countries that use francs today include Switzerland, Liechtenstein, and most of Francophone Africa. The Swiss franc is a major world currency today due to the prominence of Swiss financial institutions.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Before the introduction of the euro in 1999, francs were also used in France, Belgium and Luxembourg, while Andorra and Monaco accepted the French franc as legal tender (Monégasque franc). The franc was also used in French colonies including Algeria and Cambodia. The franc is sometimes Italianised or Hispanicised as the franco, for instance in Luccan franco.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The franc was originally a French gold coin of 3.87 g minted in 1360 on the occasion of the release of King John II (\"the Good\"), held by the English since his capture at the Battle of Poitiers four years earlier. It was equivalent to one livre tournois (Tours pound).",
"title": "Origins"
},
{
"paragraph_id": 4,
"text": "The French franc was originally a gold coin issued in France from 1360 until 1380, then a silver coin issued between 1575 and 1641. The franc finally became the national currency from 1795 until 1999 (franc coins and notes were legal tender until 2002). Though abolished as a legal coin by King Louis XIII in 1641 in favor of the gold louis and silver écu, the term franc continued to be used in common parlance for the livre tournois. The franc was also minted for many of the former French colonies, such as Morocco, Algeria, French West Africa, and others. Today, after independence, many of these countries continue to use the franc as their standard denomination.",
"title": "French franc"
},
{
"paragraph_id": 5,
"text": "The value of the French franc was locked to the euro at 1 euro = 6.55957 FRF on 31 December 1998, and after the introduction of the euro notes and coins, ceased to be legal tender after 28 February 2002, although they were still exchangeable at banks until 19 February 2012.",
"title": "French franc"
},
{
"paragraph_id": 6,
"text": "Fourteen African countries use the franc CFA (in west Africa, Communauté financière africaine; in equatorial Africa, Coopération financière en Afrique centrale), originally (1945) worth 1.7 French francs and then from 1948, 2 francs (from 1960: 0.02 new franc) but after January 1994 worth only 0.01 French franc. Therefore, from January 1999, 1 CFA franc is equivalent to €0.00152449. On 22 December 2019, it was announced that the CFA franc would be replaced in 2020 by an independent currency to be called Eco.",
"title": "CFA and CFP francs"
},
{
"paragraph_id": 7,
"text": "A separate (franc CFP) circulates in France's Pacific territories, worth €0.0084 (formerly 0.055 French franc).",
"title": "CFA and CFP francs"
},
{
"paragraph_id": 8,
"text": "In 1981, The Comoros established an arrangement with the French government similar to that of the CFA franc. Originally, 50 Comorian francs were worth 1 French franc. In January 1994, the rate was changed to 75 Comorian francs to the French franc. Since 1999, the currency has been pegged to the euro.",
"title": "Comorian franc"
},
{
"paragraph_id": 9,
"text": "The conquest of most of western Europe by Revolutionary and Napoleonic France led to the franc's wide circulation. Following independence from the Kingdom of the Netherlands, the new Kingdom of Belgium in 1832 adopted its own Belgian franc, equivalent to the French one, followed by Luxembourg adopting the Luxembourgish franc in 1848 and Switzerland in 1850. Newly unified Italy adopted the lira on a similar basis in 1862.",
"title": "Belgian franc and Luxembourg franc"
},
{
"paragraph_id": 10,
"text": "In 1865, France, Belgium, Switzerland and Italy created the Latin Monetary Union (to be joined by Spain and Greece in 1868): each would possess a national currency unit (franc, lira, peseta, drachma) worth 4.5 g of silver or 0.290322 g of gold (fine), all freely exchangeable at a rate of 1:1. In the 1870s the gold value was made the fixed standard, a situation which was to continue until 1914.",
"title": "Belgian franc and Luxembourg franc"
},
{
"paragraph_id": 11,
"text": "In 1926 Belgium as well as France experienced depreciation and an abrupt collapse of confidence, leading to the introduction of a new gold currency for international transactions, the belga of 5 francs, and the country's withdrawal from the monetary union, which ceased to exist at the end of the year. The 1921 monetary union of Belgium and Luxembourg survived, however, forming the basis for full economic union in 1932.",
"title": "Belgian franc and Luxembourg franc"
},
{
"paragraph_id": 12,
"text": "Like the French franc, the Belgian and Luxembourg francs ceased to exist on 1 January 1999, when they became fixed at 1 EUR = 40.3399 BEF/LUF, thus a franc was worth €0.024789. Old franc coins and notes lost their legal tender status on 28 February 2002.",
"title": "Belgian franc and Luxembourg franc"
},
{
"paragraph_id": 13,
"text": "One Luxembourg franc was equal to one Belgian franc. Belgian francs were legal tender inside Luxembourg, and Luxembourg francs were legal tender in the whole of Belgium. (In reality, Luxembourg francs were only accepted as means of payment by shops and businesses in the Belgian province of Luxembourg adjacent to the independent Grand Duchy of Luxembourg, this for historical reasons.)",
"title": "Belgian franc and Luxembourg franc"
},
{
"paragraph_id": 14,
"text": "The equivalent name of the Belgian franc in Dutch, Belgium's other official language, was frank. As mentioned before, in Luxembourg the franc was called Frang (plural Frangen) in Luxembourgish.",
"title": "Belgian franc and Luxembourg franc"
},
{
"paragraph_id": 15,
"text": "The Swiss franc (ISO code: CHF or 756; German: Franken; Italian: franco), which appreciated significantly against the new European currency from April to September 2000, remains one of the world's strongest currencies, worth as of August 2023 just over one euro. The Swiss franc is used in Switzerland and in Liechtenstein. Liechtenstein retains the ability to mint its own currency, the Liechtenstein franc, which it does from time to time for commemorative or emergency purposes.",
"title": "Swiss franc and Liechtenstein franc"
},
{
"paragraph_id": 16,
"text": "The name of the country \"Swiss Confederation\" is found on some of the coins in Latin (Confoederatio Helvetica), as Switzerland has four official languages, all of which are used on the notes. The denomination is abbreviated \"Fr.\" on the coins which is the abbreviation in all four languages.",
"title": "Swiss franc and Liechtenstein franc"
},
{
"paragraph_id": 17,
"text": "The Saar franc, linked at par to the French franc, was introduced in the Saar Protectorate in 1948. On 1 January 1957, the territory joined the Federal Republic of Germany, nevertheless, in its new member state of Saarland, the Saar franc continued to be the currency until 6 July 1959.",
"title": "Saar franc"
},
{
"paragraph_id": 18,
"text": "The name of the Saar franc in German, the main official language in the Protectorate, was Franken. Coins displaying German inscriptions and the coat of arms of the Protectorate were circulated and used together with French francs. As banknotes, only French franc bills existed.",
"title": "Saar franc"
}
]
| The franc is any of various units of currency. One franc is typically divided into 100 centimes. The name is said to derive from the Latin inscription francorum rex used on early French coins and until the 18th century, or from the French franc, meaning "frank". The countries that use francs today include Switzerland, Liechtenstein, and most of Francophone Africa. The Swiss franc is a major world currency today due to the prominence of Swiss financial institutions. Before the introduction of the euro in 1999, francs were also used in France, Belgium and Luxembourg, while Andorra and Monaco accepted the French franc as legal tender. The franc was also used in French colonies including Algeria and Cambodia. The franc is sometimes Italianised or Hispanicised as the franco, for instance in Luccan franco. | 2001-05-16T18:40:36Z | 2023-10-30T10:13:43Z | [
"Template:Main",
"Template:Lang",
"Template:Lang-de",
"Template:Lang-it",
"Template:As of",
"Template:Authority control",
"Template:Flag",
"Template:Commons-inline",
"Template:Val",
"Template:Portal",
"Template:Cite web",
"Template:Short description",
"Template:Hatgrp",
"Template:Multiple image",
"Template:Flagicon",
"Template:Franc"
]
| https://en.wikipedia.org/wiki/Franc |
10,819 | Federal Reserve | The Federal Reserve System (often shortened to the Federal Reserve, or simply the Fed) is the central banking system of the United States. It was created on December 23, 1913, with the enactment of the Federal Reserve Act, after a series of financial panics (particularly the panic of 1907) led to the desire for central control of the monetary system in order to alleviate financial crises. Over the years, events such as the Great Depression in the 1930s and the Great Recession during the 2000s have led to the expansion of the roles and responsibilities of the Federal Reserve System.
Congress established three key objectives for monetary policy in the Federal Reserve Act: maximizing employment, stabilizing prices, and moderating long-term interest rates. The first two objectives are sometimes referred to as the Federal Reserve's dual mandate. Its duties have expanded over the years, and currently also include supervising and regulating banks, maintaining the stability of the financial system, and providing financial services to depository institutions, the U.S. government, and foreign official institutions. The Fed also conducts research into the economy and provides numerous publications, such as the Beige Book and the FRED database.
The Federal Reserve System is composed of several layers. It is governed by the presidentially-appointed board of governors or Federal Reserve Board (FRB). Twelve regional Federal Reserve Banks, located in cities throughout the nation, regulate and oversee privately-owned commercial banks. Nationally chartered commercial banks are required to hold stock in, and can elect some board members of, the Federal Reserve Bank of their region.
The Federal Open Market Committee (FOMC) sets monetary policy by adjusting the target for the federal funds rate, which influences market interest rates generally and via the monetary transmission mechanism in turn US economic activity. The FOMC consists of all seven members of the board of governors and the twelve regional Federal Reserve Bank presidents, though only five bank presidents vote at a time—the president of the New York Fed and four others who rotate through one-year voting terms. There are also various advisory councils. It has a structure unique among central banks, and is also unusual in that the United States Department of the Treasury, an entity outside of the central bank, prints the currency used.
The federal government sets the salaries of the board's seven governors, and it receives all the system's annual profits, after dividends on member banks' capital investments are paid, and an account surplus is maintained. In 2015, the Federal Reserve earned a net income of $100.2 billion and transferred $97.7 billion to the U.S. Treasury, and 2020 earnings were approximately $88.6 billion with remittances to the U.S. Treasury of $86.9 billion. Although an instrument of the U.S. government, the Federal Reserve System considers itself "an independent central bank because its monetary policy decisions do not have to be approved by the president or by anyone else in the executive or legislative branches of government, it does not receive funding appropriated by Congress, and the terms of the members of the board of governors span multiple presidential and congressional terms."
The primary declared motivation for creating the Federal Reserve System was to address banking panics. Other purposes are stated in the Federal Reserve Act, such as "to furnish an elastic currency, to afford means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes". Before the founding of the Federal Reserve System, the United States underwent several financial crises. A particularly severe crisis in 1907 led Congress to enact the Federal Reserve Act in 1913. Today the Federal Reserve System has responsibilities in addition to stabilizing the financial system.
Current functions of the Federal Reserve System include:
Banking institutions in the United States are required to hold reserves—amounts of currency and deposits in other banks—equal to only a fraction of the amount of the bank's deposit liabilities owed to customers. This practice is called fractional-reserve banking. As a result, banks usually invest the majority of the funds received from depositors. On rare occasions, too many of the bank's customers will withdraw their savings and the bank will need help from another institution to continue operating; this is called a bank run. Bank runs can lead to a multitude of social and economic problems. The Federal Reserve System was designed as an attempt to prevent or minimize the occurrence of bank runs, and possibly act as a lender of last resort when a bank run does occur. Many economists, following Nobel laureate Milton Friedman, believe that the Federal Reserve inappropriately refused to lend money to small banks during the bank runs of 1929; Friedman argued that this contributed to the Great Depression.
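As a minimal sketch of the fractional-reserve arithmetic described above, the following Python example computes the reserves held against a given level of deposit liabilities; the 10% reserve ratio and the dollar figures are hypothetical illustrations, not actual Federal Reserve requirements.

    def required_reserves(deposit_liabilities: float, reserve_ratio: float = 0.10) -> float:
        """Reserves held against deposit liabilities under a hypothetical 10% ratio."""
        return deposit_liabilities * reserve_ratio

    deposits = 500_000_000.0                 # hypothetical $500 million in customer deposits
    reserves = required_reserves(deposits)   # $50 million kept as reserves
    investable = deposits - reserves         # remaining $450 million can be invested or lent
    print(reserves, investable)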
Because some banks refused to clear checks from certain other banks during times of economic uncertainty, a check-clearing system was created in the Federal Reserve System. It is briefly described in The Federal Reserve System—Purposes and Functions as follows:
By creating the Federal Reserve System, Congress intended to eliminate the severe financial crises that had periodically swept the nation, especially the sort of financial panic that occurred in 1907. During that episode, payments were disrupted throughout the country because many banks and clearinghouses refused to clear checks drawn on certain other banks, a practice that contributed to the failure of otherwise solvent banks. To address these problems, Congress gave the Federal Reserve System the authority to establish a nationwide check-clearing system. The System, then, was to provide not only an elastic currency—that is, a currency that would expand or shrink in amount as economic conditions warranted—but also an efficient and equitable check-collection system.
In the United States, the Federal Reserve serves as the lender of last resort to those institutions that cannot obtain credit elsewhere and the collapse of which would have serious implications for the economy. It took over this role from the private sector "clearing houses" which operated during the Free Banking Era; whether public or private, the availability of liquidity was intended to prevent bank runs.
Through their discount window and credit operations, Reserve Banks provide liquidity to banks to meet short-term needs stemming from seasonal fluctuations in deposits or unexpected withdrawals. Longer-term liquidity may also be provided in exceptional circumstances. The rate the Fed charges banks for these loans is called the discount rate (officially the primary credit rate).
By making these loans, the Fed serves as a buffer against unexpected day-to-day fluctuations in reserve demand and supply. This contributes to the effective functioning of the banking system, alleviates pressure in the reserves market and reduces the extent of unexpected movements in the interest rates. For example, on September 16, 2008, the Federal Reserve Board authorized an $85 billion loan to stave off the bankruptcy of international insurance giant American International Group (AIG).
In its role as the central bank of the United States, the Fed serves as a banker's bank and as the government's bank. As the banker's bank, it helps to assure the safety and efficiency of the payments system. As the government's bank or fiscal agent, the Fed processes a variety of financial transactions involving trillions of dollars. Just as an individual might keep an account at a bank, the U.S. Treasury keeps a checking account with the Federal Reserve, through which incoming federal tax deposits and outgoing government payments are handled. As part of this service relationship, the Fed sells and redeems U.S. government securities such as savings bonds and Treasury bills, notes and bonds. It also issues the nation's coin and paper currency. The U.S. Treasury, through its Bureau of the Mint and Bureau of Engraving and Printing, actually produces the nation's cash supply and, in effect, sells the paper currency to the Federal Reserve Banks at manufacturing cost, and the coins at face value. The Federal Reserve Banks then distribute it to other financial institutions in various ways. During the Fiscal Year 2020, the Bureau of Engraving and Printing delivered 57.95 billion notes at an average cost of 7.4 cents per note.
Federal funds are the reserve balances (also called Federal Reserve Deposits) that private banks keep at their local Federal Reserve Bank. These balances are the namesake reserves of the Federal Reserve System. The purpose of keeping funds at a Federal Reserve Bank is to have a mechanism for private banks to lend funds to one another. This market for funds plays an important role in the Federal Reserve System as it is what inspired the name of the system and it is what is used as the basis for monetary policy. Monetary policy is put into effect partly by influencing how much interest the private banks charge each other for the lending of these funds.
Federal reserve accounts contain federal reserve credit, which can be converted into federal reserve notes. Private banks maintain their bank reserves in federal reserve accounts.
The Federal Reserve regulates private banks. The system was designed out of a compromise between the competing philosophies of privatization and government regulation. In 2006 Donald L. Kohn, vice chairman of the board of governors, summarized the history of this compromise:
Agrarian and progressive interests, led by William Jennings Bryan, favored a central bank under public, rather than banker, control. But the vast majority of the nation's bankers, concerned about government intervention in the banking business, opposed a central bank structure directed by political appointees. The legislation that Congress ultimately adopted in 1913 reflected a hard-fought battle to balance these two competing views and created the hybrid public-private, centralized-decentralized structure that we have today.
The balance between private interests and government can also be seen in the structure of the system. Private banks elect members of the board of directors at their regional Federal Reserve Bank while the members of the board of governors are selected by the president of the United States and confirmed by the Senate.
The Federal Banking Agency Audit Act, enacted in 1978 as Public Law 95-320, and 31 U.S.C. section 714 establish that the board of governors of the Federal Reserve System and the Federal Reserve banks may be audited by the Government Accountability Office (GAO).
The GAO has authority to audit check-processing, currency storage and shipments, and some regulatory and bank examination functions; however, there are restrictions on what the GAO may audit. Under the Federal Banking Agency Audit Act, 31 U.S.C. section 714(b), audits of the Federal Reserve Board and Federal Reserve banks do not include (1) transactions for or with a foreign central bank or government or non-private international financing organization; (2) deliberations, decisions, or actions on monetary policy matters; (3) transactions made under the direction of the Federal Open Market Committee; or (4) a part of a discussion or communication among or between members of the board of governors and officers and employees of the Federal Reserve System related to items (1), (2), or (3). See Federal Reserve System Audits: Restrictions on GAO's Access (GAO/T-GGD-94-44), statement of Charles A. Bowsher.
The board of governors in the Federal Reserve System has a number of supervisory and regulatory responsibilities in the U.S. banking system, but not complete responsibility. A general description of the types of regulation and supervision involved in the U.S. banking system is given by the Federal Reserve:
The Board also plays a major role in the supervision and regulation of the U.S. banking system. It has supervisory responsibilities for state-chartered banks that are members of the Federal Reserve System, bank holding companies (companies that control banks), the foreign activities of member banks, the U.S. activities of foreign banks, and Edge Act and "agreement corporations" (limited-purpose institutions that engage in a foreign banking business). The Board and, under delegated authority, the Federal Reserve Banks, supervise approximately 900 state member banks and 5,000 bank holding companies. Other federal agencies also serve as the primary federal supervisors of commercial banks; the Office of the Comptroller of the Currency supervises national banks, and the Federal Deposit Insurance Corporation supervises state banks that are not members of the Federal Reserve System.
Some regulations issued by the Board apply to the entire banking industry, whereas others apply only to member banks, that is, state banks that have chosen to join the Federal Reserve System and national banks, which by law must be members of the System. The Board also issues regulations to carry out major federal laws governing consumer credit protection, such as the Truth in Lending, Equal Credit Opportunity, and Home Mortgage Disclosure Acts. Many of these consumer protection regulations apply to various lenders outside the banking industry as well as to banks.
Members of the Board of Governors are in continual contact with other policy makers in government. They frequently testify before congressional committees on the economy, monetary policy, banking supervision and regulation, consumer credit protection, financial markets, and other matters.
The Board has regular contact with members of the President's Council of Economic Advisers and other key economic officials. The Chair also meets from time to time with the President of the United States and has regular meetings with the Secretary of the Treasury. The Chair has formal responsibilities in the international arena as well.
The board of directors of each Federal Reserve Bank District also has regulatory and supervisory responsibilities. If the board of directors of a district bank has judged that a member bank is performing or behaving poorly, it will report this to the board of governors. This policy is described in law:
Each Federal reserve bank shall keep itself informed of the general character and amount of the loans and investments of its member banks with a view to ascertaining whether undue use is being made of bank credit for the speculative carrying of or trading in securities, real estate, or commodities, or for any other purpose inconsistent with the maintenance of sound credit conditions; and, in determining whether to grant or refuse advances, rediscounts, or other credit accommodations, the Federal reserve bank shall give consideration to such information. The chairman of the Federal reserve bank shall report to the Board of Governors of the Federal Reserve System any such undue use of bank credit by any member bank, together with his recommendation. Whenever, in the judgment of the Board of Governors of the Federal Reserve System, any member bank is making such undue use of bank credit, the Board may, in its discretion, after reasonable notice and an opportunity for a hearing, suspend such bank from the use of the credit facilities of the Federal Reserve System and may terminate such suspension or may renew it from time to time.
The Federal Reserve plays a role in the U.S. payments system. The twelve Federal Reserve Banks provide banking services to depository institutions and to the federal government. For depository institutions, they maintain accounts and provide various payment services, including collecting checks, electronically transferring funds, and distributing and receiving currency and coin. For the federal government, the Reserve Banks act as fiscal agents, paying Treasury checks; processing electronic payments; and issuing, transferring, and redeeming U.S. government securities.
In the Depository Institutions Deregulation and Monetary Control Act of 1980, Congress reaffirmed that the Federal Reserve should promote an efficient nationwide payments system. The act subjects all depository institutions, not just member commercial banks, to reserve requirements and grants them equal access to Reserve Bank payment services. The Federal Reserve plays a role in the nation's retail and wholesale payments systems by providing financial services to depository institutions. Retail payments are generally for relatively small-dollar amounts and often involve a depository institution's retail clients—individuals and smaller businesses. The Reserve Banks' retail services include distributing currency and coin, collecting checks, electronically transferring funds through FedACH (the Federal Reserve's automated clearing house system), and beginning in 2023, facilitating instant payments using the FedNow service. By contrast, wholesale payments are generally for large-dollar amounts and often involve a depository institution's large corporate customers or counterparties, including other financial institutions. The Reserve Banks' wholesale services include electronically transferring funds through the Fedwire Funds Service and transferring securities issued by the U.S. government, its agencies, and certain other entities through the Fedwire Securities Service.
The Federal Reserve System has a "unique structure that is both public and private" and is described as "independent within the government" rather than "independent of government". The System does not require public funding, and derives its authority and purpose from the Federal Reserve Act, which was passed by Congress in 1913 and is subject to Congressional modification or repeal. The four main components of the Federal Reserve System are (1) the board of governors, (2) the Federal Open Market Committee, (3) the twelve regional Federal Reserve Banks, and (4) the member banks throughout the country.
The seven-member board of governors is a large federal agency that functions in business oversight by examining national banks. It is charged with the overseeing of the 12 District Reserve Banks and setting national monetary policy. It also supervises and regulates the U.S. banking system in general. Governors are appointed by the president of the United States and confirmed by the Senate for staggered 14-year terms. One term begins every two years, on February 1 of even-numbered years, and members serving a full term cannot be renominated for a second term. "[U]pon the expiration of their terms of office, members of the Board shall continue to serve until their successors are appointed and have qualified." The law provides for the removal of a member of the board by the president "for cause". The board is required to make an annual report of operations to the Speaker of the U.S. House of Representatives.
The chair and vice chair of the board of governors are appointed by the president from among the sitting governors. They both serve a four-year term and they can be renominated as many times as the president chooses, until their terms on the board of governors expire.
The current members of the board of governors are:
In late December 2011, President Barack Obama nominated Jeremy C. Stein, a Harvard University finance professor and a Democrat, and Jerome Powell, formerly of Dillon Read, Bankers Trust and The Carlyle Group and a Republican. Both candidates also have Treasury Department experience in the Obama and George H. W. Bush administrations respectively.
"Obama administration officials [had] regrouped to identify Fed candidates after Peter Diamond, a Nobel Prize-winning economist, withdrew his nomination to the board in June [2011] in the face of Republican opposition. Richard Clarida, a potential nominee who was a Treasury official under George W. Bush, pulled out of consideration in August [2011]", one account of the December nominations noted. The two other Obama nominees in 2011, Janet Yellen and Sarah Bloom Raskin, were confirmed in September. One of the vacancies was created in 2011 with the resignation of Kevin Warsh, who took office in 2006 to fill the unexpired term ending January 31, 2018, and resigned his position effective March 31, 2011. In March 2012, U.S. Senator David Vitter (R, LA) said he would oppose Obama's Stein and Powell nominations, dampening near-term hopes for approval. However, Senate leaders reached a deal, paving the way for affirmative votes on the two nominees in May 2012 and bringing the board to full strength for the first time since 2006 with Duke's service after term end. Later, on January 6, 2014, the United States Senate confirmed Yellen's nomination to be chair of the Federal Reserve Board of Governors; she was the first woman to hold the position. Subsequently, President Obama nominated Stanley Fischer to replace Yellen as the vice-chair.
In April 2014, Stein announced he was leaving to return to Harvard May 28 with four years remaining on his term. At the time of the announcement, the FOMC "already is down three members as it awaits the Senate confirmation of ... Fischer and Lael Brainard, and as [President] Obama has yet to name a replacement for ... Duke. ... Powell is still serving as he awaits his confirmation for a second term."
Allan R. Landon, former president and CEO of the Bank of Hawaii, was nominated in early 2015 by President Obama to the board.
In July 2015, President Obama nominated University of Michigan economist Kathryn M. Dominguez to fill the second vacancy on the board. The Senate had not yet acted on Landon's confirmation by the time of the second nomination.
Daniel Tarullo submitted his resignation from the board on February 10, 2017, effective on or around April 5, 2017.
The Federal Open Market Committee (FOMC) consists of twelve members, seven from the board of governors and five of the regional Federal Reserve Bank presidents. The FOMC oversees and sets policy on open market operations, the principal tool of national monetary policy. These operations affect the amount of Federal Reserve balances available to depository institutions, thereby influencing overall monetary and credit conditions. The FOMC also directs operations undertaken by the Federal Reserve in foreign exchange markets. The FOMC must reach consensus on all decisions. The president of the Federal Reserve Bank of New York is a permanent member of the FOMC; the presidents of the other banks rotate membership at two- and three-year intervals. All Regional Reserve Bank presidents contribute to the committee's assessment of the economy and of policy options, but only the five presidents who are then members of the FOMC vote on policy decisions. The FOMC determines its own internal organization and, by tradition, elects the chair of the board of governors as its chair and the president of the Federal Reserve Bank of New York as its vice chair. Formal meetings typically are held eight times each year in Washington, D.C. Nonvoting Reserve Bank presidents also participate in Committee deliberations and discussion. Telephone consultations and other meetings are held when needed.
There is very strong consensus among economists against politicising the FOMC.
The Federal Advisory Council, composed of twelve representatives of the banking industry, advises the board on all matters within its jurisdiction.
There are 12 Federal Reserve Banks, each of which is responsible for member banks located in its district. They are located in Boston, New York, Philadelphia, Cleveland, Richmond, Atlanta, Chicago, St. Louis, Minneapolis, Kansas City, Dallas, and San Francisco. The size of each district was set based upon the population distribution of the United States when the Federal Reserve Act was passed.
The charter and organization of each Federal Reserve Bank is established by law and cannot be altered by the member banks. Member banks do, however, elect six of the nine members of the Federal Reserve Banks' boards of directors.
Each regional Bank has a president, who is the chief executive officer of their Bank. Each regional Reserve Bank's president is nominated by their Bank's board of directors, but the nomination is contingent upon approval by the board of governors. Presidents serve five-year terms and may be reappointed.
Each regional Bank's board consists of nine members. Members are broken down into three classes: A, B, and C. There are three board members in each class. Class A members are chosen by the regional Bank's shareholders, and are intended to represent member banks' interests. Member banks are divided into three categories: large, medium, and small. Each category elects one of the three class A board members. Class B board members are also nominated by the region's member banks, but class B board members are supposed to represent the interests of the public. Lastly, class C board members are appointed by the board of governors, and are also intended to represent the interests of the public.
The Federal Reserve Banks have an intermediate legal status, with some features of private corporations and some features of public federal agencies. The United States has an interest in the Federal Reserve Banks as tax-exempt federally created instrumentalities whose profits belong to the federal government, but this interest is not proprietary. In Lewis v. United States, the United States Court of Appeals for the Ninth Circuit stated that: "The Reserve Banks are not federal instrumentalities for purposes of the FTCA [the Federal Tort Claims Act], but are independent, privately owned and locally controlled corporations." The opinion went on to say, however, that: "The Reserve Banks have properly been held to be federal instrumentalities for some purposes." Another relevant decision is Scott v. Federal Reserve Bank of Kansas City, in which the distinction is made between Federal Reserve Banks, which are federally created instrumentalities, and the board of governors, which is a federal agency.
Regarding the structural relationship between the twelve Federal Reserve banks and the various commercial (member) banks, political science professor Michael D. Reagan has written:
... the "ownership" of the Reserve Banks by the commercial banks is symbolic; they do not exercise the proprietary control associated with the concept of ownership nor share, beyond the statutory dividend, in Reserve Bank "profits." ... Bank ownership and election at the base are therefore devoid of substantive significance, despite the superficial appearance of private bank control that the formal arrangement creates.
A member bank is a private institution and owns stock in its regional Federal Reserve Bank. All nationally chartered banks hold stock in one of the Federal Reserve Banks. State chartered banks may choose to be members (and hold stock in their regional Federal Reserve bank) upon meeting certain standards.
The amount of stock a member bank must own is equal to 3% of its combined capital and surplus. However, holding stock in a Federal Reserve bank is not like owning stock in a publicly traded company. These stocks cannot be sold or traded, and member banks do not control the Federal Reserve Bank as a result of owning this stock. From their Regional Bank, member banks with $10 billion or less in assets receive a dividend of 6%, while member banks with more than $10 billion in assets receive the lesser of 6% or the current 10-year Treasury auction rate. The remainder of the regional Federal Reserve Banks' profits is given over to the United States Treasury Department. In 2015, the Federal Reserve Banks made a profit of $100.2 billion and distributed $2.5 billion in dividends to member banks as well as returning $97.7 billion to the U.S. Treasury.
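The stock-subscription and dividend rules described above can be summarized in a short Python sketch; the function names and example figures are hypothetical, while the 3% subscription, the $10 billion threshold, and the 6% cap come from the text.

    def required_stock(capital_and_surplus: float) -> float:
        """Stock a member bank must hold: 3% of its combined capital and surplus."""
        return 0.03 * capital_and_surplus

    def dividend_rate(total_assets: float, ten_year_treasury_rate: float) -> float:
        """Annual dividend rate on Reserve Bank stock, per the rule described above."""
        if total_assets <= 10_000_000_000:   # $10 billion or less in assets
            return 0.06
        return min(0.06, ten_year_treasury_rate)

    print(required_stock(2_000_000_000))          # 60000000.0 -> a $60 million subscription
    print(dividend_rate(25_000_000_000, 0.045))   # 0.045 -> lesser of 6% and the Treasury rate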
About 38% of U.S. banks are members of their regional Federal Reserve Bank.
An external auditor selected by the audit committee of the Federal Reserve System regularly audits the Board of Governors and the Federal Reserve Banks. The GAO will audit some activities of the Board of Governors. These audits do not cover "most of the Fed's monetary policy actions or decisions, including discount window lending (direct loans to financial institutions), open-market operations and any other transactions made under the direction of the Federal Open Market Committee" ...[nor may the GAO audit] "dealings with foreign governments and other central banks."
The annual and quarterly financial statements prepared by the Federal Reserve System conform to a basis of accounting that is set by the Federal Reserve Board and does not conform to Generally Accepted Accounting Principles (GAAP) or government Cost Accounting Standards (CAS). The financial reporting standards are defined in the Financial Accounting Manual for the Federal Reserve Banks. The cost accounting standards are defined in the Planning and Control System Manual. As of 27 August 2012, the Federal Reserve Board has been publishing unaudited financial reports for the Federal Reserve banks every quarter.
On November 7, 2008, Bloomberg L.P. brought a lawsuit against the board of governors of the Federal Reserve System to force the board to reveal the identities of firms for which it provided guarantees during the financial crisis of 2007–2008. Bloomberg, L.P. won at the trial court and the Fed's appeals were rejected at both the United States Court of Appeals for the Second Circuit and the U.S. Supreme Court. The data was released on March 31, 2011.
The term "monetary policy" refers to the actions undertaken by a central bank, such as the Federal Reserve, to influence economic activity (the overall demand for goods and services) to help promote national economic goals. The Federal Reserve Act of 1913 gave the Federal Reserve authority to set monetary policy in the United States. The Fed's mandate for monetary policy is commonly known as the dual mandate of promoting maximum employment and stable prices, the latter being interpreted as a stable inflation rate of 2 percent per year on average. The Fed's monetary policy influences economic activity by influencing the general level of interest rates in the economy, which again via the monetary transmission mechanism affects households' and firms' demand for goods and services and in turn employment and inflation.
The Federal Reserve sets monetary policy by influencing the federal funds rate, which is the rate of interbank lending of reserve balances. The rate that banks charge each other for these loans is determined in the interbank market, and the Federal Reserve influences this rate through the "tools" of monetary policy described in the Tools section below. The federal funds rate is a short-term interest rate that the FOMC focuses on, which affects the longer-term interest rates throughout the economy. The Federal Reserve explained the implementation of its monetary policy in 2021:
The FOMC has the ability to influence the federal funds rate--and thus the cost of short-term interbank credit--by changing the rate of interest the Fed pays on reserve balances that banks hold at the Fed. A bank is unlikely to lend to another bank (or to any of its customers) at an interest rate lower than the rate that the bank can earn on reserve balances held at the Fed. And because overall reserve balances are currently abundant, if a bank wants to borrow reserve balances, it likely will be able to do so without having to pay a rate much above the rate of interest paid by the Fed.
Changes in the target for the federal funds rate affect overall financial conditions through various channels, including subsequent changes in the market interest rates that commercial banks and other lenders charge on short-term and longer-term loans, and changes in asset prices and in currency exchange rates, which in turn affect private consumption, investment, and net exports. By easing or tightening the stance of monetary policy, i.e. lowering or raising its target for the federal funds rate, the Fed can either spur or restrain growth in the overall US demand for goods and services.
The Federal Reserve uses four main tools to implement its monetary policy:
The Federal Reserve System implements monetary policy largely by targeting the federal funds rate. This is the interest rate that banks charge each other for overnight loans of federal funds, which are the reserves held by banks at the Fed. This rate is actually determined by the market and is not explicitly mandated by the Fed. The Fed therefore tries to align the effective federal funds rate with the targeted rate, mainly by adjusting its IORB rate. The Federal Reserve System usually adjusts the federal funds rate target by 0.25% or 0.50% at a time.
The interest on reserve balances (IORB) is the interest that the Fed pays on funds held by commercial banks in their reserve balance accounts at the individual Federal Reserve System banks. It is an administered interest rate (i.e. set directly by the Fed, as opposed to a market interest rate, which is determined by the forces of supply and demand). As banks are unlikely to lend their reserves in the FFR market for less than they get paid by the Fed, the IORB guides the effective FFR and is used as the primary tool of the Fed's monetary policy.
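A minimal sketch of that floor logic, with purely illustrative rates (not actual Fed settings): a bank earning the IORB rate on balances at the Fed has no reason to lend those balances to another bank for less, so the effective federal funds rate settles at or slightly above the IORB rate.

    # Stylized illustration of why the IORB rate acts as a floor for the federal funds rate.
    # All rates are hypothetical examples, not actual Federal Reserve settings.

    iorb_rate = 0.0525                                    # rate the Fed pays on reserve balances
    interbank_offers = [0.0500, 0.0515, 0.0527, 0.0530]   # hypothetical federal funds offers

    # A bank holding reserves only lends them if the offer beats what the Fed already pays.
    accepted = [offer for offer in interbank_offers if offer > iorb_rate]
    print(accepted)   # [0.0527, 0.053] -> trading clusters just above the IORB rate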
Open market operations are done through the sale and purchase of United States Treasury securities, sometimes called "Treasury bills" or more informally "T-bills" or "Treasuries". The Federal Reserve buys Treasury bills from its primary dealers, which have accounts at depository institutions.
The Federal Reserve's objective for open market operations has varied over the years. During the 1980s, the focus gradually shifted toward attaining a specified level of the federal funds rate (the rate that banks charge each other for overnight loans of federal funds, which are the reserves held by banks at the Fed), a process that was largely complete by the end of the decade.
Until the 2007–2008 financial crisis, the Fed used open market operations as its primary tool to adjust the supply of reserve balances in order to keep the federal funds rate around the Fed's target. This regime is also known as a limited reserves regime. After the financial crisis, the Federal Reserve has adopted a so-called ample reserves regime where open market operations leading to modest changes in the supply of reserves are no longer effective in influencing the FFR. Instead the Fed uses its administered rates, in particular the IORB rate, to influence the FFR. However, open market operations are still an important maintenance tool in the overall framework of the conduct of monetary policy as they are used for ensuring that reserves remain ample.
To smooth temporary or cyclical changes in the money supply, the Open Market Desk at the Federal Reserve Bank of New York engages in repurchase agreements (repos) with its primary dealers. Repos are essentially secured, short-term lending by the Fed. On the day of the transaction, the Fed deposits money in a primary dealer's reserve account and receives the promised securities as collateral. When the transaction matures, the process unwinds: the Fed returns the collateral and charges the primary dealer's reserve account for the principal and accrued interest. The term of the repo (the time between settlement and maturity) can vary from 1 day (called an overnight repo) to 65 days.
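As a rough numerical illustration of those mechanics, the sketch below computes the amount charged to the dealer's reserve account at maturity, assuming simple interest on an actual/360 day-count basis (a common money-market convention, assumed here) and hypothetical figures.

    # Hypothetical overnight repo: the Fed lends cash against collateral on settlement day
    # and, at maturity, debits the dealer's reserve account for principal plus accrued interest.
    # Simple interest on an actual/360 basis is assumed for illustration.

    principal = 1_000_000_000     # $1 billion lent to the primary dealer (hypothetical)
    repo_rate = 0.0530            # annualized repo rate (hypothetical)
    term_days = 1                 # overnight repo; terms can run up to 65 days per the text

    accrued_interest = principal * repo_rate * term_days / 360
    print(round(accrued_interest, 2))               # 147222.22
    print(round(principal + accrued_interest, 2))   # 1000147222.22 charged at maturity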
The Federal Reserve System also directly sets the discount rate, which is the interest rate for "discount window lending", overnight loans that member banks borrow directly from the Fed. This rate is generally set at a rate close to 100 basis points above the target federal funds rate. The idea is to encourage banks to seek alternative funding before using the "discount rate" option. The equivalent operation by the European Central Bank is referred to as the "marginal lending facility".
Both the discount rate and the federal funds rate influence the prime rate, which is usually about 3 percentage points higher than the federal funds rate.
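Both spreads are simple add-ons to the federal funds target, as the sketch below shows; the target value used here is hypothetical, while the roughly 100-basis-point and 3-percentage-point spreads follow the description above.

    # Typical spreads relative to the federal funds target, per the description above.
    # The target itself is a hypothetical example.

    fed_funds_target = 0.0525                    # assumed FOMC target
    discount_rate = fed_funds_target + 0.0100    # generally about 100 basis points above the target
    prime_rate = fed_funds_target + 0.0300       # usually about 3 percentage points higher

    print(f"discount rate ~ {discount_rate:.2%}")   # ~6.25%
    print(f"prime rate    ~ {prime_rate:.2%}")      # ~8.25%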
The Term Deposit facility is a program through which the Federal Reserve Banks offer interest-bearing term deposits to eligible institutions. It is intended to facilitate the implementation of monetary policy by providing a tool by which the Federal Reserve can manage the aggregate quantity of reserve balances held by depository institutions. Funds placed in term deposits are removed from the accounts of participating institutions for the life of the term deposit and thus drain reserve balances from the banking system. The program was announced December 9, 2009, and approved April 30, 2010, with an effective date of June 4, 2010. Fed Chair Ben S. Bernanke, testifying before the House Committee on Financial Services, stated that the Term Deposit Facility would be used to reverse the expansion of credit during the Great Recession, by drawing funds out of the money markets into the Federal Reserve Banks. It would therefore result in increased market interest rates, acting as a brake on economic activity and inflation. The Federal Reserve authorized up to five "small-value offerings" in 2010 as a pilot program. After three of the offering auctions were successfully completed, it was announced that small-value auctions would continue on an ongoing basis.
A less conventional tool of the Federal Reserve is quantitative easing. Under that policy, the Federal Reserve purchases corporate bonds and mortgage-backed securities held by banks or other financial institutions. This in effect puts money back into the financial institutions and allows them to make loans and conduct normal business. The bursting of the United States housing bubble prompted the Fed to buy mortgage-backed securities for the first time in November 2008. Over six weeks, a total of $1.25 trillion was purchased in order to stabilize the housing market, about one-fifth of all U.S. government-backed mortgages.
An instrument of monetary policy adjustment historically employed by the Federal Reserve System was the fractional reserve requirement, also known as the required reserve ratio. The required reserve ratio set the balance that the Federal Reserve System required a depository institution to hold in the Federal Reserve Banks. The required reserve ratio was set by the board of governors of the Federal Reserve System. The reserve requirements have changed over time and some history of these changes is published by the Federal Reserve.
As a response to the financial crisis of 2008, the Federal Reserve started making interest payments on depository institutions' required and excess reserve balances. The payment of interest on excess reserves gave the central bank greater opportunity to address credit market conditions while maintaining the federal funds rate close to the target rate set by the FOMC. The reserve requirement did not play a significant role in the post-2008 interest-on-excess-reserves regime, and in March 2020, the reserve ratio was set to zero for all banks, which meant that no bank was required to hold any reserves, and hence the reserve requirement effectively ceased to exist.
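To make the now-dormant requirement concrete, the sketch below applies an assumed 10% ratio to a hypothetical deposit base and then shows the effect of the March 2020 change to a zero ratio; both the ratio and the deposit figure are illustrative only.

    # Illustration of the fractional reserve requirement described above.
    # The 10% ratio and the deposit figure are hypothetical; since March 2020 the ratio is 0%.

    def required_reserves(deposit_liabilities, reserve_ratio):
        return deposit_liabilities * reserve_ratio

    deposits = 5_000_000_000                    # $5 billion in deposit liabilities (assumed)
    print(required_reserves(deposits, 0.10))    # pre-2020-style requirement: $500,000,000
    print(required_reserves(deposits, 0.00))    # ratio set to zero in March 2020: $0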
In order to address problems related to the subprime mortgage crisis and United States housing bubble, several new tools were created. The first new tool, called the Term Auction Facility, was added on December 12, 2007. It was announced as a temporary tool, but remained in place for a prolonged period of time. Creation of the second new tool, called the Term Securities Lending Facility, was announced on March 11, 2008. The main difference between these two facilities was that the Term Auction Facility was used to inject cash into the banking system whereas the Term Securities Lending Facility was used to inject Treasury securities into the banking system. Creation of the third tool, called the Primary Dealer Credit Facility (PDCF), was announced on March 16, 2008. The PDCF was a fundamental change in Federal Reserve policy because it enabled the Fed to lend directly to primary dealers, which was previously against Fed policy. The differences between these three facilities were described by the Federal Reserve:
The Term Auction Facility program offers term funding to depository institutions via a bi-weekly auction, for fixed amounts of credit. The Term Securities Lending Facility will be an auction for a fixed amount of lending of Treasury general collateral in exchange for OMO-eligible and AAA/Aaa rated private-label residential mortgage-backed securities. The Primary Dealer Credit Facility now allows eligible primary dealers to borrow at the existing Discount Rate for up to 120 days.
Some measures taken by the Federal Reserve to address the financial crisis had not been used since the Great Depression.
The Term Auction Facility was a program in which the Federal Reserve auctioned term funds to depository institutions. The creation of this facility was announced by the Federal Reserve on December 12, 2007, and was done in conjunction with the Bank of Canada, the Bank of England, the European Central Bank, and the Swiss National Bank to address elevated pressures in short-term funding markets. The reason it was created was that banks were not lending funds to one another and banks in need of funds were refusing to go to the discount window. Banks were not lending money to each other because there was a fear that the loans would not be paid back. Banks refused to go to the discount window because it was usually associated with the stigma of bank failure. Under the Term Auction Facility, the identity of the banks in need of funds was protected in order to avoid the stigma of bank failure. Foreign exchange swap lines with the European Central Bank and Swiss National Bank were opened so the banks in Europe could have access to U.S. dollars. The final Term Auction Facility auction was carried out on March 8, 2010.
The Term Securities Lending Facility was a 28-day facility that offered Treasury general collateral to the Federal Reserve Bank of New York's primary dealers in exchange for other program-eligible collateral. It was intended to promote liquidity in the financing markets for Treasury and other collateral and thus to foster the functioning of financial markets more generally. Like the Term Auction Facility, the TSLF was done in conjunction with the Bank of Canada, the Bank of England, the European Central Bank, and the Swiss National Bank. The resource allowed dealers to switch debt that was less liquid for U.S. government securities that were easily tradable. The currency swap lines with the European Central Bank and Swiss National Bank were increased. The TSLF was closed on February 1, 2010.
The Primary Dealer Credit Facility (PDCF) was an overnight loan facility that provided funding to primary dealers in exchange for a specified range of eligible collateral and was intended to foster the functioning of financial markets more generally. It ceased extending credit on March 31, 2021.
The Asset Backed Commercial Paper Money Market Mutual Fund Liquidity Facility (ABCPMMMFLF) was also called the AMLF. The Facility began operations on September 22, 2008, and was closed on February 1, 2010.
All U.S. depository institutions, bank holding companies (parent companies or U.S. broker-dealer affiliates), or U.S. branches and agencies of foreign banks were eligible to borrow under this facility at the discretion of the Federal Reserve Bank of Boston (FRBB).
Collateral eligible for pledge under the Facility was required to meet the following criteria:
On October 7, 2008, the Federal Reserve further expanded the collateral it would loan against to include commercial paper using the Commercial Paper Funding Facility (CPFF). The action made the Fed a crucial source of credit for non-financial businesses in addition to commercial banks and investment firms. Fed officials said they would buy as much of the debt as necessary to get the market functioning again. They refused to say how much that might be, but they noted that around $1.3 trillion worth of commercial paper would qualify. There was $1.61 trillion in outstanding commercial paper, seasonally adjusted, on the market as of 1 October 2008, according to the most recent data from the Fed. That was down from $1.70 trillion in the previous week. Since the summer of 2007, the market had shrunk from more than $2.2 trillion. This program lent out a total of $738 billion before it was closed. Forty-five out of 81 of the companies participating in this program were foreign firms. Research shows that Troubled Asset Relief Program (TARP) recipients were twice as likely to participate in the program as other commercial paper issuers who did not take advantage of the TARP bailout. The Fed incurred no losses from the CPFF.
The first attempt at a national currency was during the American Revolutionary War. In 1775, the Continental Congress, as well as the states, began issuing paper currency, calling the bills "Continentals". The Continentals were backed only by future tax revenue, and were used to help finance the Revolutionary War. Overprinting, as well as British counterfeiting, caused the value of the Continental to diminish quickly. This experience with paper money led the United States to strip the power to issue Bills of Credit (paper money) from a draft of the new Constitution on August 16, 1787, as well as banning such issuance by the various states, and limiting the states' ability to make anything but gold or silver coin legal tender on August 28.
In 1791, the government granted the First Bank of the United States a charter to operate as the U.S. central bank until 1811. The First Bank of the United States came to an end under President Madison when Congress refused to renew its charter. The Second Bank of the United States was established in 1816, and lost its authority to be the central bank of the U.S. twenty years later under President Jackson when its charter expired. Both banks were based upon the Bank of England. Ultimately, a third national bank, known as the Federal Reserve, was established in 1913 and still exists to this day.
The first U.S. institution with central banking responsibilities was the First Bank of the United States, chartered by Congress and signed into law by President George Washington on February 25, 1791, at the urging of Alexander Hamilton. This was done despite strong opposition from Thomas Jefferson and James Madison, among numerous others. The charter was for twenty years and expired in 1811 under President Madison, when Congress refused to renew it.
In 1816, however, Madison revived it in the form of the Second Bank of the United States. Years later, early renewal of the bank's charter became the primary issue in the reelection of President Andrew Jackson. After Jackson, who was opposed to the central bank, was reelected, he pulled the government's funds out of the bank. Jackson was the only President to completely pay off the national debt, but his efforts to close the bank contributed to the Panic of 1837. The bank's charter was not renewed in 1836, and it fully dissolved after operating for several years as a private corporation. From 1837 to 1862, in the Free Banking Era, there was no formal central bank. From 1846 to 1921, an Independent Treasury System operated. From 1863 to 1913, a system of national banks was instituted by the 1863 National Banking Act, during which a series of bank panics occurred in 1873, 1893, and 1907.
The main motivation for the third central banking system came from the Panic of 1907, which caused a renewed desire among legislators, economists, and bankers for an overhaul of the monetary system. During the last quarter of the 19th century and the beginning of the 20th century, the United States economy went through a series of financial panics. According to many economists, the previous national banking system had two main weaknesses: an inelastic currency and a lack of liquidity. In 1908, Congress enacted the Aldrich–Vreeland Act, which provided for an emergency currency and established the National Monetary Commission to study banking and currency reform. The National Monetary Commission returned with recommendations which were repeatedly rejected by Congress. A revision crafted during a secret meeting on Jekyll Island by Senator Aldrich and representatives of the nation's top finance and industrial groups later became the basis of the Federal Reserve Act. The House voted on December 22, 1913, with 298 voting yes to 60 voting no. The Senate voted 43–25 on December 23, 1913. President Woodrow Wilson signed the bill later that day.
The head of the bipartisan National Monetary Commission was financial expert and Senate Republican leader Nelson Aldrich. Aldrich set up two commissions – one to study the American monetary system in depth and the other, headed by Aldrich himself, to study the European central banking systems and report on them.
In early November 1910, Aldrich met with five well known members of the New York banking community to devise a central banking bill. Paul Warburg, an attendee of the meeting and longtime advocate of central banking in the U.S., later wrote that Aldrich was "bewildered at all that he had absorbed abroad and he was faced with the difficult task of writing a highly technical bill while being harassed by the daily grind of his parliamentary duties". After ten days of deliberation, the bill, which would later be referred to as the "Aldrich Plan", was agreed upon. It had several key components, including a central bank with a Washington-based headquarters and fifteen branches located throughout the U.S. in geographically strategic locations, and a uniform elastic currency based on gold and commercial paper. Aldrich believed a central banking system with no political involvement was best, but was convinced by Warburg that a plan with no public control was not politically feasible. The compromise involved representation of the public sector on the board of directors.
Aldrich's bill met much opposition from politicians. Critics accused Aldrich of being biased due to his close ties to wealthy bankers such as J. P. Morgan and John D. Rockefeller Jr., Aldrich's son-in-law. Most Republicans favored the Aldrich Plan, but it lacked enough support in Congress to pass because rural and western states viewed it as favoring the "eastern establishment". In contrast, progressive Democrats favored a reserve system owned and operated by the government; they believed that public ownership of the central bank would end Wall Street's control of the American currency supply. Conservative Democrats fought for a privately owned, yet decentralized, reserve system, which would still be free of Wall Street's control.
The original Aldrich Plan was dealt a fatal blow in 1912, when Democrats won the White House and Congress. Nonetheless, President Woodrow Wilson believed that the Aldrich plan would suffice with a few modifications. The plan became the basis for the Federal Reserve Act, which was proposed by Senator Robert Owen in May 1913. The primary difference between the two bills was the transfer of control of the board of directors (called the Federal Open Market Committee in the Federal Reserve Act) to the government. The bill passed Congress on December 23, 1913, on a mostly partisan basis, with most Democrats voting "yea" and most Republicans voting "nay".
Key laws affecting the Federal Reserve have been:
The Federal Reserve records and publishes large amounts of data. A few websites where data is published are at the board of governors' Economic Data and Research page, the board of governors' statistical releases and historical data page, and at the St. Louis Fed's FRED (Federal Reserve Economic Data) page. The Federal Open Market Committee (FOMC) examines many economic indicators prior to determining monetary policy.
Some criticism involves economic data compiled by the Fed. The Fed sponsors much of the monetary economics research in the U.S., and Lawrence H. White objects that this makes it less likely for researchers to publish findings challenging the status quo.
The net worth of households and nonprofit organizations in the United States is published by the Federal Reserve in a report titled Flow of Funds. At the end of the third quarter of fiscal year 2012, this value was $64.8 trillion. At the end of the first quarter of fiscal year 2014, this value was $95.5 trillion.
The most common measures are named M0 (narrowest), M1, M2, and M3. In the United States they are defined by the Federal Reserve as follows:
The Federal Reserve stopped publishing M3 statistics in March 2006, saying that the data cost a lot to collect but did not provide significantly useful information. The other three money supply measures continue to be provided in detail.
The Personal consumption expenditures price index, also referred to as simply the PCE price index, is used as one measure of the value of money. It is a United States-wide indicator of the average increase in prices for all domestic personal consumption. Using a variety of data including United States Consumer Price Index and U.S. Producer Price Index prices, it is derived from the largest component of the gross domestic product in the BEA's National Income and Product Accounts, personal consumption expenditures.
One of the Fed's main roles is to maintain price stability, which means that the Fed's ability to keep a low inflation rate is a long-term measure of their success. Although the Fed is not required to maintain inflation within a specific range, their long run target for the growth of the PCE price index is between 1.5 and 2 percent. There has been debate among policy makers as to whether the Federal Reserve should have a specific inflation targeting policy.
Most mainstream economists favor a low, steady rate of inflation. Diane C. Swonk, a chief economist and advisor to the Federal Reserve, the Congressional Budget Office, and the Council of Economic Advisers, observed in 2022 that "From the Fed's perspective, you have to remember inflation is kind of like cancer. If you don't deal with it now with something that may be painful, you could have something that metastasized and becomes much more chronic later on."
Low (as opposed to zero or negative) inflation may reduce the severity of economic recessions by enabling the labor market to adjust more quickly in a downturn, and reduce the risk that a liquidity trap prevents monetary policy from stabilizing the economy. The task of keeping the rate of inflation low and stable is usually given to monetary authorities.
One of the stated goals of monetary policy is maximum employment. The unemployment rate statistics are collected by the Bureau of Labor Statistics, and like the PCE price index are used as a barometer of the nation's economic health.
The Federal Reserve is self-funded. Over 90 percent of Fed revenues come from open market operations, specifically the interest on the portfolio of Treasury securities as well as "capital gains/losses" that may arise from the buying/selling of the securities and their derivatives as part of Open Market Operations. The balance of revenues comes from sales of financial services (check and electronic payment processing) and discount window loans. The board of governors (Federal Reserve Board) creates a budget report once per year for Congress. There are two reports with budget information. The one that lists the complete balance statements with income and expenses, as well as the net profit or loss, is the large report simply titled "Annual Report". It also includes data about employment throughout the system. The other report, which explains in more detail the expenses of the different aspects of the whole system, is called "Annual Report: Budget Review". These detailed comprehensive reports can be found at the board of governors' website under the section "Reports to Congress".
The Federal Reserve has been remitting interest that it has been receiving back to the United States Treasury. Most of the assets the Fed holds are U.S. Treasury bonds and mortgage-backed securities that it has been purchasing as part of quantitative easing since the 2007–2008 financial crisis. In 2022 the Fed began quantitative tightening (QT), selling these assets and taking losses on them in the secondary bond market. As a result, the nearly $100 billion that it had been remitting annually to the Treasury is expected to be discontinued during QT.
One of the keys to understanding the Federal Reserve is the Federal Reserve balance sheet (or balance statement). In accordance with Section 11 of the Federal Reserve Act, the board of governors of the Federal Reserve System publishes once each week the "Consolidated Statement of Condition of All Federal Reserve Banks" showing the condition of each Federal Reserve bank and a consolidated statement for all Federal Reserve banks. The board of governors requires that excess earnings of the Reserve Banks be transferred to the Treasury as interest on Federal Reserve notes.
The Federal Reserve releases its balance sheet every Thursday. Below is the balance sheet as of 8 April 2021 (in billions of dollars):
In addition, the balance sheet also indicates which assets are held as collateral against Federal Reserve Notes.
The Federal Reserve System has faced various criticisms since its inception in 1913. Criticisms include lack of transparency and claims that it is ineffective.
{
"paragraph_id": 0,
"text": "The Federal Reserve System (often shortened to the Federal Reserve, or simply the Fed) is the central banking system of the United States. It was created on December 23, 1913, with the enactment of the Federal Reserve Act, after a series of financial panics (particularly the panic of 1907) led to the desire for central control of the monetary system in order to alleviate financial crises. Over the years, events such as the Great Depression in the 1930s and the Great Recession during the 2000s have led to the expansion of the roles and responsibilities of the Federal Reserve System.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Congress established three key objectives for monetary policy in the Federal Reserve Act: maximizing employment, stabilizing prices, and moderating long-term interest rates. The first two objectives are sometimes referred to as the Federal Reserve's dual mandate. Its duties have expanded over the years, and currently also include supervising and regulating banks, maintaining the stability of the financial system, and providing financial services to depository institutions, the U.S. government, and foreign official institutions. The Fed also conducts research into the economy and provides numerous publications, such as the Beige Book and the FRED database.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Federal Reserve System is composed of several layers. It is governed by the presidentially-appointed board of governors or Federal Reserve Board (FRB). Twelve regional Federal Reserve Banks, located in cities throughout the nation, regulate and oversee privately-owned commercial banks. Nationally chartered commercial banks are required to hold stock in, and can elect some board members of, the Federal Reserve Bank of their region.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Federal Open Market Committee (FOMC) sets monetary policy by adjusting the target for the federal funds rate, which influences market interest rates generally and via the monetary transmission mechanism in turn US economic activity. The FOMC consists of all seven members of the board of governors and the twelve regional Federal Reserve Bank presidents, though only five bank presidents vote at a time—the president of the New York Fed and four others who rotate through one-year voting terms. There are also various advisory councils. It has a structure unique among central banks, and is also unusual in that the United States Department of the Treasury, an entity outside of the central bank, prints the currency used.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The federal government sets the salaries of the board's seven governors, and it receives all the system's annual profits, after dividends on member banks' capital investments are paid, and an account surplus is maintained. In 2015, the Federal Reserve earned a net income of $100.2 billion and transferred $97.7 billion to the U.S. Treasury, and 2020 earnings were approximately $88.6 billion with remittances to the U.S. Treasury of $86.9 billion. Although an instrument of the U.S. government, the Federal Reserve System considers itself \"an independent central bank because its monetary policy decisions do not have to be approved by the president or by anyone else in the executive or legislative branches of government, it does not receive funding appropriated by Congress, and the terms of the members of the board of governors span multiple presidential and congressional terms.\"",
"title": ""
},
{
"paragraph_id": 5,
"text": "The primary declared motivation for creating the Federal Reserve System was to address banking panics. Other purposes are stated in the Federal Reserve Act, such as \"to furnish an elastic currency, to afford means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes\". Before the founding of the Federal Reserve System, the United States underwent several financial crises. A particularly severe crisis in 1907 led Congress to enact the Federal Reserve Act in 1913. Today the Federal Reserve System has responsibilities in addition to stabilizing the financial system.",
"title": "Purpose"
},
{
"paragraph_id": 6,
"text": "Current functions of the Federal Reserve System include:",
"title": "Purpose"
},
{
"paragraph_id": 7,
"text": "Banking institutions in the United States are required to hold reserves—amounts of currency and deposits in other banks—equal to only a fraction of the amount of the bank's deposit liabilities owed to customers. This practice is called fractional-reserve banking. As a result, banks usually invest the majority of the funds received from depositors. On rare occasions, too many of the bank's customers will withdraw their savings and the bank will need help from another institution to continue operating; this is called a bank run. Bank runs can lead to a multitude of social and economic problems. The Federal Reserve System was designed as an attempt to prevent or minimize the occurrence of bank runs, and possibly act as a lender of last resort when a bank run does occur. Many economists, following Nobel laureate Milton Friedman, believe that the Federal Reserve inappropriately refused to lend money to small banks during the bank runs of 1929; Friedman argued that this contributed to the Great Depression.",
"title": "Purpose"
},
{
"paragraph_id": 8,
"text": "Because some banks refused to clear checks from certain other banks during times of economic uncertainty, a check-clearing system was created in the Federal Reserve System. It is briefly described in The Federal Reserve System—Purposes and Functions as follows:",
"title": "Purpose"
},
{
"paragraph_id": 9,
"text": "By creating the Federal Reserve System, Congress intended to eliminate the severe financial crises that had periodically swept the nation, especially the sort of financial panic that occurred in 1907. During that episode, payments were disrupted throughout the country because many banks and clearinghouses refused to clear checks drawn on certain other banks, a practice that contributed to the failure of otherwise solvent banks. To address these problems, Congress gave the Federal Reserve System the authority to establish a nationwide check-clearing system. The System, then, was to provide not only an elastic currency—that is, a currency that would expand or shrink in amount as economic conditions warranted—but also an efficient and equitable check-collection system.",
"title": "Purpose"
},
{
"paragraph_id": 10,
"text": "In the United States, the Federal Reserve serves as the lender of last resort to those institutions that cannot obtain credit elsewhere and the collapse of which would have serious implications for the economy. It took over this role from the private sector \"clearing houses\" which operated during the Free Banking Era; whether public or private, the availability of liquidity was intended to prevent bank runs.",
"title": "Purpose"
},
{
"paragraph_id": 11,
"text": "Through its discount window and credit operations, Reserve Banks provide liquidity to banks to meet short-term needs stemming from seasonal fluctuations in deposits or unexpected withdrawals. Longer-term liquidity may also be provided in exceptional circumstances. The rate the Fed charges banks for these loans is called the discount rate (officially the primary credit rate).",
"title": "Purpose"
},
{
"paragraph_id": 12,
"text": "By making these loans, the Fed serves as a buffer against unexpected day-to-day fluctuations in reserve demand and supply. This contributes to the effective functioning of the banking system, alleviates pressure in the reserves market and reduces the extent of unexpected movements in the interest rates. For example, on September 16, 2008, the Federal Reserve Board authorized an $85 billion loan to stave off the bankruptcy of international insurance giant American International Group (AIG).",
"title": "Purpose"
},
{
"paragraph_id": 13,
"text": "In its role as the central bank of the United States, the Fed serves as a banker's bank and as the government's bank. As the banker's bank, it helps to assure the safety and efficiency of the payments system. As the government's bank or fiscal agent, the Fed processes a variety of financial transactions involving trillions of dollars. Just as an individual might keep an account at a bank, the U.S. Treasury keeps a checking account with the Federal Reserve, through which incoming federal tax deposits and outgoing government payments are handled. As part of this service relationship, the Fed sells and redeems U.S. government securities such as savings bonds and Treasury bills, notes and bonds. It also issues the nation's coin and paper currency. The U.S. Treasury, through its Bureau of the Mint and Bureau of Engraving and Printing, actually produces the nation's cash supply and, in effect, sells the paper currency to the Federal Reserve Banks at manufacturing cost, and the coins at face value. The Federal Reserve Banks then distribute it to other financial institutions in various ways. During the Fiscal Year 2020, the Bureau of Engraving and Printing delivered 57.95 billion notes at an average cost of 7.4 cents per note.",
"title": "Purpose"
},
{
"paragraph_id": 14,
"text": "Federal funds are the reserve balances (also called Federal Reserve Deposits) that private banks keep at their local Federal Reserve Bank. These balances are the namesake reserves of the Federal Reserve System. The purpose of keeping funds at a Federal Reserve Bank is to have a mechanism for private banks to lend funds to one another. This market for funds plays an important role in the Federal Reserve System as it is what inspired the name of the system and it is what is used as the basis for monetary policy. Monetary policy is put into effect partly by influencing how much interest the private banks charge each other for the lending of these funds.",
"title": "Purpose"
},
{
"paragraph_id": 15,
"text": "Federal reserve accounts contain federal reserve credit, which can be converted into federal reserve notes. Private banks maintain their bank reserves in federal reserve accounts.",
"title": "Purpose"
},
{
"paragraph_id": 16,
"text": "The Federal Reserve regulates private banks. The system was designed out of a compromise between the competing philosophies of privatization and government regulation. In 2006 Donald L. Kohn, vice chairman of the board of governors, summarized the history of this compromise:",
"title": "Purpose"
},
{
"paragraph_id": 17,
"text": "Agrarian and progressive interests, led by William Jennings Bryan, favored a central bank under public, rather than banker, control. But the vast majority of the nation's bankers, concerned about government intervention in the banking business, opposed a central bank structure directed by political appointees. The legislation that Congress ultimately adopted in 1913 reflected a hard-fought battle to balance these two competing views and created the hybrid public-private, centralized-decentralized structure that we have today.",
"title": "Purpose"
},
{
"paragraph_id": 18,
"text": "The balance between private interests and government can also be seen in the structure of the system. Private banks elect members of the board of directors at their regional Federal Reserve Bank while the members of the board of governors are selected by the president of the United States and confirmed by the Senate.",
"title": "Purpose"
},
{
"paragraph_id": 19,
"text": "The Federal Banking Agency Audit Act, enacted in 1978 as Public Law 95-320 and 31 U.S.C. section 714 establish that the board of governors of the Federal Reserve System and the Federal Reserve banks may be audited by the Government Accountability Office (GAO).",
"title": "Purpose"
},
{
"paragraph_id": 20,
"text": "The GAO has authority to audit check-processing, currency storage and shipments, and some regulatory and bank examination functions, however, there are restrictions to what the GAO may audit. Under the Federal Banking Agency Audit Act, 31 U.S.C. section 714(b), audits of the Federal Reserve Board and Federal Reserve banks do not include (1) transactions for or with a foreign central bank or government or non-private international financing organization; (2) deliberations, decisions, or actions on monetary policy matters; (3) transactions made under the direction of the Federal Open Market Committee; or (4) a part of a discussion or communication among or between members of the board of governors and officers and employees of the Federal Reserve System related to items (1), (2), or (3). See Federal Reserve System Audits: Restrictions on GAO's Access (GAO/T-GGD-94-44), statement of Charles A. Bowsher.",
"title": "Purpose"
},
{
"paragraph_id": 21,
"text": "The board of governors in the Federal Reserve System has a number of supervisory and regulatory responsibilities in the U.S. banking system, but not complete responsibility. A general description of the types of regulation and supervision involved in the U.S. banking system is given by the Federal Reserve:",
"title": "Purpose"
},
{
"paragraph_id": 22,
"text": "The Board also plays a major role in the supervision and regulation of the U.S. banking system. It has supervisory responsibilities for state-chartered banks that are members of the Federal Reserve System, bank holding companies (companies that control banks), the foreign activities of member banks, the U.S. activities of foreign banks, and Edge Act and \"agreement corporations\" (limited-purpose institutions that engage in a foreign banking business). The Board and, under delegated authority, the Federal Reserve Banks, supervise approximately 900 state member banks and 5,000 bank holding companies. Other federal agencies also serve as the primary federal supervisors of commercial banks; the Office of the Comptroller of the Currency supervises national banks, and the Federal Deposit Insurance Corporation supervises state banks that are not members of the Federal Reserve System.",
"title": "Purpose"
},
{
"paragraph_id": 23,
"text": "Some regulations issued by the Board apply to the entire banking industry, whereas others apply only to member banks, that is, state banks that have chosen to join the Federal Reserve System and national banks, which by law must be members of the System. The Board also issues regulations to carry out major federal laws governing consumer credit protection, such as the Truth in Lending, Equal Credit Opportunity, and Home Mortgage Disclosure Acts. Many of these consumer protection regulations apply to various lenders outside the banking industry as well as to banks.",
"title": "Purpose"
},
{
"paragraph_id": 24,
"text": "Members of the Board of Governors are in continual contact with other policy makers in government. They frequently testify before congressional committees on the economy, monetary policy, banking supervision and regulation, consumer credit protection, financial markets, and other matters.",
"title": "Purpose"
},
{
"paragraph_id": 25,
"text": "The Board has regular contact with members of the President's Council of Economic Advisers and other key economic officials. The Chair also meets from time to time with the President of the United States and has regular meetings with the Secretary of the Treasury. The Chair has formal responsibilities in the international arena as well.",
"title": "Purpose"
},
{
"paragraph_id": 26,
"text": "The board of directors of each Federal Reserve Bank District also has regulatory and supervisory responsibilities. If the board of directors of a district bank has judged that a member bank is performing or behaving poorly, it will report this to the board of governors. This policy is described in law:",
"title": "Purpose"
},
{
"paragraph_id": 27,
"text": "Each Federal reserve bank shall keep itself informed of the general character and amount of the loans and investments of its member banks with a view to ascertaining whether undue use is being made of bank credit for the speculative carrying of or trading in securities, real estate, or commodities, or for any other purpose inconsistent with the maintenance of sound credit conditions; and, in determining whether to grant or refuse advances, rediscounts, or other credit accommodations, the Federal reserve bank shall give consideration to such information. The chairman of the Federal reserve bank shall report to the Board of Governors of the Federal Reserve System any such undue use of bank credit by any member bank, together with his recommendation. Whenever, in the judgment of the Board of Governors of the Federal Reserve System, any member bank is making such undue use of bank credit, the Board may, in its discretion, after reasonable notice and an opportunity for a hearing, suspend such bank from the use of the credit facilities of the Federal Reserve System and may terminate such suspension or may renew it from time to time.",
"title": "Purpose"
},
{
"paragraph_id": 28,
"text": "The Federal Reserve plays a role in the U.S. payments system. The twelve Federal Reserve Banks provide banking services to depository institutions and to the federal government. For depository institutions, they maintain accounts and provide various payment services, including collecting checks, electronically transferring funds, and distributing and receiving currency and coin. For the federal government, the Reserve Banks act as fiscal agents, paying Treasury checks; processing electronic payments; and issuing, transferring, and redeeming U.S. government securities.",
"title": "Purpose"
},
{
"paragraph_id": 29,
"text": "In the Depository Institutions Deregulation and Monetary Control Act of 1980, Congress reaffirmed that the Federal Reserve should promote an efficient nationwide payments system. The act subjects all depository institutions, not just member commercial banks, to reserve requirements and grants them equal access to Reserve Bank payment services. The Federal Reserve plays a role in the nation's retail and wholesale payments systems by providing financial services to depository institutions. Retail payments are generally for relatively small-dollar amounts and often involve a depository institution's retail clients—individuals and smaller businesses. The Reserve Banks' retail services include distributing currency and coin, collecting checks, electronically transferring funds through FedACH (the Federal Reserve's automated clearing house system), and beginning in 2023, facilitating instant payments using the FedNow service. By contrast, wholesale payments are generally for large-dollar amounts and often involve a depository institution's large corporate customers or counterparties, including other financial institutions. The Reserve Banks' wholesale services include electronically transferring funds through the Fedwire Funds Service and transferring securities issued by the U.S. government, its agencies, and certain other entities through the Fedwire Securities Service.",
"title": "Purpose"
},
{
"paragraph_id": 30,
"text": "The Federal Reserve System has a \"unique structure that is both public and private\" and is described as \"independent within the government\" rather than \"independent of government\". The System does not require public funding, and derives its authority and purpose from the Federal Reserve Act, which was passed by Congress in 1913 and is subject to Congressional modification or repeal. The four main components of the Federal Reserve System are (1) the board of governors, (2) the Federal Open Market Committee, (3) the twelve regional Federal Reserve Banks, and (4) the member banks throughout the country.",
"title": "Structure"
},
{
"paragraph_id": 31,
"text": "The seven-member board of governors is a large federal agency that functions in business oversight by examining national banks. It is charged with the overseeing of the 12 District Reserve Banks and setting national monetary policy. It also supervises and regulates the U.S. banking system in general. Governors are appointed by the president of the United States and confirmed by the Senate for staggered 14-year terms. One term begins every two years, on February 1 of even-numbered years, and members serving a full term cannot be renominated for a second term. \"[U]pon the expiration of their terms of office, members of the Board shall continue to serve until their successors are appointed and have qualified.\" The law provides for the removal of a member of the board by the president \"for cause\". The board is required to make an annual report of operations to the Speaker of the U.S. House of Representatives.",
"title": "Structure"
},
{
"paragraph_id": 32,
"text": "The chair and vice chair of the board of governors are appointed by the president from among the sitting governors. They both serve a four-year term and they can be renominated as many times as the president chooses, until their terms on the board of governors expire.",
"title": "Structure"
},
{
"paragraph_id": 33,
"text": "The current members of the board of governors are:",
"title": "Structure"
},
{
"paragraph_id": 34,
"text": "In late December 2011, President Barack Obama nominated Jeremy C. Stein, a Harvard University finance professor and a Democrat, and Jerome Powell, formerly of Dillon Read, Bankers Trust and The Carlyle Group and a Republican. Both candidates also have Treasury Department experience in the Obama and George H. W. Bush administrations respectively.",
"title": "Structure"
},
{
"paragraph_id": 35,
"text": "\"Obama administration officials [had] regrouped to identify Fed candidates after Peter Diamond, a Nobel Prize-winning economist, withdrew his nomination to the board in June [2011] in the face of Republican opposition. Richard Clarida, a potential nominee who was a Treasury official under George W. Bush, pulled out of consideration in August [2011]\", one account of the December nominations noted. The two other Obama nominees in 2011, Janet Yellen and Sarah Bloom Raskin, were confirmed in September. One of the vacancies was created in 2011 with the resignation of Kevin Warsh, who took office in 2006 to fill the unexpired term ending January 31, 2018, and resigned his position effective March 31, 2011. In March 2012, U.S. Senator David Vitter (R, LA) said he would oppose Obama's Stein and Powell nominations, dampening near-term hopes for approval. However, Senate leaders reached a deal, paving the way for affirmative votes on the two nominees in May 2012 and bringing the board to full strength for the first time since 2006 with Duke's service after term end. Later, on January 6, 2014, the United States Senate confirmed Yellen's nomination to be chair of the Federal Reserve Board of Governors; she was the first woman to hold the position. Subsequently, President Obama nominated Stanley Fischer to replace Yellen as the vice-chair.",
"title": "Structure"
},
{
"paragraph_id": 36,
"text": "In April 2014, Stein announced he was leaving to return to Harvard May 28 with four years remaining on his term. At the time of the announcement, the FOMC \"already is down three members as it awaits the Senate confirmation of ... Fischer and Lael Brainard, and as [President] Obama has yet to name a replacement for ... Duke. ... Powell is still serving as he awaits his confirmation for a second term.\"",
"title": "Structure"
},
{
"paragraph_id": 37,
"text": "Allan R. Landon, former president and CEO of the Bank of Hawaii, was nominated in early 2015 by President Obama to the board.",
"title": "Structure"
},
{
"paragraph_id": 38,
"text": "In July 2015, President Obama nominated University of Michigan economist Kathryn M. Dominguez to fill the second vacancy on the board. The Senate had not yet acted on Landon's confirmation by the time of the second nomination.",
"title": "Structure"
},
{
"paragraph_id": 39,
"text": "Daniel Tarullo submitted his resignation from the board on February 10, 2017, effective on or around April 5, 2017.",
"title": "Structure"
},
{
"paragraph_id": 40,
"text": "The Federal Open Market Committee (FOMC) consists of 12 members, seven from the board of governors and 5 of the regional Federal Reserve Bank presidents. The FOMC oversees and sets policy on open market operations, the principal tool of national monetary policy. These operations affect the amount of Federal Reserve balances available to depository institutions, thereby influencing overall monetary and credit conditions. The FOMC also directs operations undertaken by the Federal Reserve in foreign exchange markets. The FOMC must reach consensus on all decisions. The president of the Federal Reserve Bank of New York is a permanent member of the FOMC; the presidents of the other banks rotate membership at two- and three-year intervals. All Regional Reserve Bank presidents contribute to the committee's assessment of the economy and of policy options, but only the five presidents who are then members of the FOMC vote on policy decisions. The FOMC determines its own internal organization and, by tradition, elects the chair of the board of governors as its chair and the president of the Federal Reserve Bank of New York as its vice chair. Formal meetings typically are held eight times each year in Washington, D.C. Nonvoting Reserve Bank presidents also participate in Committee deliberations and discussion. The FOMC generally meets eight times a year in telephone consultations and other meetings are held when needed.",
"title": "Structure"
},
{
"paragraph_id": 41,
"text": "There is very strong consensus among economists against politicising the FOMC.",
"title": "Structure"
},
{
"paragraph_id": 42,
"text": "The Federal Advisory Council, composed of twelve representatives of the banking industry, advises the board on all matters within its jurisdiction.",
"title": "Structure"
},
{
"paragraph_id": 43,
"text": "There are 12 Federal Reserve Banks, each of which is responsible for member banks located in its district. They are located in Boston, New York, Philadelphia, Cleveland, Richmond, Atlanta, Chicago, St. Louis, Minneapolis, Kansas City, Dallas, and San Francisco. The size of each district was set based upon the population distribution of the United States when the Federal Reserve Act was passed.",
"title": "Structure"
},
{
"paragraph_id": 44,
"text": "The charter and organization of each Federal Reserve Bank is established by law and cannot be altered by the member banks. Member banks do, however, elect six of the nine members of the Federal Reserve Banks' boards of directors.",
"title": "Structure"
},
{
"paragraph_id": 45,
"text": "Each regional Bank has a president, who is the chief executive officer of their Bank. Each regional Reserve Bank's president is nominated by their Bank's board of directors, but the nomination is contingent upon approval by the board of governors. Presidents serve five-year terms and may be reappointed.",
"title": "Structure"
},
{
"paragraph_id": 46,
"text": "Each regional Bank's board consists of nine members. Members are broken down into three classes: A, B, and C. There are three board members in each class. Class A members are chosen by the regional Bank's shareholders, and are intended to represent member banks' interests. Member banks are divided into three categories: large, medium, and small. Each category elects one of the three class A board members. Class B board members are also nominated by the region's member banks, but class B board members are supposed to represent the interests of the public. Lastly, class C board members are appointed by the board of governors, and are also intended to represent the interests of the public.",
"title": "Structure"
},
{
"paragraph_id": 47,
"text": "The Federal Reserve Banks have an intermediate legal status, with some features of private corporations and some features of public federal agencies. The United States has an interest in the Federal Reserve Banks as tax-exempt federally created instrumentalities whose profits belong to the federal government, but this interest is not proprietary. In Lewis v. United States, the United States Court of Appeals for the Ninth Circuit stated that: \"The Reserve Banks are not federal instrumentalities for purposes of the FTCA [the Federal Tort Claims Act], but are independent, privately owned and locally controlled corporations.\" The opinion went on to say, however, that: \"The Reserve Banks have properly been held to be federal instrumentalities for some purposes.\" Another relevant decision is Scott v. Federal Reserve Bank of Kansas City, in which the distinction is made between Federal Reserve Banks, which are federally created instrumentalities, and the board of governors, which is a federal agency.",
"title": "Structure"
},
{
"paragraph_id": 48,
"text": "Regarding the structural relationship between the twelve Federal Reserve banks and the various commercial (member) banks, political science professor Michael D. Reagan has written:",
"title": "Structure"
},
{
"paragraph_id": 49,
"text": "... the \"ownership\" of the Reserve Banks by the commercial banks is symbolic; they do not exercise the proprietary control associated with the concept of ownership nor share, beyond the statutory dividend, in Reserve Bank \"profits.\" ... Bank ownership and election at the base are therefore devoid of substantive significance, despite the superficial appearance of private bank control that the formal arrangement creates.",
"title": "Structure"
},
{
"paragraph_id": 50,
"text": "A member bank is a private institution and owns stock in its regional Federal Reserve Bank. All nationally chartered banks hold stock in one of the Federal Reserve Banks. State chartered banks may choose to be members (and hold stock in their regional Federal Reserve bank) upon meeting certain standards.",
"title": "Structure"
},
{
"paragraph_id": 51,
"text": "The amount of stock a member bank must own is equal to 3% of its combined capital and surplus. However, holding stock in a Federal Reserve bank is not like owning stock in a publicly traded company. These stocks cannot be sold or traded, and member banks do not control the Federal Reserve Bank as a result of owning this stock. From their Regional Bank, member banks with $10 billion or less in assets receive a dividend of 6%, while member banks with more than $10 billion in assets receive the lesser of 6% or the current 10-year Treasury auction rate. The remainder of the regional Federal Reserve Banks' profits is given over to the United States Treasury Department. In 2015, the Federal Reserve Banks made a profit of $100.2 billion and distributed $2.5 billion in dividends to member banks as well as returning $97.7 billion to the U.S. Treasury.",
"title": "Structure"
},
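The stock-subscription and dividend rules in the entry above reduce to simple arithmetic. The sketch below is illustrative only: the function names and the sample figures are hypothetical, and it encodes just the rule stated above (stock equal to 3% of combined capital and surplus; a 6% dividend for banks at or below $10 billion in assets, otherwise the lesser of 6% and the current 10-year Treasury auction rate).

```python
def required_stock(capital: float, surplus: float) -> float:
    """Stock a member bank must subscribe: 3% of its combined capital and surplus."""
    return 0.03 * (capital + surplus)


def dividend_rate(total_assets: float, ten_year_treasury_rate: float) -> float:
    """Annual dividend rate on Reserve Bank stock, per the rule described above."""
    if total_assets <= 10e9:  # $10 billion threshold
        return 0.06
    return min(0.06, ten_year_treasury_rate)


# Hypothetical bank: $2 billion capital, $1 billion surplus, $50 billion in assets,
# with the 10-year Treasury most recently auctioned at 4.0%.
stock = required_stock(2e9, 1e9)     # $90 million of Reserve Bank stock
rate = dividend_rate(50e9, 0.04)     # 4.0%, the lesser of 6% and the Treasury rate
print(f"stock: ${stock:,.0f}; annual dividend: ${stock * rate:,.0f}")
```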
{
"paragraph_id": 52,
"text": "About 38% of U.S. banks are members of their regional Federal Reserve Bank.",
"title": "Structure"
},
{
"paragraph_id": 53,
"text": "An external auditor selected by the audit committee of the Federal Reserve System regularly audits the Board of Governors and the Federal Reserve Banks. The GAO will audit some activities of the Board of Governors. These audits do not cover \"most of the Fed's monetary policy actions or decisions, including discount window lending (direct loans to financial institutions), open-market operations and any other transactions made under the direction of the Federal Open Market Committee\" ...[nor may the GAO audit] \"dealings with foreign governments and other central banks.\"",
"title": "Structure"
},
{
"paragraph_id": 54,
"text": "The annual and quarterly financial statements prepared by the Federal Reserve System conform to a basis of accounting that is set by the Federal Reserve Board and does not conform to Generally Accepted Accounting Principles (GAAP) or government Cost Accounting Standards (CAS). The financial reporting standards are defined in the Financial Accounting Manual for the Federal Reserve Banks. The cost accounting standards are defined in the Planning and Control System Manual. As of 27 August 2012, the Federal Reserve Board has been publishing unaudited financial reports for the Federal Reserve banks every quarter.",
"title": "Structure"
},
{
"paragraph_id": 55,
"text": "On November 7, 2008, Bloomberg L.P. brought a lawsuit against the board of governors of the Federal Reserve System to force the board to reveal the identities of firms for which it provided guarantees during the financial crisis of 2007–2008. Bloomberg, L.P. won at the trial court and the Fed's appeals were rejected at both the United States Court of Appeals for the Second Circuit and the U.S. Supreme Court. The data was released on March 31, 2011.",
"title": "Structure"
},
{
"paragraph_id": 56,
"text": "The term \"monetary policy\" refers to the actions undertaken by a central bank, such as the Federal Reserve, to influence economic activity (the overall demand for goods and services) to help promote national economic goals. The Federal Reserve Act of 1913 gave the Federal Reserve authority to set monetary policy in the United States. The Fed's mandate for monetary policy is commonly known as the dual mandate of promoting maximum employment and stable prices, the latter being interpreted as a stable inflation rate of 2 percent per year on average. The Fed's monetary policy influences economic activity by influencing the general level of interest rates in the economy, which again via the monetary transmission mechanism affects households' and firms' demand for goods and services and in turn employment and inflation.",
"title": "Monetary policy"
},
{
"paragraph_id": 57,
"text": "The Federal Reserve sets monetary policy by influencing the federal funds rate, which is the rate of interbank lending of reserve balances. The rate that banks charge each other for these loans is determined in the interbank market, and the Federal Reserve influences this rate through the \"tools\" of monetary policy described in the Tools section below. The federal funds rate is a short-term interest rate that the FOMC focuses on, which affects the longer-term interest rates throughout the economy. The Federal Reserve expalined the implementation of its monetary policy in 2021:",
"title": "Monetary policy"
},
{
"paragraph_id": 58,
"text": "The FOMC has the ability to influence the federal funds rate--and thus the cost of short-term interbank credit--by changing the rate of interest the Fed pays on reserve balances that banks hold at the Fed. A bank is unlikely to lend to another bank (or to any of its customers) at an interest rate lower than the rate that the bank can earn on reserve balances held at the Fed. And because overall reserve balances are currently abundant, if a bank wants to borrow reserve balances, it likely will be able to do so without having to pay a rate much above the rate of interest paid by the Fed.",
"title": "Monetary policy"
},
{
"paragraph_id": 59,
"text": "Changes in the target for the federal funds rate affect overall financial conditions through various channels, including subsequent changes in the market interest rates that commercial banks and other lenders charge on short-term and longer-term loans, and changes in asset prices and in currency exchange rates, which again affects private consumption, investment and net export. By easening or tightening the stance of monetary policy, i.e. lowering or raising its target for the federal funds rate, the Fed can either spur or restrain growth in the overall US demand for goods and services.",
"title": "Monetary policy"
},
{
"paragraph_id": 60,
"text": "There are four main tools of monetary policy that the Federal Reserve uses to implement its monetary policy:",
"title": "Monetary policy"
},
{
"paragraph_id": 61,
"text": "",
"title": "Monetary policy"
},
{
"paragraph_id": 62,
"text": "The Federal Reserve System implements monetary policy largely by targeting the federal funds rate. This is the interest rate that banks charge each other for overnight loans of federal funds, which are the reserves held by banks at the Fed. This rate is actually determined by the market and is not explicitly mandated by the Fed. The Fed therefore tries to align the effective federal funds rate with the targeted rate, mainly by adjusting its IORB rate. The Federal Reserve System usually adjusts the federal funds rate target by 0.25% or 0.50% at a time.",
"title": "Monetary policy"
},
{
"paragraph_id": 63,
"text": "The interest on reserve balances (IORB) is the interest that the Fed pays on funds held by commercial banks in their reserve balance accounts at the individual Federal Reserve System banks. It is an administrated interest rate (i.e. set directly by the Fed as opposed to a market interest rate which is determined by the forces of supply and demand). As banks are unlikely to lend their reserves in the FFR market for less than they get paid by the Fed, the IORB guides the effective FFR and is used as the primary tool of the Fed's monetary policy.",
"title": "Monetary policy"
},
{
"paragraph_id": 64,
"text": "Open market operations are done through the sale and purchase of United States Treasury security, sometimes called \"Treasury bills\" or more informally \"T-bills\" or \"Treasuries\". The Federal Reserve buys Treasury bills from its primary dealers, which have accounts at depository institutions.",
"title": "Monetary policy"
},
{
"paragraph_id": 65,
"text": "The Federal Reserve's objective for open market operations has varied over the years. During the 1980s, the focus gradually shifted toward attaining a specified level of the federal funds rate (the rate that banks charge each other for overnight loans of federal funds, which are the reserves held by banks at the Fed), a process that was largely complete by the end of the decade.",
"title": "Monetary policy"
},
{
"paragraph_id": 66,
"text": "Until the 2007–2008 financial crisis, the Fed used open market operations as its primary tool to adjust the supply of reserve balances in order to keep the federal funds rate around the Fed's target. This regime is also known as a limited reserves regime. After the financial crisis, the Federal Reserve has adopted a so-called ample reserves regime where open market operations leading to modest changes in the supply of reserves are no longer effective in influencing the FFR. Instead the Fed uses its administered rates, in particular the IORB rate, to influence the FFR. However, open market operations are still an important maintenance tool in the overall framework of the conduct of monetary policy as they are used for ensuring that reserves remain ample.",
"title": "Monetary policy"
},
{
"paragraph_id": 67,
"text": "To smooth temporary or cyclical changes in the money supply, the desk engages in repurchase agreements (repos) with its primary dealers. Repos are essentially secured, short-term lending by the Fed. On the day of the transaction, the Fed deposits money in a primary dealer's reserve account, and receives the promised securities as collateral. When the transaction matures, the process unwinds: the Fed returns the collateral and charges the primary dealer's reserve account for the principal and accrued interest. The term of the repo (the time between settlement and maturity) can vary from 1 day (called an overnight repo) to 65 days.",
"title": "Monetary policy"
},
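Because the entry above describes a repo's cash flows only in words, a small numeric sketch may help. It assumes simple interest on an actual/360 day count, a common money-market convention that is an assumption here rather than something stated above; the figures are hypothetical.

```python
def repo_repayment(principal: float, repo_rate: float, term_days: int) -> float:
    """Amount charged to the dealer's reserve account at maturity: principal plus
    simple interest, assuming an actual/360 day count (an assumption, see above)."""
    interest = principal * repo_rate * term_days / 360
    return principal + interest


# Hypothetical overnight repo: the Fed lends $1 billion for 1 day at a 5.00% rate.
print(f"${repo_repayment(1_000_000_000, 0.05, 1):,.2f}")  # 1,000,138,888.89
```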
{
"paragraph_id": 68,
"text": "The Federal Reserve System also directly sets the discount rate, which is the interest rate for \"discount window lending\", overnight loans that member banks borrow directly from the Fed. This rate is generally set at a rate close to 100 basis points above the target federal funds rate. The idea is to encourage banks to seek alternative funding before using the \"discount rate\" option. The equivalent operation by the European Central Bank is referred to as the \"marginal lending facility\".",
"title": "Monetary policy"
},
{
"paragraph_id": 69,
"text": "Both the discount rate and the federal funds rate influence the prime rate, which is usually about 3 percentage points higher than the federal funds rate.",
"title": "Monetary policy"
},
{
"paragraph_id": 70,
"text": "The Term Deposit facility is a program through which the Federal Reserve Banks offer interest-bearing term deposits to eligible institutions. It is intended to facilitate the implementation of monetary policy by providing a tool by which the Federal Reserve can manage the aggregate quantity of reserve balances held by depository institutions. Funds placed in term deposits are removed from the accounts of participating institutions for the life of the term deposit and thus drain reserve balances from the banking system. The program was announced December 9, 2009, and approved April 30, 2010, with an effective date of June 4, 2010. Fed Chair Ben S. Bernanke, testifying before the House Committee on Financial Services, stated that the Term Deposit Facility would be used to reverse the expansion of credit during the Great Recession, by drawing funds out of the money markets into the Federal Reserve Banks. It would therefore result in increased market interest rates, acting as a brake on economic activity and inflation. The Federal Reserve authorized up to five \"small-value offerings\" in 2010 as a pilot program. After three of the offering auctions were successfully completed, it was announced that small-value auctions would continue on an ongoing basis.",
"title": "Monetary policy"
},
{
"paragraph_id": 71,
"text": "A little-used tool of the Federal Reserve is the quantitative easing policy. Under that policy, the Federal Reserve buys back corporate bonds and mortgage backed securities held by banks or other financial institutions. This in effect puts money back into the financial institutions and allows them to make loans and conduct normal business. The bursting of the United States housing bubble prompted the Fed to buy mortgage-backed securities for the first time in November 2008. Over six weeks, a total of $1.25 trillion were purchased in order to stabilize the housing market, about one-fifth of all U.S. government-backed mortgages.",
"title": "Monetary policy"
},
{
"paragraph_id": 72,
"text": "An instrument of monetary policy adjustment historically employed by the Federal Reserve System was the fractional reserve requirement, also known as the required reserve ratio. The required reserve ratio set the balance that the Federal Reserve System required a depository institution to hold in the Federal Reserve Banks. The required reserve ratio was set by the board of governors of the Federal Reserve System. The reserve requirements have changed over time and some history of these changes is published by the Federal Reserve.",
"title": "Monetary policy"
},
{
"paragraph_id": 73,
"text": "As a response to the financial crisis of 2008, the Federal Reserve started making interest payments on depository institutions' required and excess reserve balances. The payment of interest on excess reserves gave the central bank greater opportunity to address credit market conditions while maintaining the federal funds rate close to the target rate set by the FOMC. The reserve requirement did not play a significant role in the post-2008 interest-on-excess-reserves regime, and in March 2020, the reserve ratio was set to zero for all banks, which meant that no bank was required to hold any reserves, and hence the reserve requirement effectively ceased to exist.",
"title": "Monetary policy"
},
{
"paragraph_id": 74,
"text": "In order to address problems related to the subprime mortgage crisis and United States housing bubble, several new tools were created. The first new tool, called the Term auction facility, was added on December 12, 2007. It was announced as a temporary tool, but remained in place for a prolonged period of time. Creation of the second new tool, called the Term Securities Lending Facility, was announced on March 11, 2008. The main difference between these two facilities was that the Term auction Facility was used to inject cash into the banking system whereas the Term securities Lending Facility was used to inject treasury securities into the banking system. Creation of the third tool, called the Primary Dealer Credit Facility (PDCF), was announced on March 16, 2008. The PDCF was a fundamental change in Federal Reserve policy because it enabled the Fed to lend directly to primary dealers, which was previously against Fed policy. The differences between these three facilities was described by the Federal Reserve:",
"title": "Monetary policy"
},
{
"paragraph_id": 75,
"text": "The Term auction Facility program offers term funding to depository institutions via a bi-weekly auction, for fixed amounts of credit. The Term securities Lending Facility will be an auction for a fixed amount of lending of Treasury general collateral in exchange for OMO-eligible and AAA/Aaa rated private-label residential mortgage-backed securities. The Primary Dealer Credit Facility now allows eligible primary dealers to borrow at the existing Discount Rate for up to 120 days.",
"title": "Monetary policy"
},
{
"paragraph_id": 76,
"text": "Some measures taken by the Federal Reserve to address the financial crisis had not been used since the Great Depression.",
"title": "Monetary policy"
},
{
"paragraph_id": 77,
"text": "The Term auction Facility was a program in which the Federal Reserve auctioned term funds to depository institutions. The creation of this facility was announced by the Federal Reserve on December 12, 2007, and was done in conjunction with the Bank of Canada, the Bank of England, the European Central Bank, and the Swiss National Bank to address elevated pressures in short-term funding markets. The reason it was created was that banks were not lending funds to one another and banks in need of funds were refusing to go to the discount window. Banks were not lending money to each other because there was a fear that the loans would not be paid back. Banks refused to go to the discount window because it was usually associated with the stigma of bank failure. Under the Term auction Facility, the identity of the banks in need of funds was protected in order to avoid the stigma of bank failure. Foreign exchange swap lines with the European Central Bank and Swiss National Bank were opened so the banks in Europe could have access to U.S. dollars. The final Term Auction Facility auction was carried out on March 8, 2010.",
"title": "Monetary policy"
},
{
"paragraph_id": 78,
"text": "The Term securities Lending Facility was a 28-day facility that offered Treasury general collateral to the Federal Reserve Bank of New York's primary dealers in exchange for other program-eligible collateral. It was intended to promote liquidity in the financing markets for Treasury and other collateral and thus to foster the functioning of financial markets more generally. Like the Term auction Facility, the TSLF was done in conjunction with the Bank of Canada, the Bank of England, the European Central Bank, and the Swiss National Bank. The resource allowed dealers to switch debt that was less liquid for U.S. government securities that were easily tradable. The currency swap lines with the European Central Bank and Swiss National Bank were increased. The TSLF was closed on February 1, 2010.",
"title": "Monetary policy"
},
{
"paragraph_id": 79,
"text": "The Primary Dealer Credit Facility (PDCF) was an overnight loan facility that provided funding to primary dealers in exchange for a specified range of eligible collateral and was intended to foster the functioning of financial markets more generally. It ceased extending credit on March 31, 2021.",
"title": "Monetary policy"
},
{
"paragraph_id": 80,
"text": "The Asset Backed Commercial Paper Money Market Mutual Fund Liquidity Facility (ABCPMMMFLF) was also called the AMLF. The Facility began operations on September 22, 2008, and was closed on February 1, 2010.",
"title": "Monetary policy"
},
{
"paragraph_id": 81,
"text": "All U.S. depository institutions, bank holding companies (parent companies or U.S. broker-dealer affiliates), or U.S. branches and agencies of foreign banks were eligible to borrow under this facility pursuant to the discretion of the FRBB.",
"title": "Monetary policy"
},
{
"paragraph_id": 82,
"text": "Collateral eligible for pledge under the Facility was required to meet the following criteria:",
"title": "Monetary policy"
},
{
"paragraph_id": 83,
"text": "On October 7, 2008, the Federal Reserve further expanded the collateral it would loan against to include commercial paper using the Commercial Paper Funding Facility (CPFF). The action made the Fed a crucial source of credit for non-financial businesses in addition to commercial banks and investment firms. Fed officials said they would buy as much of the debt as necessary to get the market functioning again. They refused to say how much that might be, but they noted that around $1.3 trillion worth of commercial paper would qualify. There was $1.61 trillion in outstanding commercial paper, seasonally adjusted, on the market as of 1 October 2008, according to the most recent data from the Fed. That was down from $1.70 trillion in the previous week. Since the summer of 2007, the market had shrunk from more than $2.2 trillion. This program lent out a total $738 billion before it was closed. Forty-five out of 81 of the companies participating in this program were foreign firms. Research shows that Troubled Asset Relief Program (TARP) recipients were twice as likely to participate in the program than other commercial paper issuers who did not take advantage of the TARP bailout. The Fed incurred no losses from the CPFF.",
"title": "Monetary policy"
},
{
"paragraph_id": 84,
"text": "The first attempt at a national currency was during the American Revolutionary War. In 1775, the Continental Congress, as well as the states, began issuing paper currency, calling the bills \"Continentals\". The Continentals were backed only by future tax revenue, and were used to help finance the Revolutionary War. Overprinting, as well as British counterfeiting, caused the value of the Continental to diminish quickly. This experience with paper money led the United States to strip the power to issue Bills of Credit (paper money) from a draft of the new Constitution on August 16, 1787, as well as banning such issuance by the various states, and limiting the states' ability to make anything but gold or silver coin legal tender on August 28.",
"title": "History"
},
{
"paragraph_id": 85,
"text": "In 1791, the government granted the First Bank of the United States a charter to operate as the U.S. central bank until 1811. The First Bank of the United States came to an end under President Madison when Congress refused to renew its charter. The Second Bank of the United States was established in 1816, and lost its authority to be the central bank of the U.S. twenty years later under President Jackson when its charter expired. Both banks were based upon the Bank of England. Ultimately, a third national bank, known as the Federal Reserve, was established in 1913 and still exists to this day.",
"title": "History"
},
{
"paragraph_id": 86,
"text": "The first U.S. institution with central banking responsibilities was the First Bank of the United States, chartered by Congress and signed into law by President George Washington on February 25, 1791, at the urging of Alexander Hamilton. This was done despite strong opposition from Thomas Jefferson and James Madison, among numerous others. The charter was for twenty years and expired in 1811 under President Madison, when Congress refused to renew it.",
"title": "History"
},
{
"paragraph_id": 87,
"text": "In 1816, however, Madison revived it in the form of the Second Bank of the United States. Years later, early renewal of the bank's charter became the primary issue in the reelection of President Andrew Jackson. After Jackson, who was opposed to the central bank, was reelected, he pulled the government's funds out of the bank. Jackson was the only President to completely pay off the national debt but his efforts to close the bank contributed to the Panic of 1837. The bank's charter was not renewed in 1836, and it would fully dissolve after several years as a private corporation. From 1837 to 1862, in the Free Banking Era there was no formal central bank. From 1846 to 1921, an Independent Treasury System ruled. From 1863 to 1913, a system of national banks was instituted by the 1863 National Banking Act during which series of bank panics, in 1873, 1893, and 1907 occurred.",
"title": "History"
},
{
"paragraph_id": 88,
"text": "The main motivation for the third central banking system came from the Panic of 1907, which caused a renewed desire among legislators, economists, and bankers for an overhaul of the monetary system. During the last quarter of the 19th century and the beginning of the 20th century, the United States economy went through a series of financial panics. According to many economists, the previous national banking system had two main weaknesses: an inelastic currency and a lack of liquidity. In 1908, Congress enacted the Aldrich–Vreeland Act, which provided for an emergency currency and established the National Monetary Commission to study banking and currency reform. The National Monetary Commission returned with recommendations which were repeatedly rejected by Congress. A revision crafted during a secret meeting on Jekyll Island by Senator Aldrich and representatives of the nation's top finance and industrial groups later became the basis of the Federal Reserve Act. The House voted on December 22, 1913, with 298 voting yes to 60 voting no. The Senate voted 43–25 on December 23, 1913. President Woodrow Wilson signed the bill later that day.",
"title": "History"
},
{
"paragraph_id": 89,
"text": "The head of the bipartisan National Monetary Commission was financial expert and Senate Republican leader Nelson Aldrich. Aldrich set up two commissions – one to study the American monetary system in depth and the other, headed by Aldrich himself, to study the European central banking systems and report on them.",
"title": "History"
},
{
"paragraph_id": 90,
"text": "In early November 1910, Aldrich met with five well known members of the New York banking community to devise a central banking bill. Paul Warburg, an attendee of the meeting and longtime advocate of central banking in the U.S., later wrote that Aldrich was \"bewildered at all that he had absorbed abroad and he was faced with the difficult task of writing a highly technical bill while being harassed by the daily grind of his parliamentary duties\". After ten days of deliberation, the bill, which would later be referred to as the \"Aldrich Plan\", was agreed upon. It had several key components, including a central bank with a Washington-based headquarters and fifteen branches located throughout the U.S. in geographically strategic locations, and a uniform elastic currency based on gold and commercial paper. Aldrich believed a central banking system with no political involvement was best, but was convinced by Warburg that a plan with no public control was not politically feasible. The compromise involved representation of the public sector on the board of directors.",
"title": "History"
},
{
"paragraph_id": 91,
"text": "Aldrich's bill met much opposition from politicians. Critics charged Aldrich of being biased due to his close ties to wealthy bankers such as J. P. Morgan and John D. Rockefeller Jr., Aldrich's son-in-law. Most Republicans favored the Aldrich Plan, but it lacked enough support in Congress to pass because rural and western states viewed it as favoring the \"eastern establishment\". In contrast, progressive Democrats favored a reserve system owned and operated by the government; they believed that public ownership of the central bank would end Wall Street's control of the American currency supply. Conservative Democrats fought for a privately owned, yet decentralized, reserve system, which would still be free of Wall Street's control.",
"title": "History"
},
{
"paragraph_id": 92,
"text": "The original Aldrich Plan was dealt a fatal blow in 1912, when Democrats won the White House and Congress. Nonetheless, President Woodrow Wilson believed that the Aldrich plan would suffice with a few modifications. The plan became the basis for the Federal Reserve Act, which was proposed by Senator Robert Owen in May 1913. The primary difference between the two bills was the transfer of control of the board of directors (called the Federal Open Market Committee in the Federal Reserve Act) to the government. The bill passed Congress on December 23, 1913, on a mostly partisan basis, with most Democrats voting \"yea\" and most Republicans voting \"nay\".",
"title": "History"
},
{
"paragraph_id": 93,
"text": "Key laws affecting the Federal Reserve have been:",
"title": "History"
},
{
"paragraph_id": 94,
"text": "The Federal Reserve records and publishes large amounts of data. A few websites where data is published are at the board of governors' Economic Data and Research page, the board of governors' statistical releases and historical data page, and at the St. Louis Fed's FRED (Federal Reserve Economic Data) page. The Federal Open Market Committee (FOMC) examines many economic indicators prior to determining monetary policy.",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 95,
"text": "Some criticism involves economic data compiled by the Fed. The Fed sponsors much of the monetary economics research in the U.S., and Lawrence H. White objects that this makes it less likely for researchers to publish findings challenging the status quo.",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 96,
"text": "The net worth of households and nonprofit organizations in the United States is published by the Federal Reserve in a report titled Flow of Funds. At the end of the third quarter of fiscal year 2012, this value was $64.8 trillion. At the end of the first quarter of fiscal year 2014, this value was $95.5 trillion.",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 97,
"text": "The most common measures are named M0 (narrowest), M1, M2, and M3. In the United States they are defined by the Federal Reserve as follows:",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 98,
"text": "The Federal Reserve stopped publishing M3 statistics in March 2006, saying that the data cost a lot to collect but did not provide significantly useful information. The other three money supply measures continue to be provided in detail.",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 99,
"text": "The Personal consumption expenditures price index, also referred to as simply the PCE price index, is used as one measure of the value of money. It is a United States-wide indicator of the average increase in prices for all domestic personal consumption. Using a variety of data including United States Consumer Price Index and U.S. Producer Price Index prices, it is derived from the largest component of the gross domestic product in the BEA's National Income and Product Accounts, personal consumption expenditures.",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 100,
"text": "One of the Fed's main roles is to maintain price stability, which means that the Fed's ability to keep a low inflation rate is a long-term measure of their success. Although the Fed is not required to maintain inflation within a specific range, their long run target for the growth of the PCE price index is between 1.5 and 2 percent. There has been debate among policy makers as to whether the Federal Reserve should have a specific inflation targeting policy.",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 101,
"text": "Most mainstream economists favor a low, steady rate of inflation. Chief economist, and advisor to the Federal Reserve, the Congressional Budget Office and the Council of Economic Advisers, Diane C. Swonk observed, in 2022, that \"From the Fed's perspective, you have to remember inflation is kind of like cancer. If you don't deal with it now with something that may be painful, you could have something that metastasized and becomes much more chronic later on.\"",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 102,
"text": "Low (as opposed to zero or negative) inflation may reduce the severity of economic recessions by enabling the labor market to adjust more quickly in a downturn, and reduce the risk that a liquidity trap prevents monetary policy from stabilizing the economy. The task of keeping the rate of inflation low and stable is usually given to monetary authorities.",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 103,
"text": "One of the stated goals of monetary policy is maximum employment. The unemployment rate statistics are collected by the Bureau of Labor Statistics, and like the PCE price index are used as a barometer of the nation's economic health.",
"title": "Measurement of economic variables"
},
{
"paragraph_id": 104,
"text": "The Federal Reserve is self-funded. Over 90 percent of Fed revenues come from open market operations, specifically the interest on the portfolio of Treasury securities as well as \"capital gains/losses\" that may arise from the buying/selling of the securities and their derivatives as part of Open Market Operations. The balance of revenues come from sales of financial services (check and electronic payment processing) and discount window loans. The board of governors (Federal Reserve Board) creates a budget report once per year for Congress. There are two reports with budget information. The one that lists the complete balance statements with income and expenses, as well as the net profit or loss, is the large report simply titled, \"Annual Report\". It also includes data about employment throughout the system. The other report, which explains in more detail the expenses of the different aspects of the whole system, is called \"Annual Report: Budget Review\". These detailed comprehensive reports can be found at the board of governors' website under the section \"Reports to Congress\"",
"title": "Budget"
},
{
"paragraph_id": 105,
"text": "The Federal Reserve has been remitting interest that it has been receiving back to the United States Treasury. Most of the assets the Fed holds are U.S. Treasury bonds and mortgage-backed securities that it has been purchasing as part of quantitative easing since the 2007–2008 financial crisis. In 2022 the Fed started quantitative tightening (QT) and selling these assets and taking a loss on them in the secondary bond market. As a result, the nearly $100 billion that it was remitting annually to the Treasury, is expected to be discontinued during QT.",
"title": "Budget"
},
{
"paragraph_id": 106,
"text": "One of the keys to understanding the Federal Reserve is the Federal Reserve balance sheet (or balance statement). In accordance with Section 11 of the Federal Reserve Act, the board of governors of the Federal Reserve System publishes once each week the \"Consolidated Statement of Condition of All Federal Reserve Banks\" showing the condition of each Federal Reserve bank and a consolidated statement for all Federal Reserve banks. The board of governors requires that excess earnings of the Reserve Banks be transferred to the Treasury as interest on Federal Reserve notes.",
"title": "Balance sheet"
},
{
"paragraph_id": 107,
"text": "The Federal Reserve releases its balance sheet every Thursday. Below is the balance sheet as of 8 April 2021 (in billions of dollars):",
"title": "Balance sheet"
},
{
"paragraph_id": 108,
"text": "In addition, the balance sheet also indicates which assets are held as collateral against Federal Reserve Notes.",
"title": "Balance sheet"
},
{
"paragraph_id": 109,
"text": "The Federal Reserve System has faced various criticisms since its inception in 1913. Criticisms include lack of transparency and claims that it is ineffective.",
"title": "Criticism"
}
]
| The Federal Reserve System is the central banking system of the United States. It was created on December 23, 1913, with the enactment of the Federal Reserve Act, after a series of financial panics led to the desire for central control of the monetary system in order to alleviate financial crises. Over the years, events such as the Great Depression in the 1930s and the Great Recession during the 2000s have led to the expansion of the roles and responsibilities of the Federal Reserve System. Congress established three key objectives for monetary policy in the Federal Reserve Act: maximizing employment, stabilizing prices, and moderating long-term interest rates. The first two objectives are sometimes referred to as the Federal Reserve's dual mandate. Its duties have expanded over the years, and currently also include supervising and regulating banks, maintaining the stability of the financial system, and providing financial services to depository institutions, the U.S. government, and foreign official institutions. The Fed also conducts research into the economy and provides numerous publications, such as the Beige Book and the FRED database. The Federal Reserve System is composed of several layers. It is governed by the presidentially-appointed board of governors or Federal Reserve Board (FRB). Twelve regional Federal Reserve Banks, located in cities throughout the nation, regulate and oversee privately-owned commercial banks. Nationally chartered commercial banks are required to hold stock in, and can elect some board members of, the Federal Reserve Bank of their region. The Federal Open Market Committee (FOMC) sets monetary policy by adjusting the target for the federal funds rate, which influences market interest rates generally and via the monetary transmission mechanism in turn US economic activity. The FOMC consists of all seven members of the board of governors and the twelve regional Federal Reserve Bank presidents, though only five bank presidents vote at a time—the president of the New York Fed and four others who rotate through one-year voting terms. There are also various advisory councils. It has a structure unique among central banks, and is also unusual in that the United States Department of the Treasury, an entity outside of the central bank, prints the currency used. The federal government sets the salaries of the board's seven governors, and it receives all the system's annual profits, after dividends on member banks' capital investments are paid, and an account surplus is maintained. In 2015, the Federal Reserve earned a net income of $100.2 billion and transferred $97.7 billion to the U.S. Treasury, and 2020 earnings were approximately $88.6 billion with remittances to the U.S. Treasury of $86.9 billion. Although an instrument of the U.S. government, the Federal Reserve System considers itself "an independent central bank because its monetary policy decisions do not have to be approved by the president or by anyone else in the executive or legislative branches of government, it does not receive funding appropriated by Congress, and the terms of the members of the board of governors span multiple presidential and congressional terms." | 2001-05-16T22:53:31Z | 2023-12-25T05:26:08Z | [
"Template:Space",
"Template:Cite book",
"Template:Cbignore",
"Template:Official website",
"Template:Div col",
"Template:Use mdy dates",
"Template:Div col end",
"Template:Cite web",
"Template:US currency and coinage",
"Template:United States topics",
"Template:Redirect",
"Template:Further",
"Template:Webarchive",
"Template:Cite magazine",
"Template:Cite AV media",
"Template:Authority control",
"Template:Refn",
"Template:Main",
"Template:Rp",
"Template:Cite SSRN",
"Template:ISBN",
"Template:Federal Reserve System",
"Template:Pp-pc1",
"Template:Blockquote",
"Template:Color box",
"Template:Reflist",
"Template:Usc",
"Template:Short description",
"Template:Legend-line",
"Template:Federal Reserve Governors",
"Template:Legend",
"Template:Nbsp",
"Template:Harvnb",
"Template:Cite report",
"Template:EB1922 Poster",
"Template:Banking in the United States",
"Template:Infobox central bank",
"Template:Nsmdns",
"Template:Clear",
"Template:-",
"Template:Cite news",
"Template:Cite journal",
"Template:Wikimedia",
"Template:Use American English",
"Template:Central banks",
"Template:Federal Reserve Banks",
"Template:Div end",
"Template:Bank regulation in the United States",
"Template:As of"
]
| https://en.wikipedia.org/wiki/Federal_Reserve |
10,821 | Francium | Francium is a chemical element; it has symbol Fr and atomic number 87. It is extremely radioactive; its most stable isotope, francium-223 (originally called actinium K after the natural decay chain in which it appears), has a half-life of only 22 minutes. It is the second-most electropositive element, behind only caesium, and is the second rarest naturally occurring element (after astatine). Francium's isotopes decay quickly into astatine, radium, and radon. The electronic structure of a francium atom is [Rn] 7s1; thus, the element is classed as an alkali metal.
Bulk francium has never been seen. Because of the general appearance of the other elements in its periodic table column, it is presumed that francium would appear as a highly reactive metal if enough could be collected together to be viewed as a bulk solid or liquid. Obtaining such a sample is highly improbable since the extreme heat of decay resulting from its short half-life would immediately vaporize any viewable quantity of the element.
Francium was discovered by Marguerite Perey in France (from which the element takes its name) in 1939. Before its discovery, francium was referred to as eka-caesium or ekacaesium because of its conjectured existence below caesium in the periodic table. It was the last element first discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium ores, where the isotope francium-223 (in the family of uranium-235) continually forms and decays. As little as 200–500 g exists at any given time throughout the Earth's crust; aside from francium-223 and francium-221, its other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms.
Francium is one of the most unstable of the naturally occurring elements: its longest-lived isotope, francium-223, has a half-life of only 22 minutes. The only comparable element is astatine, whose most stable natural isotope, astatine-219 (the alpha daughter of francium-223), has a half-life of 56 seconds, although synthetic astatine-210 is much longer-lived with a half-life of 8.1 hours. All isotopes of francium decay into astatine, radium, or radon. Francium-223 also has a shorter half-life than the longest-lived isotope of each synthetic element up to and including element 105, dubnium.
Francium is an alkali metal whose chemical properties mostly resemble those of caesium. A heavy element with a single valence electron, it has the highest equivalent weight of any element. Liquid francium—if created—should have a surface tension of 0.05092 N/m at its melting point. Francium's melting point was estimated to be around 8.0 °C (46.4 °F); a value of 27 °C (81 °F) is also often encountered. The melting point is uncertain because of the element's extreme rarity and radioactivity; a different extrapolation based on Dmitri Mendeleev's method gave 20 ± 1.5 °C (68.0 ± 2.7 °F). A calculation based on the melting temperatures of binary ionic crystals gives 24.861 ± 0.517 °C (76.750 ± 0.931 °F). The estimated boiling point of 620 °C (1,148 °F) is also uncertain; the estimates 598 °C (1,108 °F) and 677 °C (1,251 °F), as well as the extrapolation from Mendeleev's method of 640 °C (1,184 °F), have also been suggested. The density of francium is expected to be around 2.48 g/cm3 (Mendeleev's method extrapolates 2.4 g/cm3).
Linus Pauling estimated the electronegativity of francium at 0.7 on the Pauling scale, the same as caesium; the value for caesium has since been refined to 0.79, but there are no experimental data to allow a refinement of the value for francium. Francium has a slightly higher ionization energy than caesium, 392.811(4) kJ/mol as opposed to 375.7041(2) kJ/mol for caesium, as would be expected from relativistic effects, and this would imply that caesium is the less electronegative of the two. Francium should also have a higher electron affinity than caesium and the Fr+ ion should be more polarizable than the Cs+ ion.
As a result of francium being very unstable, its salts are only known to a small extent. Francium coprecipitates with several caesium salts, such as caesium perchlorate, which results in small amounts of francium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. It will additionally coprecipitate with many other caesium salts, including the iodate, the picrate, the tartrate (also rubidium tartrate), the chloroplatinate, and the silicotungstate. It also coprecipitates with silicotungstic acid, and with perchloric acid, without another alkali metal as a carrier, which leads to other methods of separation.
Francium perchlorate is produced by the reaction of francium chloride and sodium perchlorate. The francium perchlorate coprecipitates with caesium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. However, this method is unreliable in separating thallium, which also coprecipitates with caesium. Francium perchlorate's entropy is expected to be 42.7 e.u. (178.7 J/(mol·K)).
Francium halides are all soluble in water and are expected to be white solids. They are expected to be produced by the reaction of the corresponding halogens. For example, francium chloride would be produced by the reaction of francium and chlorine. Francium chloride has been studied as a pathway to separate francium from other elements, by using the high vapour pressure of the compound, although francium fluoride would have a higher vapour pressure.
Francium nitrate, sulfate, hydroxide, carbonate, acetate, and oxalate are all soluble in water, while the iodate, picrate, tartrate, chloroplatinate, and silicotungstate are insoluble. The insolubility of these compounds is used to extract francium from other radioactive products, such as zirconium, niobium, molybdenum, tin, and antimony, by the method mentioned in the section above. The CsFr molecule is predicted to have francium at the negative end of the dipole, unlike all known heterodiatomic alkali metal molecules. Francium superoxide (FrO2) is expected to have a more covalent character than its lighter congeners; this is attributed to the 6p electrons in francium being more involved in the francium–oxygen bonding. The relativistic destabilisation of the 6p3/2 spinor may make francium compounds in oxidation states higher than +1 possible, such as [FrF6]; but this has not been experimentally confirmed.
The only double salt known of francium has the formula Fr9Bi2I9.
There are 37 known isotopes of francium ranging in atomic mass from 197 to 233. Francium has seven metastable nuclear isomers. Francium-223 and francium-221 are the only isotopes that occur in nature, with the former being far more common.
Francium-223 is the most stable isotope, with a half-life of 21.8 minutes, and it is highly unlikely that an isotope of francium with a longer half-life will ever be discovered or synthesized. Francium-223 is the fifth product of the uranium-235 decay series as a daughter isotope of actinium-227; thorium-227 is the more common daughter. Francium-223 then decays into radium-223 by beta decay (1.149 MeV decay energy), with a minor (0.006%) alpha decay path to astatine-219 (5.4 MeV decay energy).
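To make the 21.8-minute half-life concrete, the standard exponential decay law N(t)/N0 = (1/2)^(t/t_half) can be applied; the sketch below is purely illustrative and assumes nothing beyond that textbook relation.

```python
def remaining_fraction(elapsed_minutes: float, half_life_minutes: float = 21.8) -> float:
    """Fraction of a francium-223 sample remaining after the given time,
    from the decay law N(t)/N0 = (1/2) ** (t / t_half)."""
    return 0.5 ** (elapsed_minutes / half_life_minutes)


# After one hour only about 15% of the original atoms remain; after a day, essentially none.
print(f"{remaining_fraction(60):.3f}")        # ~0.148
print(f"{remaining_fraction(24 * 60):.2e}")   # ~1.3e-20
```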
Francium-221 has a half-life of 4.8 minutes. It is the ninth product of the neptunium decay series as a daughter isotope of actinium-225. Francium-221 then decays into astatine-217 by alpha decay (6.457 MeV decay energy). Although all primordial neptunium-237 is extinct, the neptunium decay series continues to exist naturally in tiny traces due to (n,2n) knockout reactions in natural uranium-238.
The least stable ground state isotope is francium-215, with a half-life of 90 ns: it undergoes a 9.54 MeV alpha decay to astatine-211.
Due to its instability and rarity, there are no commercial applications for francium. It has been used for research purposes in the fields of chemistry and of atomic structure. Its use as a potential diagnostic aid for various cancers has also been explored, but this application has been deemed impractical.
Francium's ability to be synthesized, trapped, and cooled, along with its relatively simple atomic structure, has made it the subject of specialized spectroscopy experiments. These experiments have led to more specific information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels which are fairly similar to those predicted by quantum theory.
As early as 1870, chemists thought that there should be an alkali metal beyond caesium, with an atomic number of 87. It was then referred to by the provisional name eka-caesium. Research teams attempted to locate and isolate this missing element, and at least four false claims were made that the element had been found before an authentic discovery was made.
In 1914, Stefan Meyer, Viktor F. Hess, and Friedrich Paneth (working in Vienna) made measurements of alpha radiation from various substances, including actinium-227. They observed the possibility of a minor alpha branch of this nuclide, though follow-up work could not be done due to the outbreak of World War I. Their observations were not precise or certain enough for them to announce the discovery of element 87, though it is likely that they did indeed observe the decay of actinium-227 to francium-223.
Soviet chemist Dmitry Dobroserdov was the first scientist to claim to have found eka-caesium, or francium. In 1925, he observed weak radioactivity in a sample of potassium, another alkali metal, and incorrectly concluded that eka-caesium was contaminating the sample (the radioactivity from the sample was from the naturally occurring potassium radioisotope, potassium-40). He then published a thesis on his predictions of the properties of eka-caesium, in which he named the element russium after his home country. Shortly thereafter, Dobroserdov began to focus on his teaching career at the Polytechnic Institute of Odesa, and he did not pursue the element further.
The following year, English chemists Gerald J. F. Druce and Frederick H. Loring analyzed X-ray photographs of manganese(II) sulfate. They observed spectral lines which they presumed to be of eka-caesium. They announced their discovery of element 87 and proposed the name alkalinium, as it would be the heaviest alkali metal.
In 1930, Fred Allison of the Alabama Polytechnic Institute claimed to have discovered element 87 (in addition to 85) when analyzing pollucite and lepidolite using his magneto-optical machine. Allison requested that it be named virginium after his home state of Virginia, along with the symbols Vi and Vm. In 1934, H.G. MacPherson of UC Berkeley disproved the effectiveness of Allison's device and the validity of his discovery.
In 1936, Romanian physicist Horia Hulubei and his French colleague Yvette Cauchois also analyzed pollucite, this time using their high-resolution X-ray apparatus. They observed several weak emission lines, which they presumed to be those of element 87. Hulubei and Cauchois reported their discovery and proposed the name moldavium, along with the symbol Ml, after Moldavia, the Romanian province where Hulubei was born. In 1937, Hulubei's work was criticized by American physicist F. H. Hirsh Jr., who rejected Hulubei's research methods. Hirsh was certain that eka-caesium would not be found in nature, and that Hulubei had instead observed mercury or bismuth X-ray lines. Hulubei insisted that his X-ray apparatus and methods were too accurate to make such a mistake. Because of this, Jean Baptiste Perrin, Nobel Prize winner and Hulubei's mentor, endorsed moldavium as the true eka-caesium over Marguerite Perey's recently discovered francium. Perey took pains to be accurate and detailed in her criticism of Hulubei's work, and finally she was credited as the sole discoverer of element 87. All other previous purported discoveries of element 87 were ruled out due to francium's very limited half-life.
Eka-caesium was discovered on January 7, 1939, by Marguerite Perey of the Curie Institute in Paris, when she purified a sample of actinium-227 which had been reported to have a decay energy of 220 keV. Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one which was separated during purification, but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, produced by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure which she later revised to 1%.
Perey named the new isotope actinium-K (it is now referred to as francium-223) and in 1946, she proposed the name catium (Cm) for her newly discovered element, as she believed it to be the most electropositive cation of the elements. Irène Joliot-Curie, one of Perey's supervisors, opposed the name due to its connotation of cat rather than cation; furthermore, the symbol coincided with that which had since been assigned to curium. Perey then suggested francium, after France. This name was officially adopted by the International Union of Pure and Applied Chemistry (IUPAC) in 1949, becoming the second element after gallium to be named after France. It was assigned the symbol Fa, but it was revised to the current Fr shortly thereafter. Francium was the last element discovered in nature, rather than synthesized, following hafnium and rhenium. Further research into francium's structure was carried out by, among others, Sylvain Lieberman and his team at CERN in the 1970s and 1980s.
Francium-223 is the result of the alpha decay of actinium-227 and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 1 × 10^18 uranium atoms. Only about one ounce of francium is present naturally in the earth's crust.
Francium can be synthesized by a fusion reaction when a gold-197 target is bombarded with a beam of oxygen-18 atoms from a linear accelerator in a process originally developed at the physics department of the State University of New York at Stony Brook in 1995. Depending on the energy of the oxygen beam, the reaction can yield francium isotopes with masses of 209, 210, and 211.
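The mass numbers quoted above follow from simple bookkeeping: fusing gold-197 (79 protons) with oxygen-18 (8 protons) gives a francium (87-proton) compound nucleus of mass 215, and evaporating a few neutrons leaves the isotopes listed. The snippet below is only an arithmetic illustration of that accounting.

```python
# Mass-number bookkeeping for the gold-197 + oxygen-18 fusion route described above.
AU_197, O_18 = 197, 18
compound_mass = AU_197 + O_18   # francium-215 compound nucleus (Z = 79 + 8 = 87)
for neutrons_out in (4, 5, 6):
    print(f"{neutrons_out} neutrons evaporated -> francium-{compound_mass - neutrons_out}")
# 4 -> francium-211, 5 -> francium-210, 6 -> francium-209
```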
The francium atoms leave the gold target as ions, which are neutralized by collision with yttrium and then isolated in a magneto-optical trap (MOT) in a gaseous unconsolidated state. Although the atoms only remain in the trap for about 30 seconds before escaping or undergoing nuclear decay, the process supplies a continual stream of fresh atoms. The result is a steady state containing a fairly constant number of atoms for a much longer time. The original apparatus could trap up to a few thousand atoms, while a later improved design could trap over 300,000 at a time. Sensitive measurements of the light emitted and absorbed by the trapped atoms provided the first experimental results on various transitions between atomic energy levels in francium. Initial measurements show very good agreement between experimental values and calculations based on quantum theory. The research project using this production method relocated to TRIUMF in 2012, where over 10 francium atoms have been held at a time, including large amounts of Fr in addition to Fr and Fr.
Other synthesis methods include bombarding radium with neutrons, and bombarding thorium with protons, deuterons, or helium ions.
Francium-223 can also be isolated from samples of its parent actinium-227, the francium being milked via elution with NH₄Cl–CrO₃ from an actinium-containing cation exchanger and purified by passing the solution through a silicon dioxide compound loaded with barium sulfate.
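Because francium-223 regrows in the actinium parent with its own 22-minute half-life, such a generator can be milked repeatedly; a minimal sketch of the regrowth fraction as a function of waiting time (assuming the parent activity is effectively constant over a few hours, which holds since actinium-227 has a half-life of roughly 22 years) is:

```python
# Fraction of the equilibrium Fr-223 activity regrown t minutes after milking,
# assuming the Ac-227 parent activity is constant on this timescale.
FR223_HALF_LIFE_MIN = 22.0  # quoted in this article

def regrown_fraction(t_min: float) -> float:
    return 1.0 - 2.0 ** (-t_min / FR223_HALF_LIFE_MIN)

for t in (22, 60, 120):
    print(f"{t:>3} min after milking: {regrown_fraction(t):.0%} of equilibrium activity")
# 22 min -> 50%, 60 min -> ~85%, 120 min -> ~98%
```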
In 1996, the Stony Brook group trapped 3000 atoms in their MOT, which was enough for a video camera to capture the light given off by the atoms as they fluoresce. Francium has not been synthesized in amounts large enough to weigh. | [
{
"paragraph_id": 0,
"text": "Francium is a chemical element; it has symbol Fr and atomic number 87. It is extremely radioactive; its most stable isotope, francium-223 (originally called actinium K after the natural decay chain in which it appears), has a half-life of only 22 minutes. It is the second-most electropositive element, behind only caesium, and is the second rarest naturally occurring element (after astatine). Francium's isotopes decay quickly into astatine, radium, and radon. The electronic structure of a francium atom is [Rn] 7s; thus, the element is classed as an alkali metal.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Bulk francium has never been seen. Because of the general appearance of the other elements in its periodic table column, it is presumed that francium would appear as a highly reactive metal if enough could be collected together to be viewed as a bulk solid or liquid. Obtaining such a sample is highly improbable since the extreme heat of decay resulting from its short half-life would immediately vaporize any viewable quantity of the element.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Francium was discovered by Marguerite Perey in France (from which the element takes its name) in 1939. Before its discovery, francium was referred to as eka-caesium or ekacaesium because of its conjectured existence below caesium in the periodic table. It was the last element first discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium ores, where the isotope francium-223 (in the family of uranium-235) continually forms and decays. As little as 200–500 g exists at any given time throughout the Earth's crust; aside from francium-223 and francium-221, its other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Francium is one of the most unstable of the naturally occurring elements: its longest-lived isotope, francium-223, has a half-life of only 22 minutes. The only comparable element is astatine, whose most stable natural isotope, astatine-219 (the alpha daughter of francium-223), has a half-life of 56 seconds, although synthetic astatine-210 is much longer-lived with a half-life of 8.1 hours. All isotopes of francium decay into astatine, radium, or radon. Francium-223 also has a shorter half-life than the longest-lived isotope of each synthetic element up to and including element 105, dubnium.",
"title": "Characteristics"
},
{
"paragraph_id": 4,
"text": "Francium is an alkali metal whose chemical properties mostly resemble those of caesium. A heavy element with a single valence electron, it has the highest equivalent weight of any element. Liquid francium—if created—should have a surface tension of 0.05092 N/m at its melting point. Francium's melting point was estimated to be around 8.0 °C (46.4 °F); a value of 27 °C (81 °F) is also often encountered. The melting point is uncertain because of the element's extreme rarity and radioactivity; a different extrapolation based on Dmitri Mendeleev's method gave 20 ± 1.5 °C (68.0 ± 2.7 °F). A calculation based on the melting temperatures of binary ionic crystals gives 24.861 ± 0.517 °C (76.750 ± 0.931 °F). The estimated boiling point of 620 °C (1,148 °F) is also uncertain; the estimates 598 °C (1,108 °F) and 677 °C (1,251 °F), as well as the extrapolation from Mendeleev's method of 640 °C (1,184 °F), have also been suggested. The density of francium is expected to be around 2.48 g/cm (Mendeleev's method extrapolates 2.4 g/cm).",
"title": "Characteristics"
},
{
"paragraph_id": 5,
"text": "Linus Pauling estimated the electronegativity of francium at 0.7 on the Pauling scale, the same as caesium; the value for caesium has since been refined to 0.79, but there are no experimental data to allow a refinement of the value for francium. Francium has a slightly higher ionization energy than caesium, 392.811(4) kJ/mol as opposed to 375.7041(2) kJ/mol for caesium, as would be expected from relativistic effects, and this would imply that caesium is the less electronegative of the two. Francium should also have a higher electron affinity than caesium and the Fr ion should be more polarizable than the Cs ion.",
"title": "Characteristics"
},
{
"paragraph_id": 6,
"text": "As a result of francium being very unstable, its salts are only known to a small extent. Francium coprecipitates with several caesium salts, such as caesium perchlorate, which results in small amounts of francium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. It will additionally coprecipitate with many other caesium salts, including the iodate, the picrate, the tartrate (also rubidium tartrate), the chloroplatinate, and the silicotungstate. It also coprecipitates with silicotungstic acid, and with perchloric acid, without another alkali metal as a carrier, which leads to other methods of separation.",
"title": "Compounds"
},
{
"paragraph_id": 7,
"text": "Francium perchlorate is produced by the reaction of francium chloride and sodium perchlorate. The francium perchlorate coprecipitates with caesium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. However, this method is unreliable in separating thallium, which also coprecipitates with caesium. Francium perchlorate's entropy is expected to be 42.7 e.u (178.7 J mol K).",
"title": "Compounds"
},
{
"paragraph_id": 8,
"text": "Francium halides are all soluble in water and are expected to be white solids. They are expected to be produced by the reaction of the corresponding halogens. For example, francium chloride would be produced by the reaction of francium and chlorine. Francium chloride has been studied as a pathway to separate francium from other elements, by using the high vapour pressure of the compound, although francium fluoride would have a higher vapour pressure.",
"title": "Compounds"
},
{
"paragraph_id": 9,
"text": "Francium nitrate, sulfate, hydroxide, carbonate, acetate, and oxalate, are all soluble in water, while the iodate, picrate, tartrate, chloroplatinate, and silicotungstate are insoluble. The insolubility of these compounds are used to extract francium from other radioactive products, such as zirconium, niobium, molybdenum, tin, antimony, the method mentioned in the section above. The CsFr molecule is predicted to have francium at the negative end of the dipole, unlike all known heterodiatomic alkali metal molecules. Francium superoxide (FrO2) is expected to have a more covalent character than its lighter congeners; this is attributed to the 6p electrons in francium being more involved in the francium–oxygen bonding. The relativistic destabilisation of the 6p3/2 spinor may make francium compounds in oxidation states higher than +1 possible, such as [FrF6]; but this has not been experimentally confirmed.",
"title": "Compounds"
},
{
"paragraph_id": 10,
"text": "The only double salt known of francium has the formula Fr9Bi2I9.",
"title": "Compounds"
},
{
"paragraph_id": 11,
"text": "There are 37 known isotopes of francium ranging in atomic mass from 197 to 233. Francium has seven metastable nuclear isomers. Francium-223 and francium-221 are the only isotopes that occur in nature, with the former being far more common.",
"title": "Isotopes"
},
{
"paragraph_id": 12,
"text": "Francium-223 is the most stable isotope, with a half-life of 21.8 minutes, and it is highly unlikely that an isotope of francium with a longer half-life will ever be discovered or synthesized. Francium-223 is a fifth product of the uranium-235 decay series as a daughter isotope of actinium-227; thorium-227 is the more common daughter. Francium-223 then decays into radium-223 by beta decay (1.149 MeV decay energy), with a minor (0.006%) alpha decay path to astatine-219 (5.4 MeV decay energy).",
"title": "Isotopes"
},
{
"paragraph_id": 13,
"text": "Francium-221 has a half-life of 4.8 minutes. It is the ninth product of the neptunium decay series as a daughter isotope of actinium-225. Francium-221 then decays into astatine-217 by alpha decay (6.457 MeV decay energy). Although all primordial Np is extinct, the neptunium decay series continues to exist naturally in tiny traces due to (n,2n) knockout reactions in natural U.",
"title": "Isotopes"
},
{
"paragraph_id": 14,
"text": "The least stable ground state isotope is francium-215, with a half-life of 90 ns: it undergoes a 9.54 MeV alpha decay to astatine-211.",
"title": "Isotopes"
},
{
"paragraph_id": 15,
"text": "Due to its instability and rarity, there are no commercial applications for francium. It has been used for research purposes in the fields of chemistry and of atomic structure. Its use as a potential diagnostic aid for various cancers has also been explored, but this application has been deemed impractical.",
"title": "Applications"
},
{
"paragraph_id": 16,
"text": "Francium's ability to be synthesized, trapped, and cooled, along with its relatively simple atomic structure, has made it the subject of specialized spectroscopy experiments. These experiments have led to more specific information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels which are fairly similar to those predicted by quantum theory.",
"title": "Applications"
},
{
"paragraph_id": 17,
"text": "As early as 1870, chemists thought that there should be an alkali metal beyond caesium, with an atomic number of 87. It was then referred to by the provisional name eka-caesium. Research teams attempted to locate and isolate this missing element, and at least four false claims were made that the element had been found before an authentic discovery was made.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1914, Stefan Meyer, Viktor F. Hess, and Friedrich Paneth (working in Vienna) made measurements of alpha radiation from various substances, including Ac. They observed the possibility of a minor alpha branch of this nuclide, though follow-up work could not be done due to the outbreak of World War I. Their observations were not precise and sure enough for them to announce the discovery of element 87, though it is likely that they did indeed observe the decay of Ac to Fr.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Soviet chemist Dmitry Dobroserdov was the first scientist to claim to have found eka-caesium, or francium. In 1925, he observed weak radioactivity in a sample of potassium, another alkali metal, and incorrectly concluded that eka-caesium was contaminating the sample (the radioactivity from the sample was from the naturally occurring potassium radioisotope, potassium-40). He then published a thesis on his predictions of the properties of eka-caesium, in which he named the element russium after his home country. Shortly thereafter, Dobroserdov began to focus on his teaching career at the Polytechnic Institute of Odesa, and he did not pursue the element further.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The following year, English chemists Gerald J. F. Druce and Frederick H. Loring analyzed X-ray photographs of manganese(II) sulfate. They observed spectral lines which they presumed to be of eka-caesium. They announced their discovery of element 87 and proposed the name alkalinium, as it would be the heaviest alkali metal.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In 1930, Fred Allison of the Alabama Polytechnic Institute claimed to have discovered element 87 (in addition to 85) when analyzing pollucite and lepidolite using his magneto-optical machine. Allison requested that it be named virginium after his home state of Virginia, along with the symbols Vi and Vm. In 1934, H.G. MacPherson of UC Berkeley disproved the effectiveness of Allison's device and the validity of his discovery.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In 1936, Romanian physicist Horia Hulubei and his French colleague Yvette Cauchois also analyzed pollucite, this time using their high-resolution X-ray apparatus. They observed several weak emission lines, which they presumed to be those of element 87. Hulubei and Cauchois reported their discovery and proposed the name moldavium, along with the symbol Ml, after Moldavia, the Romanian province where Hulubei was born. In 1937, Hulubei's work was criticized by American physicist F. H. Hirsh Jr., who rejected Hulubei's research methods. Hirsh was certain that eka-caesium would not be found in nature, and that Hulubei had instead observed mercury or bismuth X-ray lines. Hulubei insisted that his X-ray apparatus and methods were too accurate to make such a mistake. Because of this, Jean Baptiste Perrin, Nobel Prize winner and Hulubei's mentor, endorsed moldavium as the true eka-caesium over Marguerite Perey's recently discovered francium. Perey took pains to be accurate and detailed in her criticism of Hulubei's work, and finally she was credited as the sole discoverer of element 87. All other previous purported discoveries of element 87 were ruled out due to francium's very limited half-life.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Eka-caesium was discovered on January 7, 1939, by Marguerite Perey of the Curie Institute in Paris, when she purified a sample of actinium-227 which had been reported to have a decay energy of 220 keV. Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one which was separated during purification, but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, produced by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure which she later revised to 1%.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Perey named the new isotope actinium-K (it is now referred to as francium-223) and in 1946, she proposed the name catium (Cm) for her newly discovered element, as she believed it to be the most electropositive cation of the elements. Irène Joliot-Curie, one of Perey's supervisors, opposed the name due to its connotation of cat rather than cation; furthermore, the symbol coincided with that which had since been assigned to curium. Perey then suggested francium, after France. This name was officially adopted by the International Union of Pure and Applied Chemistry (IUPAC) in 1949, becoming the second element after gallium to be named after France. It was assigned the symbol Fa, but it was revised to the current Fr shortly thereafter. Francium was the last element discovered in nature, rather than synthesized, following hafnium and rhenium. Further research into francium's structure was carried out by, among others, Sylvain Lieberman and his team at CERN in the 1970s and 1980s.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Fr is the result of the alpha decay of Ac and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 1 × 10 uranium atoms. Only about one ounce of francium is present naturally in the earth's crust.",
"title": "Occurrence"
},
{
"paragraph_id": 26,
"text": "Francium can be synthesized by a fusion reaction when a gold-197 target is bombarded with a beam of oxygen-18 atoms from a linear accelerator in a process originally developed at the physics department of the State University of New York at Stony Brook in 1995. Depending on the energy of the oxygen beam, the reaction can yield francium isotopes with masses of 209, 210, and 211.",
"title": "Production"
},
{
"paragraph_id": 27,
"text": "The francium atoms leave the gold target as ions, which are neutralized by collision with yttrium and then isolated in a magneto-optical trap (MOT) in a gaseous unconsolidated state. Although the atoms only remain in the trap for about 30 seconds before escaping or undergoing nuclear decay, the process supplies a continual stream of fresh atoms. The result is a steady state containing a fairly constant number of atoms for a much longer time. The original apparatus could trap up to a few thousand atoms, while a later improved design could trap over 300,000 at a time. Sensitive measurements of the light emitted and absorbed by the trapped atoms provided the first experimental results on various transitions between atomic energy levels in francium. Initial measurements show very good agreement between experimental values and calculations based on quantum theory. The research project using this production method relocated to TRIUMF in 2012, where over 10 francium atoms have been held at a time, including large amounts of Fr in addition to Fr and Fr.",
"title": "Production"
},
{
"paragraph_id": 28,
"text": "Other synthesis methods include bombarding radium with neutrons, and bombarding thorium with protons, deuterons, or helium ions.",
"title": "Production"
},
{
"paragraph_id": 29,
"text": "Fr can also be isolated from samples of its parent Ac, the francium being milked via elution with NH4Cl–CrO3 from an actinium-containing cation exchanger and purified by passing the solution through a silicon dioxide compound loaded with barium sulfate.",
"title": "Production"
},
{
"paragraph_id": 30,
"text": "In 1996, the Stony Brook group trapped 3000 atoms in their MOT, which was enough for a video camera to capture the light given off by the atoms as they fluoresce. Francium has not been synthesized in amounts large enough to weigh.",
"title": "Production"
},
{
"paragraph_id": 31,
"text": "",
"title": "External links"
}
]
| Francium is a chemical element; it has symbol Fr and atomic number 87. It is extremely radioactive; its most stable isotope, francium-223, has a half-life of only 22 minutes. It is the second-most electropositive element, behind only caesium, and is the second rarest naturally occurring element. Francium's isotopes decay quickly into astatine, radium, and radon. The electronic structure of a francium atom is [Rn] 7s1; thus, the element is classed as an alkali metal. Bulk francium has never been seen. Because of the general appearance of the other elements in its periodic table column, it is presumed that francium would appear as a highly reactive metal if enough could be collected together to be viewed as a bulk solid or liquid. Obtaining such a sample is highly improbable since the extreme heat of decay resulting from its short half-life would immediately vaporize any viewable quantity of the element. Francium was discovered by Marguerite Perey in France in 1939. Before its discovery, francium was referred to as eka-caesium or ekacaesium because of its conjectured existence below caesium in the periodic table. It was the last element first discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium ores, where the isotope francium-223 continually forms and decays. As little as 200–500 g exists at any given time throughout the Earth's crust; aside from francium-223 and francium-221, its other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms. | 2001-05-17T14:26:16Z | 2023-12-27T19:19:08Z | [
"Template:Short description",
"Template:Main",
"Template:Use mdy dates",
"Template:Convert",
"Template:Pp-move-indef",
"Template:ISBN",
"Template:Subject bar",
"Template:Anchor",
"Template:Multiple image",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite book",
"Template:Cite magazine",
"Template:Ullmann",
"Template:Periodic table (navbox)",
"Template:NoteTag",
"Template:NUBASE2020",
"Template:E",
"Template:Cite conference",
"Template:Infobox francium",
"Template:NoteFoot",
"Template:Cite web",
"Template:Webarchive",
"Template:Cite report",
"Template:Authority control",
"Template:-",
"Template:Featured article"
]
| https://en.wikipedia.org/wiki/Francium |
10,822 | Fermium | Fermium is a synthetic chemical element; it has symbol Fm and atomic number 100. It is an actinide and the heaviest element that can be formed by neutron bombardment of lighter elements, and hence the last element that can be prepared in macroscopic quantities, although pure fermium metal has not yet been prepared. A total of 20 isotopes are known, with ²⁵⁷Fm being the longest-lived with a half-life of 100.5 days.
It was discovered in the debris of the first hydrogen bomb explosion in 1952, and named after Enrico Fermi, one of the pioneers of nuclear physics. Its chemistry is typical for the late actinides, with a preponderance of the +3 oxidation state but also an accessible +2 oxidation state. Owing to the small amounts of produced fermium and all of its isotopes having relatively short half-lives, there are currently no uses for it outside basic scientific research.
Fermium was first discovered in the fallout from the 'Ivy Mike' nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Initial examination of the debris from the explosion had shown the production of a new isotope of plutonium, ²⁴⁴₉₄Pu: this could only have formed by the absorption of six neutrons by a uranium-238 nucleus followed by two β⁻ decays. At the time, the absorption of neutrons by a heavy nucleus was thought to be a rare process, but the identification of ²⁴⁴₉₄Pu raised the possibility that still more neutrons could have been absorbed by the uranium nuclei, leading to new elements.
Element 99 (einsteinium) was quickly discovered on filter papers which had been flown through the cloud from the explosion (the same sampling technique that had been used to discover ²⁴⁴₉₄Pu). It was then identified in December 1952 by Albert Ghiorso and co-workers at the University of California at Berkeley. They discovered the isotope ²⁵³Es (half-life 20.5 days) that was made by the capture of 15 neutrons by uranium-238 nuclei – which then underwent seven successive beta decays:
Some uranium-238 atoms, however, could capture a larger number of neutrons (most likely 16 or 17).
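The element and mass numbers of these capture products follow from simple bookkeeping: each captured neutron raises the mass number by one, and each subsequent beta decay raises the atomic number by one. A small sketch of that arithmetic (the element lookup table is included only for illustration):

```python
# Nucleon/charge bookkeeping for rapid neutron capture on U-238 followed by beta decays.
# Starting nuclide: Z = 92 (uranium), A = 238.
ELEMENTS = {99: "Es", 100: "Fm"}

def capture_then_beta(n_captures: int, n_betas: int) -> str:
    z = 92 + n_betas          # each beta(-) decay converts a neutron into a proton
    a = 238 + n_captures      # each neutron capture adds one mass unit
    return f"{ELEMENTS.get(z, 'Z=' + str(z))}-{a}"

print(capture_then_beta(15, 7))   # Es-253, as described above
print(capture_then_beta(17, 8))   # Fm-255, consistent with the isotope identified below
```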
The discovery of fermium (Z = 100) required more material, as the yield was expected to be at least an order of magnitude lower than that of element 99, and so contaminated coral from the Enewetak atoll (where the test had taken place) was shipped to the University of California Radiation Laboratory in Berkeley, California, for processing and analysis. About two months after the test, a new component was isolated emitting high-energy α-particles (7.1 MeV) with a half-life of about a day. With such a short half-life, it could only arise from the β⁻ decay of an isotope of einsteinium, and so had to be an isotope of the new element 100: it was quickly identified as ²⁵⁵Fm (t½ = 20.07(7) hours).
The discovery of the new elements, and the new data on neutron capture, were initially kept secret on the orders of the U.S. military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team was able to prepare elements 99 and 100 by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that these were not the first studies that had been carried out on the elements. The "Ivy Mike" studies were declassified and published in 1955.
The Berkeley team had been worried that another group might discover lighter isotopes of element 100 through ion-bombardment techniques before they could publish their classified research, and this proved to be the case. A group at the Nobel Institute for Physics in Stockholm independently discovered the element, producing an isotope later confirmed to be ²⁵⁰Fm (t½ = 30 minutes) by bombarding a ²³⁸₉₂U target with oxygen-16 ions, and published their work in May 1954. Nevertheless, the priority of the Berkeley team was generally recognized, and with it the prerogative to name the new element in honour of Enrico Fermi, the developer of the first artificial self-sustained nuclear reactor. Fermi was still alive when the name was proposed, but had died by the time it became official.
There are 20 isotopes of fermium listed in NUBASE 2016, with atomic weights of 241 to 260, of which ²⁵⁷Fm is the longest-lived with a half-life of 100.5 days. ²⁵³Fm has a half-life of 3 days, while ²⁵¹Fm of 5.3 h, ²⁵²Fm of 25.4 h, ²⁵⁴Fm of 3.2 h, ²⁵⁵Fm of 20.1 h, and ²⁵⁶Fm of 2.6 hours. All the remaining ones have half-lives ranging from 30 minutes to less than a millisecond. The neutron capture product of fermium-257, ²⁵⁸Fm, undergoes spontaneous fission with a half-life of just 370(14) microseconds; ²⁵⁹Fm and ²⁶⁰Fm are also unstable with respect to spontaneous fission (t½ = 1.5(3) s and 4 ms respectively). This means that neutron capture cannot be used to create nuclides with a mass number greater than 257, unless carried out in a nuclear explosion. As ²⁵⁷Fm is an α-emitter, decaying to ²⁵³Cf, and no known fermium isotopes undergo beta minus decay to the next element, mendelevium, fermium is also the last element that can be prepared by a neutron-capture process. Because of this impediment in forming heavier isotopes, these short-lived isotopes ²⁵⁸–²⁶⁰Fm constitute the so-called "fermium gap."
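The scale of this gap can be illustrated with a rough competition calculation: at reactor neutron fluxes, a nucleus with a 370-microsecond spontaneous-fission half-life almost always fissions before it can capture another neutron. The flux and capture cross-section used below are assumed round numbers for illustration, not values from this article:

```python
import math

# Competition between spontaneous fission of Fm-258 and further neutron capture.
# Assumed illustrative values (not from this article):
flux = 1e15                 # neutrons / (cm^2 * s), typical high-flux reactor order of magnitude
sigma_capture = 30e-24      # cm^2 (30 barn), assumed capture cross-section

sf_half_life = 370e-6       # s, the Fm-258 half-life quoted above
decay_rate = math.log(2) / sf_half_life      # ~1.9e3 per second
capture_rate = flux * sigma_capture          # ~3e-8 per second

p_capture = capture_rate / (capture_rate + decay_rate)
print(f"chance of capturing a neutron before fissioning ≈ {p_capture:.1e}")  # ~1e-11
```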
Fermium is produced by the bombardment of lighter actinides with neutrons in a nuclear reactor. Fermium-257 is the heaviest isotope that is obtained via neutron capture, and can only be produced in picogram quantities. The major source is the 85 MW High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, USA, which is dedicated to the production of transcurium (Z > 96) elements. Lower mass fermium isotopes are available in greater quantities, though these isotopes (²⁵⁴Fm and ²⁵⁵Fm) are comparatively short-lived. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium and einsteinium, and picogram quantities of fermium. However, nanogram quantities of fermium can be prepared for specific experiments. The quantities of fermium produced in 20–200 kiloton thermonuclear explosions are believed to be of the order of milligrams, although the fermium is mixed in with a huge quantity of debris; 4.0 picograms of ²⁵⁷Fm was recovered from 10 kilograms of debris from the "Hutch" test (16 July 1969). The Hutch experiment produced an estimated total of 250 micrograms of ²⁵⁷Fm.
After production, the fermium must be separated from other actinides and from lanthanide fission products. This is usually achieved by ion-exchange chromatography, with the standard process using a cation exchanger such as Dowex 50 or TEVA eluted with a solution of ammonium α-hydroxyisobutyrate. Smaller cations form more stable complexes with the α-hydroxyisobutyrate anion, and so are preferentially eluted from the column. A rapid fractional crystallization method has also been described.
Although the most stable isotope of fermium is ²⁵⁷Fm, with a half-life of 100.5 days, most studies are conducted on ²⁵⁵Fm (t½ = 20.07(7) hours), since this isotope can be easily isolated as required as the decay product of ²⁵⁵Es (t½ = 39.8(12) days).
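That isolation scheme works because ²⁵⁵Fm grows back into a purified ²⁵⁵Es sample on a timescale set by its own 20-hour half-life. A minimal sketch of the parent–daughter ingrowth, using only the two half-lives quoted above and the standard two-member Bateman equation (and ignoring, as an approximation, the small alpha-decay branch of ²⁵⁵Es), is:

```python
import math

# Ingrowth of Fm-255 in an initially pure Es-255 sample (two-member Bateman equation).
ES255_HALF_LIFE_H = 39.8 * 24      # parent half-life, quoted above as 39.8 days
FM255_HALF_LIFE_H = 20.07          # daughter half-life, quoted above

lam_p = math.log(2) / ES255_HALF_LIFE_H
lam_d = math.log(2) / FM255_HALF_LIFE_H

def fm_atoms_per_initial_es(t_hours: float) -> float:
    """Fraction of the initial Es-255 atoms present as Fm-255 at time t."""
    return lam_p / (lam_d - lam_p) * (math.exp(-lam_p * t_hours) - math.exp(-lam_d * t_hours))

for t in (24, 72, 168):
    print(f"after {t:>3} h: {fm_atoms_per_initial_es(t):.3f} Fm-255 atoms per initial Es-255 atom")
# approaches the transient-equilibrium level of about 0.02 within a few days
```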
The analysis of the debris at the 10-megaton Ivy Mike nuclear test was part of a long-term project, one of the goals of which was studying the efficiency of production of transuranium elements in high-power nuclear explosions. The motivation for these experiments was as follows: synthesis of such elements from uranium requires multiple neutron capture. The probability of such events increases with the neutron flux, and nuclear explosions are the most powerful neutron sources, providing densities of the order of 10²³ neutrons/cm² within a microsecond, i.e. about 10²⁹ neutrons/(cm²·s). In comparison, the flux of the HFIR reactor is 5×10¹⁵ neutrons/(cm²·s). A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of debris, as some isotopes could have decayed by the time the debris samples reached the U.S. The laboratory received samples for analysis, as soon as possible, from airplanes equipped with paper filters which flew over the atoll after the tests. Although it was hoped that new chemical elements heavier than fermium would be discovered, none were found after a series of megaton explosions conducted between 1954 and 1956 at the atoll.
The atmospheric results were supplemented by the underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions conducted in confined space might result in improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium were also tried, as well as a mixed plutonium-neptunium charge. These were less successful in terms of yield, which was attributed to stronger losses of heavy isotopes due to enhanced fission rates in heavy-element charges. Isolation of the products was found to be rather problematic, as the explosions spread debris by melting and vaporizing rock at depths of 300–600 meters, and drilling to such depths in order to extract the products was both slow and inefficient in terms of collected volumes.
Among the nine underground tests, which were carried out between 1962 and 1969 and codenamed Anacostia (5.2 kilotons, 1962), Kennebec (<5 kilotons, 1963), Par (38 kilotons, 1964), Barbel (<20 kilotons, 1964), Tweed (<20 kilotons, 1965), Cyclamen (13 kilotons, 1966), Kankakee (20-200 kilotons, 1966), Vulcan (25 kilotons, 1966) and Hutch (20-200 kilotons, 1969), the last one was the most powerful and had the highest yield of transuranium elements. As a function of atomic mass number, the yield showed a saw-tooth behavior, with lower values for odd-mass isotopes due to their higher fission rates. The major practical problem of the entire proposal, however, was collecting the radioactive debris dispersed by the powerful blast. Aircraft filters adsorbed only about 4×10⁻¹⁴ of the total amount, and collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rocks 60 days after the Hutch explosion recovered only about 10⁻⁷ of the total charge. The amount of transuranium elements in this 500-kg batch was only 30 times higher than in a 0.4 kg rock picked up 7 days after the test. This observation demonstrated the highly nonlinear dependence of the transuranium element yield on the amount of retrieved radioactive rock. In order to accelerate sample collection after the explosion, shafts were drilled at the site not after but before the test, so that the explosion would expel radioactive material from the epicenter, through the shafts, to collecting volumes near the surface. This method was tried in the Anacostia and Kennebec tests and instantly provided hundreds of kilograms of material, but with an actinide concentration 3 times lower than in samples obtained after drilling; while such a method could have been efficient for scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides.
Although no new elements (apart from einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranium elements were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories. For example, 6×10¹⁰ atoms of ²⁵⁷Fm could be recovered after the Hutch detonation. They were then used in studies of the thermal-neutron-induced fission of ²⁵⁷Fm and in the discovery of a new fermium isotope, ²⁵⁸Fm. Also, the rare isotope ²⁵⁰Cm was synthesized in large quantities; it is very difficult to produce in nuclear reactors from its progenitor ²⁴⁹Cm, whose half-life (64 minutes) is much too short for months-long reactor irradiations but is very "long" on the explosion timescale.
Because of the short half-lives of all isotopes of fermium, any primordial fermium (that is, fermium that could have been present on the Earth during its formation) has decayed by now. Synthesis of fermium from the naturally occurring actinides uranium and thorium in the Earth's crust requires multiple neutron captures, which is an extremely unlikely event. Therefore, most fermium is produced on Earth in scientific laboratories, high-power nuclear reactors, or nuclear weapons tests, and is present only within a few months of its synthesis. The transuranic elements from americium to fermium did occur naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
The chemistry of fermium has only been studied in solution using tracer techniques, and no solid compounds have been prepared. Under normal conditions, fermium exists in solution as the Fm³⁺ ion, which has a hydration number of 16.9 and an acid dissociation constant of 1.6×10⁻⁴ (pKa = 3.8). Fm³⁺ forms complexes with a wide variety of organic ligands with hard donor atoms such as oxygen, and these complexes are usually more stable than those of the preceding actinides. It also forms anionic complexes with ligands such as chloride or nitrate and, again, these complexes appear to be more stable than those formed by einsteinium or californium. It is believed that the bonding in the complexes of the later actinides is mostly ionic in character: the Fm³⁺ ion is expected to be smaller than the preceding An³⁺ ions because of the higher effective nuclear charge of fermium, and hence fermium would be expected to form shorter and stronger metal–ligand bonds.
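The two figures quoted for the hydrated ion are mutually consistent, since the acid dissociation constant and the pKa are related by a base-10 logarithm:

```latex
\mathrm{p}K_\mathrm{a} = -\log_{10} K_\mathrm{a} = -\log_{10}\!\left(1.6 \times 10^{-4}\right) \approx 3.8
```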
Fermium(III) can be fairly easily reduced to fermium(II), for example with samarium(II) chloride, with which fermium(II) coprecipitates. In the precipitate, the compound fermium(II) chloride (FmCl₂) was produced, though it was not purified or studied in isolation. The electrode potential has been estimated to be similar to that of the ytterbium(III)/(II) couple, or about −1.15 V with respect to the standard hydrogen electrode, a value which agrees with theoretical calculations. The Fm²⁺/Fm⁰ couple has an electrode potential of −2.37(10) V based on polarographic measurements.
Although few people come in contact with fermium, the International Commission on Radiological Protection has set annual exposure limits for the two most stable isotopes. For fermium-253, the ingestion limit was set at 10⁷ becquerels (1 Bq is equivalent to one decay per second), and the inhalation limit at 10⁵ Bq; for fermium-257, at 10⁵ Bq and 4,000 Bq respectively.
{
"paragraph_id": 0,
"text": "Fermium is a synthetic chemical element; it has symbol Fm and atomic number 100. It is an actinide and the heaviest element that can be formed by neutron bombardment of lighter elements, and hence the last element that can be prepared in macroscopic quantities, although pure fermium metal has not yet been prepared. A total of 20 isotopes are known, with Fm being the longest-lived with a half-life of 100.5 days.",
"title": ""
},
{
"paragraph_id": 1,
"text": "It was discovered in the debris of the first hydrogen bomb explosion in 1952, and named after Enrico Fermi, one of the pioneers of nuclear physics. Its chemistry is typical for the late actinides, with a preponderance of the +3 oxidation state but also an accessible +2 oxidation state. Owing to the small amounts of produced fermium and all of its isotopes having relatively short half-lives, there are currently no uses for it outside basic scientific research.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Fermium was first discovered in the fallout from the 'Ivy Mike' nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Initial examination of the debris from the explosion had shown the production of a new isotope of plutonium, 94Pu: this could only have formed by the absorption of six neutrons by a uranium-238 nucleus followed by two β decays. At the time, the absorption of neutrons by a heavy nucleus was thought to be a rare process, but the identification of 94Pu raised the possibility that still more neutrons could have been absorbed by the uranium nuclei, leading to new elements.",
"title": "Discovery"
},
{
"paragraph_id": 3,
"text": "Element 99 (einsteinium) was quickly discovered on filter papers which had been flown through the cloud from the explosion (the same sampling technique that had been used to discover 94Pu). It was then identified in December 1952 by Albert Ghiorso and co-workers at the University of California at Berkeley. They discovered the isotope Es (half-life 20.5 days) that was made by the capture of 15 neutrons by uranium-238 nuclei – which then underwent seven successive beta decays:",
"title": "Discovery"
},
{
"paragraph_id": 4,
"text": "Some U atoms, however, could capture another amount of neutrons (most likely, 16 or 17).",
"title": "Discovery"
},
{
"paragraph_id": 5,
"text": "The discovery of fermium (Z = 100) required more material, as the yield was expected to be at least an order of magnitude lower than that of element 99, and so contaminated coral from the Enewetak atoll (where the test had taken place) was shipped to the University of California Radiation Laboratory in Berkeley, California, for processing and analysis. About two months after the test, a new component was isolated emitting high-energy α-particles (7.1 MeV) with a half-life of about a day. With such a short half-life, it could only arise from the β decay of an isotope of einsteinium, and so had to be an isotope of the new element 100: it was quickly identified as Fm (t = 20.07(7) hours).",
"title": "Discovery"
},
{
"paragraph_id": 6,
"text": "The discovery of the new elements, and the new data on neutron capture, was initially kept secret on the orders of the U.S. military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team was able to prepare elements 99 and 100 by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that it was not the first studies that had been carried out on the elements. The \"Ivy Mike\" studies were declassified and published in 1955.",
"title": "Discovery"
},
{
"paragraph_id": 7,
"text": "The Berkeley team had been worried that another group might discover lighter isotopes of element 100 through ion-bombardment techniques before they could publish their classified research, and this proved to be the case. A group at the Nobel Institute for Physics in Stockholm independently discovered the element, producing an isotope later confirmed to be Fm (t1/2 = 30 minutes) by bombarding a 92U target with oxygen-16 ions, and published their work in May 1954. Nevertheless, the priority of the Berkeley team was generally recognized, and with it the prerogative to name the new element in honour of Enrico Fermi, the developer of the first artificial self-sustained nuclear reactor. Fermi was still alive when the name was proposed, but had died by the time it became official.",
"title": "Discovery"
},
{
"paragraph_id": 8,
"text": "There are 20 isotopes of fermium listed in NUBASE 2016, with atomic weights of 241 to 260, of which Fm is the longest-lived with a half-life of 100.5 days. Fm has a half-life of 3 days, while Fm of 5.3 h, Fm of 25.4 h, Fm of 3.2 h, Fm of 20.1 h, and Fm of 2.6 hours. All the remaining ones have half-lives ranging from 30 minutes to less than a millisecond. The neutron capture product of fermium-257, Fm, undergoes spontaneous fission with a half-life of just 370(14) microseconds; Fm and Fm are also unstable with respect to spontaneous fission (t1/2 = 1.5(3) s and 4 ms respectively). This means that neutron capture cannot be used to create nuclides with a mass number greater than 257, unless carried out in a nuclear explosion. As Fm is an α-emitter, decaying to Cf, and no known fermium isotopes undergo beta minus decay to the next element, mendelevium, fermium is also the last element that can be prepared by a neutron-capture process. Because of this impediment in forming heavier isotopes, these short-lived isotopes Fm constitute the so-called \"fermium gap.\"",
"title": "Isotopes"
},
{
"paragraph_id": 9,
"text": "Fermium is produced by the bombardment of lighter actinides with neutrons in a nuclear reactor. Fermium-257 is the heaviest isotope that is obtained via neutron capture, and can only be produced in picogram quantities. The major source is the 85 MW High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, USA, which is dedicated to the production of transcurium (Z > 96) elements. Lower mass fermium isotopes are available in greater quantities, though these isotopes (Fm and Fm) are comparatively short-lived. In a \"typical processing campaign\" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium and einsteinium, and picogram quantities of fermium. However, nanogram quantities of fermium can be prepared for specific experiments. The quantities of fermium produced in 20–200 kiloton thermonuclear explosions is believed to be of the order of milligrams, although it is mixed in with a huge quantity of debris; 4.0 picograms of Fm was recovered from 10 kilograms of debris from the \"Hutch\" test (16 July 1969). The Hutch experiment produced an estimated total of 250 micrograms of Fm.",
"title": "Production"
},
{
"paragraph_id": 10,
"text": "After production, the fermium must be separated from other actinides and from lanthanide fission products. This is usually achieved by ion-exchange chromatography, with the standard process using a cation exchanger such as Dowex 50 or TEVA eluted with a solution of ammonium α-hydroxyisobutyrate. Smaller cations form more stable complexes with the α-hydroxyisobutyrate anion, and so are preferentially eluted from the column. A rapid fractional crystallization method has also been described.",
"title": "Production"
},
{
"paragraph_id": 11,
"text": "Although the most stable isotope of fermium is Fm, with a half-life of 100.5 days, most studies are conducted on Fm (t1/2 = 20.07(7) hours), since this isotope can be easily isolated as required as the decay product of Es (t1/2 = 39.8(12) days).",
"title": "Production"
},
{
"paragraph_id": 12,
"text": "The analysis of the debris at the 10-megaton Ivy Mike nuclear test was a part of long-term project, one of the goals of which was studying the efficiency of production of transuranium elements in high-power nuclear explosions. The motivation for these experiments was as follows: synthesis of such elements from uranium requires multiple neutron capture. The probability of such events increases with the neutron flux, and nuclear explosions are the most powerful neutron sources, providing densities of the order 10 neutrons/cm within a microsecond, i.e. about 10 neutrons/(cm·s). In comparison, the flux of the HFIR reactor is 5×10 neutrons/(cm·s). A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of debris, as some isotopes could have decayed by the time the debris samples reached the U.S. The laboratory was receiving samples for analysis, as soon as possible, from airplanes equipped with paper filters which flew over the atoll after the tests. Whereas it was hoped to discover new chemical elements heavier than fermium, those were not found after a series of megaton explosions conducted between 1954 and 1956 at the atoll.",
"title": "Synthesis in nuclear explosions"
},
{
"paragraph_id": 13,
"text": "The atmospheric results were supplemented by the underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions conducted in confined space might result in improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium have been tried, as well as a mixed plutonium-neptunium charge. They were less successful in terms of yield, which was attributed to stronger losses of heavy isotopes due to enhanced fission rates in heavy-element charges. Isolation of the products was found to be rather problematic, as the explosions were spreading debris through melting and vaporizing rocks under the great depth of 300–600 meters, and drilling to such depth in order to extract the products was both slow and inefficient in terms of collected volumes.",
"title": "Synthesis in nuclear explosions"
},
{
"paragraph_id": 14,
"text": "Among the nine underground tests, which were carried between 1962 and 1969 and codenamed Anacostia (5.2 kilotons, 1962), Kennebec (<5 kilotons, 1963), Par (38 kilotons, 1964), Barbel (<20 kilotons, 1964), Tweed (<20 kilotons, 1965), Cyclamen (13 kilotons, 1966), Kankakee (20-200 kilotons, 1966), Vulcan (25 kilotons, 1966) and Hutch (20-200 kilotons, 1969), the last one was most powerful and had the highest yield of transuranium elements. In the dependence on the atomic mass number, the yield showed a saw-tooth behavior with the lower values for odd isotopes, due to their higher fission rates. The major practical problem of the entire proposal, however, was collecting the radioactive debris dispersed by the powerful blast. Aircraft filters adsorbed only about 4×10 of the total amount and collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rocks 60 days after the Hutch explosion recovered only about 10 of the total charge. The amount of transuranium elements in this 500-kg batch was only 30 times higher than in a 0.4 kg rock picked up 7 days after the test. This observation demonstrated the highly nonlinear dependence of the transuranium elements yield on the amount of retrieved radioactive rock. In order to accelerate sample collection after explosion, shafts were drilled at the site not after but before the test, so that explosion would expel radioactive material from the epicenter, through the shafts, to collecting volumes near the surface. This method was tried in the Anacostia and Kennebec tests and instantly provided hundreds kilograms of material, but with actinide concentration 3 times lower than in samples obtained after drilling; whereas such method could have been efficient in scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides.",
"title": "Synthesis in nuclear explosions"
},
{
"paragraph_id": 15,
"text": "Although no new elements (apart from einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranium elements were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories. For example, 6×10 atoms of Fm could be recovered after the Hutch detonation. They were then used in the studies of thermal-neutron induced fission of Fm and in discovery of a new fermium isotope Fm. Also, the rare Cm isotope was synthesized in large quantities, which is very difficult to produce in nuclear reactors from its progenitor Cm; the half-life of Cm (64 minutes) is much too short for months-long reactor irradiations, but is very \"long\" on the explosion timescale.",
"title": "Synthesis in nuclear explosions"
},
{
"paragraph_id": 16,
"text": "Because of the short half-life of all isotopes of fermium, any primordial fermium, that is fermium that could be present on the Earth during its formation, has decayed by now. Synthesis of fermium from naturally occurring actinides uranium and thorium in the Earth crust requires multiple neutron capture, which is an extremely unlikely event. Therefore, most fermium is produced on Earth in scientific laboratories, high-power nuclear reactors, or in nuclear weapons tests, and is present only within a few months from the time of the synthesis. The transuranic elements from americium to fermium did occur naturally in the natural nuclear fission reactor at Oklo, but no longer do so.",
"title": "Natural occurrence"
},
{
"paragraph_id": 17,
"text": "The chemistry of fermium has only been studied in solution using tracer techniques, and no solid compounds have been prepared. Under normal conditions, fermium exists in solution as the Fm ion, which has a hydration number of 16.9 and an acid dissociation constant of 1.6×10 (pKa = 3.8). Fm forms complexes with a wide variety of organic ligands with hard donor atoms such as oxygen, and these complexes are usually more stable than those of the preceding actinides. It also forms anionic complexes with ligands such as chloride or nitrate and, again, these complexes appear to be more stable than those formed by einsteinium or californium. It is believed that the bonding in the complexes of the later actinides is mostly ionic in character: the Fm ion is expected to be smaller than the preceding An ions because of the higher effective nuclear charge of fermium, and hence fermium would be expected to form shorter and stronger metal–ligand bonds.",
"title": "Chemistry"
},
{
"paragraph_id": 18,
"text": "Fermium(III) can be fairly easily reduced to fermium(II), for example with samarium(II) chloride, with which fermium(II) coprecipitates. In the precipitate, the compound fermium(II) chloride (FmCl2) was produced, though it was not purified or studied in isolation. The electrode potential has been estimated to be similar to that of the ytterbium(III)/(II) couple, or about −1.15 V with respect to the standard hydrogen electrode, a value which agrees with theoretical calculations. The Fm/Fm couple has an electrode potential of −2.37(10) V based on polarographic measurements.",
"title": "Chemistry"
},
{
"paragraph_id": 19,
"text": "Although few people come in contact with fermium, the International Commission on Radiological Protection has set annual exposure limits for the two most stable isotopes. For fermium-253, the ingestion limit was set at 10 becquerels (1 Bq is equivalent to one decay per second), and the inhalation limit at 10 Bq; for fermium-257, at 10 Bq and 4,000 Bq respectively.",
"title": "Toxicity"
}
]
| Fermium is a synthetic chemical element; it has symbol Fm and atomic number 100. It is an actinide and the heaviest element that can be formed by neutron bombardment of lighter elements, and hence the last element that can be prepared in macroscopic quantities, although pure fermium metal has not yet been prepared. A total of 20 isotopes are known, with 257Fm being the longest-lived with a half-life of 100.5 days. It was discovered in the debris of the first hydrogen bomb explosion in 1952, and named after Enrico Fermi, one of the pioneers of nuclear physics. Its chemistry is typical for the late actinides, with a preponderance of the +3 oxidation state but also an accessible +2 oxidation state. Owing to the small amounts of produced fermium and all of its isotopes having relatively short half-lives, there are currently no uses for it outside basic scientific research. | 2001-05-17T14:26:40Z | 2023-12-25T09:36:36Z | [
"Template:NumBlk",
"Template:Clear",
"Template:NUBASE2016",
"Template:Webarchive",
"Template:Commons",
"Template:Good article",
"Template:Infobox fermium",
"Template:ISBN",
"Template:Notelist",
"Template:Greenwood&Earnshaw1st",
"Template:Nowrap",
"Template:Main",
"Template:E",
"Template:Reflist",
"Template:Cite web",
"Template:Cite journal",
"Template:Use dmy dates",
"Template:Nuclide",
"Template:Periodic table (navbox)",
"Template:Doi",
"Template:Wiktionary",
"Template:Cite book",
"Template:Authority control",
"Template:Distinguish",
"Template:Efn"
]
| https://en.wikipedia.org/wiki/Fermium |
10,823 | Frédéric Chopin | Frédéric François Chopin (born Fryderyk Franciszek Chopin; 1 March 1810 – 17 October 1849) was a Polish composer and virtuoso pianist of the Romantic period, who wrote primarily for solo piano. He has maintained worldwide renown as a leading musician of his era, one whose "poetic genius was based on a professional technique that was without equal in his generation".
Chopin was born in Żelazowa Wola and grew up in Warsaw, which in 1815 became part of Congress Poland. A child prodigy, he completed his musical education and composed his earlier works in Warsaw before leaving Poland at the age of 20, less than a month before the outbreak of the November 1830 Uprising. At 21, he settled in Paris. Thereafter he gave only 30 public performances, preferring the more intimate atmosphere of the salon. He supported himself by selling his compositions and by giving piano lessons, for which he was in high demand. Chopin formed a friendship with Franz Liszt and was admired by many of his musical contemporaries, including Robert Schumann. After a failed engagement to Maria Wodzińska from 1836 to 1837, he maintained an often troubled relationship with the French writer Aurore Dupin (known by her pen name George Sand). A brief and unhappy visit to Mallorca with Sand in 1838–39 would prove one of his most productive periods of composition. In his final years, he was supported financially by his admirer Jane Stirling. For most of his life, Chopin was in poor health. He died in Paris in 1849 at the age of 39.
Chopin's compositions are mostly for solo piano, though he also wrote two piano concertos, some chamber music, and 19 songs set to Polish lyrics. His piano pieces are technically demanding and expanded the limits of the instrument; his own performances were noted for their nuance and sensitivity. Chopin's major piano works include mazurkas, waltzes, nocturnes, polonaises, the instrumental ballade (which Chopin created as an instrumental genre), études, impromptus, scherzi, preludes, and sonatas, some published only posthumously. Among the influences on his style of composition were Polish folk music, the classical tradition of J. S. Bach, Mozart, and Schubert, and the atmosphere of the Paris salons, of which he was a frequent guest. His innovations in style, harmony, and musical form, and his association of music with nationalism, were influential throughout and after the late Romantic period.
Chopin's music, his status as one of music's earliest celebrities, his indirect association with political insurrection, his high-profile love life, and his early death have made him a leading symbol of the Romantic era. His works remain popular, and he has been the subject of numerous films and biographies of varying historical fidelity. Among his many memorials is the Fryderyk Chopin Institute, which was created by the Parliament of Poland to research and promote his life and works. It hosts the International Chopin Piano Competition, a prestigious competition devoted entirely to his works.
Frédéric Chopin was born in Żelazowa Wola, 46 kilometres (29 miles) west of Warsaw, in what was then the Duchy of Warsaw, a Polish state established by Napoleon. The parish baptismal record, which is dated 23 April 1810, gives his birthday as 22 February 1810, and cites his given names in the Latin form Fridericus Franciscus (in Polish, he was Fryderyk Franciszek). The composer and his family used the birthdate 1 March, which is now generally accepted as the correct date.
His father, Nicolas Chopin, was a Frenchman from Lorraine who had emigrated to Poland in 1787 at the age of sixteen. He married Justyna Krzyżanowska, a poor relative of the Skarbeks, one of the families for whom he worked. Chopin was baptised in the same church where his parents had married, in Brochów. His eighteen-year-old godfather, for whom he was named, was Fryderyk Skarbek, a pupil of Nicolas Chopin. Chopin was the second child of Nicolas and Justyna and their only son; he had an elder sister, Ludwika, and two younger sisters, Izabela and Emilia, whose death at the age of 14 was probably from tuberculosis. Nicolas Chopin was devoted to his adopted homeland, and insisted on the use of the Polish language in the household.
In October 1810, six months after Chopin's birth, the family moved to Warsaw, where his father acquired a post teaching French at the Warsaw Lyceum, then housed in the Saxon Palace. Chopin lived with his family on the Palace grounds. The father played the flute and violin; the mother played the piano and gave lessons to boys in the boarding house that the Chopins kept. Chopin was of slight build, and even in early childhood was prone to illnesses.
Chopin may have had some piano instruction from his mother, but his first professional music tutor, from 1816 to 1821, was the Czech pianist Wojciech Żywny. His elder sister Ludwika also took lessons from Żywny, and occasionally played duets with her brother. It quickly became apparent that he was a child prodigy. By the age of seven he had begun giving public concerts, and in 1817 he composed two polonaises, in G minor and B-flat major. His next work, a polonaise in A-flat major of 1821, dedicated to Żywny, is his earliest surviving musical manuscript.
In 1817 the Saxon Palace was requisitioned by Warsaw's Russian governor for military use, and the Warsaw Lyceum was reestablished in the Kazimierz Palace (today the rectorate of Warsaw University). Chopin and his family moved to a building, which still survives, adjacent to the Kazimierz Palace. During this period, he was sometimes invited to the Belweder Palace as playmate to the son of the ruler of Russian Poland, Grand Duke Konstantin Pavlovich of Russia; he played the piano for Konstantin Pavlovich and composed a march for him. Julian Ursyn Niemcewicz, in his dramatic eclogue, "Nasze Przebiegi" ("Our Discourses", 1818), attested to "little Chopin's" popularity.
From September 1823 to 1826, Chopin attended the Warsaw Lyceum, where he received organ lessons from the Czech musician Wilhelm Würfel during his first year. In the autumn of 1826 he began a three-year course under the Silesian composer Józef Elsner at the Warsaw Conservatory, studying music theory, figured bass, and composition. Throughout this period he continued to compose and to give recitals in concerts and salons in Warsaw. He was engaged by the inventors of the "aeolomelodicon" (a combination of piano and mechanical organ), and on this instrument in May 1825 he performed his own improvisation and part of a concerto by Moscheles. The success of this concert led to an invitation to give a recital on a similar instrument (the "aeolopantaleon") before Tsar Alexander I, who was visiting Warsaw; the Tsar presented him with a diamond ring. At a subsequent aeolopantaleon concert on 10 June 1825, Chopin performed his Rondo Op. 1. This was the first of his works to be commercially published and earned him his first mention in the foreign press, when the Leipzig Allgemeine Musikalische Zeitung praised his "wealth of musical ideas".
From 1824 until 1828 Chopin spent his vacations away from Warsaw, at a number of locations. In 1824 and 1825, at Szafarnia, he was a guest of Dominik Dziewanowski, the father of a schoolmate. Here, for the first time, he encountered Polish rural folk music. His letters home from Szafarnia (to which he gave the title "The Szafarnia Courier"), written in a very modern and lively Polish, amused his family with their spoofing of the Warsaw newspapers and demonstrated the youngster's literary gift.
In 1827, soon after the death of Chopin's youngest sister Emilia, the family moved from the Warsaw University building, adjacent to the Kazimierz Palace, to lodgings just across the street from the university, in the south annex of the Krasiński Palace on Krakowskie Przedmieście, where Chopin lived until he left Warsaw in 1830. Here his parents continued running their boarding house for male students. Four boarders at his parents' apartments became Chopin's intimates: Tytus Woyciechowski, Jan Nepomucen Białobłocki, Jan Matuszyński, and Julian Fontana. The latter two would become part of his Paris milieu.
Chopin was friendly with members of Warsaw's young artistic and intellectual world, including Fontana, Józef Bohdan Zaleski, and Stefan Witwicki. Chopin's final Conservatory report (July 1829) read: "Chopin F., third-year student, exceptional talent, musical genius." In 1829 the artist Ambroży Mieroszewski executed a set of portraits of Chopin family members, including the first known portrait of the composer.
Letters from Chopin to Woyciechowski in the period 1829–30 (when Chopin was about twenty) contain apparent homoerotic references to dreams and to offered embraces.
Now I am going to wash myself. Please do not embrace me as I have not washed yet. And you? Even if I were to anoint myself with fragrant oils from Byzantium, you would not embrace me – not unless forced to by magnetism. But there are forces in Nature! Today you will dream that you are embracing me! You have to pay for the nightmare you caused me last night.
According to Adam Zamoyski, such expressions "were, and to some extent still are, common currency in Polish and carry no greater implication than the 'love'" concluding letters today. "The spirit of the times, pervaded by the Romantic movement in art and literature, favoured extreme expression of feeling ... Whilst the possibility cannot be ruled out entirely, it is unlikely that the two were ever lovers." Chopin's biographer Alan Walker considers that, insofar as such expressions could be perceived as homosexual in nature, they would not denote more than a passing phase in Chopin's life, or be the result – in Walker's words – of a "mental twist". The musicologist Jeffrey Kallberg notes that concepts of sexual practice and identity were very different in Chopin's time, so modern interpretation is problematic. Other writers believe that these are clear, or potential, demonstrations of homosexual impulses on Chopin's part.
Probably in early 1829 Chopin met the singer Konstancja Gładkowska and developed an intense affection for her, although it is not clear that he ever addressed her directly on the matter. In a letter to Woyciechowski of 3 October 1829 he refers to his "ideal, whom I have served faithfully for six months, though without ever saying a word to her about my feelings; whom I dream of, who inspired the Adagio of my Concerto". All of Chopin's biographers, following the lead of Frederick Niecks, agree that this "ideal" was Gładkowska. After what would be Chopin's farewell concert in Warsaw in October 1830, which included the concerto, played by the composer, and Gładkowska singing an aria by Gioachino Rossini, the two exchanged rings, and two weeks later she wrote in his album some affectionate lines bidding him farewell. After Chopin left Warsaw, he and Gładkowska did not meet and apparently did not correspond.
In September 1828 Chopin, while still a student, visited Berlin with a family friend, zoologist Feliks Jarocki, enjoying operas directed by Gaspare Spontini and attending concerts by Carl Friedrich Zelter, Felix Mendelssohn, and other celebrities. On an 1829 return trip to Berlin, he was a guest of Prince Antoni Radziwiłł, governor of the Grand Duchy of Posen – himself an accomplished composer and aspiring cellist. For the prince and his pianist daughter Wanda, he composed his Introduction and Polonaise brillante in C major for cello and piano, Op. 3.
Back in Warsaw that year, Chopin heard Niccolò Paganini play the violin, and composed a set of variations, Souvenir de Paganini. It may have been this experience that encouraged him to commence writing his first Études (1829–32), exploring the capacities of his own instrument. After completing his studies at the Warsaw Conservatory, he made his debut in Vienna. He gave two piano concerts and received many favourable reviews – in addition to some commenting (in Chopin's own words) that he was "too delicate for those accustomed to the piano-bashing of local artists". In the first of these concerts, he premiered his Variations on "Là ci darem la mano", Op. 2 (variations on a duet from Mozart's opera Don Giovanni) for piano and orchestra. He returned to Warsaw in September 1829, where he premiered his Piano Concerto No. 2 in F minor, Op. 21 on 17 March 1830.
Chopin's successes as a composer and performer opened the door to western Europe for him, and on 2 November 1830, he set out, in the words of Zdzisław Jachimecki, "into the wide world, with no very clearly defined aim, forever". With Woyciechowski, he headed for Austria again, intending to go on to Italy. Later that month, in Warsaw, the November 1830 Uprising broke out, and Woyciechowski returned to Poland to enlist. Chopin, now alone in Vienna, was nostalgic for his homeland, and wrote to a friend, "I curse the moment of my departure." When in September 1831 he learned, while travelling from Vienna to Paris, that the uprising had been crushed, he expressed his anguish in the pages of his private journal: "Oh God! ... You are there, and yet you do not take vengeance!". Jachimecki ascribes to these events the composer's maturing "into an inspired national bard who intuited the past, present and future of his native Poland".
When he left Warsaw on 2 November 1830, Chopin had intended to go to Italy, but violent unrest there made that a dangerous destination. His next choice was Paris; difficulties obtaining a visa from Russian authorities resulted in his obtaining transit permission from the French. In later years he would quote the passport's endorsement "Passeport en passant par Paris à Londres" ("In transit to London via Paris"), joking that he was in the city "only in passing". Chopin arrived in Paris on 5 October 1831; he would never return to Poland, thus becoming one of many expatriates of the Polish Great Emigration. In France, he used the French versions of his given names, and after receiving French citizenship in 1835, he travelled on a French passport. However, Chopin remained close to his fellow Poles in exile as friends and confidants and he never felt fully comfortable speaking French. Chopin's biographer Adam Zamoyski writes that he never considered himself to be French, despite his father's French origins, and always saw himself as a Pole.
In Paris, Chopin encountered artists and other distinguished figures and found many opportunities to exercise his talents and achieve celebrity. During his years in Paris, he was to become acquainted with, among many others, Hector Berlioz, Franz Liszt, Ferdinand Hiller, Heinrich Heine, Eugène Delacroix, Alfred de Vigny, and Friedrich Kalkbrenner, who introduced him to the piano manufacturer Camille Pleyel. This was the beginning of a long and close association between the composer and Pleyel's instruments. Chopin was also acquainted with the poet Adam Mickiewicz, principal of the Polish Literary Society, some of whose verses he set as songs. He was also more than once a guest of the Marquis Astolphe de Custine, one of his fervent admirers, and played his works in Custine's salon.
Two Polish friends in Paris were also to play important roles in Chopin's life there. A fellow student at the Warsaw Conservatory, Julian Fontana, had originally tried unsuccessfully to establish himself in England; Fontana was to become, in the words of the music historian Jim Samson, Chopin's "general factotum and copyist". Albert Grzymała, who in Paris became a wealthy financier and society figure, often acted as Chopin's adviser and, in Zamoyski's words, "gradually began to fill the role of elder brother in [his] life".
On 7 December 1831, Chopin received the first major endorsement from an outstanding contemporary when Robert Schumann, reviewing the Op. 2 Variations in the Allgemeine musikalische Zeitung (his first published article on music), declared: "Hats off, gentlemen! A genius." On 25 February 1832 Chopin gave a debut Paris concert in the "salons de MM Pleyel" at 9 rue Cadet, which drew universal admiration. The critic François-Joseph Fétis wrote in the Revue et gazette musicale: "Here is a young man who ... taking no model, has found, if not a complete renewal of piano music, ... an abundance of original ideas of a kind to be found nowhere else ..." After this concert, Chopin realised that his essentially intimate keyboard technique was not optimal for large concert spaces. Later that year he was introduced to the wealthy Rothschild banking family, whose patronage also opened doors for him to other private salons (social gatherings of the aristocracy and artistic and literary elite). By the end of 1832 Chopin had established himself among the Parisian musical elite and had earned the respect of his peers such as Hiller, Liszt, and Berlioz. He no longer depended financially upon his father, and in the winter of 1832, he began earning a handsome income from publishing his works and teaching piano to affluent students from all over Europe. This freed him from the strains of public concert-giving, which he disliked.
Chopin seldom performed publicly in Paris. In later years he generally gave a single annual concert at the Salle Pleyel, a venue that seated three hundred. He played more frequently at salons but preferred playing at his own Paris apartment for small groups of friends. The musicologist Arthur Hedley has observed that "As a pianist Chopin was unique in acquiring a reputation of the highest order on the basis of a minimum of public appearances – few more than thirty in the course of his lifetime." The list of musicians who took part in some of his concerts indicates the richness of Parisian artistic life during this period. Examples include a concert on 23 March 1833, in which Chopin, Liszt, and Hiller performed (on pianos) a concerto by J. S. Bach for three keyboards; and, on 3 March 1838, a concert in which Chopin, his pupil Adolphe Gutmann, Charles-Valentin Alkan, and Alkan's teacher Joseph Zimmermann performed Alkan's arrangement, for eight hands, of two movements from Beethoven's 7th symphony. Chopin was also involved in the composition of Liszt's Hexameron; he wrote the sixth (and final) variation on Bellini's theme. Chopin's music soon found success with publishers, and in 1833 he contracted with Maurice Schlesinger, who arranged for it to be published not only in France but, through his family connections, also in Germany and England.
In the spring of 1834, Chopin attended the Lower Rhenish Music Festival in Aix-la-Chapelle with Hiller, and it was there that Chopin met Felix Mendelssohn. After the festival, the three visited Düsseldorf, where Mendelssohn had been appointed musical director. They spent what Mendelssohn described as "a very agreeable day", playing and discussing music at his piano, and met Friedrich Wilhelm Schadow, director of the Academy of Art, and some of his eminent pupils such as Lessing, Bendemann, Hildebrandt and Sohn. In 1835 Chopin went to Carlsbad, where he spent time with his parents; it was the last time he would see them. On his way back to Paris, he met old friends from Warsaw, the Wodzińskis, their sons, and their daughters, among them Maria, to whom he had occasionally given piano lessons in Poland. This meeting prompted him to stay for two weeks in Dresden, although he had previously intended to return to Paris via Leipzig. The sixteen-year-old girl's portrait of the composer has been considered, along with Delacroix's, as among the best likenesses of Chopin. In October he finally reached Leipzig, where he met Schumann, Clara Wieck, and Mendelssohn, who organised for him a performance of his own oratorio St. Paul, and who considered him "a perfect musician". In July 1836 Chopin travelled to Marienbad and Dresden to be with the Wodziński family, and in September he proposed to Maria, whose mother Countess Wodzińska approved in principle. Chopin went on to Leipzig, where he presented Schumann with his G minor Ballade. At the end of 1836, he sent Maria an album in which his sister Ludwika had inscribed seven of his songs, and his 1835 Nocturne in C-sharp minor, Op. 27, No. 1. The anodyne thanks he received from Maria proved to be the last letter he was to have from her. Chopin placed the letters he had received from Maria and her mother into a large envelope, wrote on it the words "My sorrow" ("Moja bieda"), and to the end of his life retained in a desk drawer this keepsake of the second love of his life.
Although it is not known exactly when Chopin first met Franz Liszt after arriving in Paris, on 12 December 1831 he mentioned in a letter to his friend Woyciechowski that "I have met Rossini, Cherubini, Baillot, etc. – also Kalkbrenner. You would not believe how curious I was about Herz, Liszt, Hiller, etc." Liszt was in attendance at Chopin's Parisian debut on 26 February 1832 at the Salle Pleyel, which led him to remark: "The most vigorous applause seemed not to suffice to our enthusiasm in the presence of this talented musician, who revealed a new phase of poetic sentiment combined with such happy innovation in the form of his art."
The two became friends, and for many years lived close to each other in Paris, Chopin at 38 Rue de la Chaussée-d'Antin, and Liszt at the Hôtel de France on the Rue Laffitte, a few blocks away. They performed together on seven occasions between 1833 and 1841. The first, on 2 April 1833, was at a benefit concert organised by Hector Berlioz for his bankrupt Shakespearean actress wife Harriet Smithson, during which they played George Onslow's Sonata in F minor for piano duet. Later joint appearances included a benefit concert for the Benevolent Association of Polish Ladies in Paris. Their last appearance together in public was for a charity concert conducted for the Beethoven Monument in Bonn, held at the Salle Pleyel and the Paris Conservatory on 25 and 26 April 1841.
Although the two displayed great respect and admiration for each other, their friendship was uneasy and had some qualities of a love–hate relationship. Harold C. Schonberg believes that Chopin displayed a "tinge of jealousy and spite" towards Liszt's virtuosity on the piano, and others have also argued that he had become disenchanted with Liszt's theatricality, showmanship, and success. Liszt was the dedicatee of Chopin's Op. 10 Études, and his performance of them prompted the composer to write to Hiller, "I should like to rob him of the way he plays my studies." However, Chopin expressed annoyance in 1843 when Liszt performed one of his nocturnes with the addition of numerous intricate embellishments, at which Chopin remarked that he should play the music as written or not play it at all, forcing an apology. Most biographers of Chopin state that after this the two had little to do with each other, although in letters dated as late as 1848 Chopin still referred to him as "my friend Liszt". Some commentators point to events in the two men's romantic lives which led to a rift between them; there are claims that Liszt had displayed jealousy of his mistress Marie d'Agoult's obsession with Chopin, while others believe that Chopin had become concerned about Liszt's growing relationship with George Sand.
In 1836, at a party hosted by Marie d'Agoult, Chopin met the French author George Sand (born [Amantine] Aurore [Lucile] Dupin). Short (under five feet, or 152 cm), dark, big-eyed and a cigar smoker, she initially repelled Chopin, who remarked, "What an unattractive person la Sand is. Is she really a woman?" However, by early 1837 Maria Wodzińska's mother had made it clear to Chopin in correspondence that a marriage with her daughter was unlikely to proceed. It is thought that she was influenced by his poor health and possibly also by rumours about his associations with women such as d'Agoult and Sand. Chopin finally placed the letters from Maria and her mother in a package on which he wrote, in Polish, "My Sorrow". Sand, in a letter to Grzymała of June 1838, admitted strong feelings for the composer and debated whether to abandon a current affair to begin a relationship with Chopin; she asked Grzymała to assess Chopin's relationship with Maria Wodzińska, without realising that the affair, at least from Maria's side, was over.
In June 1837 Chopin visited London incognito in the company of the piano manufacturer Camille Pleyel, where he played at a musical soirée at the house of English piano maker James Broadwood. On his return to Paris his association with Sand began in earnest, and by the end of June 1838 they had become lovers. Sand, who was six years older than the composer and had had a series of lovers, wrote at this time: "I must say I was confused and amazed at the effect this little creature had on me ... I have still not recovered from my astonishment, and if I were a proud person I should be feeling humiliated at having been carried away ..." The two spent a miserable winter on Majorca (8 November 1838 to 13 February 1839), where, together with Sand's two children, they had journeyed in the hope of improving Chopin's health and that of Sand's 15-year-old son Maurice, and also to escape the threats of Sand's former lover Félicien Mallefille. After discovering that the couple were not married, the deeply traditional Catholic people of Majorca became inhospitable, making accommodation difficult to find. This compelled the group to take lodgings in a former Carthusian monastery in Valldemossa, which gave little shelter from the cold winter weather.
On 3 December 1838, Chopin complained about his bad health and the incompetence of the doctors in Majorca, commenting: "Three doctors have visited me ... The first said I was dead; the second said I was dying; and the third said I was about to die." He also had problems having his Pleyel piano sent to him, having to rely in the meantime on a piano made in Palma by Juan Bauza. The Pleyel piano finally arrived from Paris in December, just shortly before Chopin and Sand left the island. Chopin wrote to Pleyel in January 1839: "I am sending you my Preludes [Op. 28]. I finished them on your little piano, which arrived in the best possible condition in spite of the sea, the bad weather and the Palma customs." Chopin was also able to undertake work while in Majorca on his Ballade No. 2, Op. 38; on two Polonaises, Op. 40; and on the Scherzo No. 3, Op. 39.
Although this period had been productive, the bad weather had such a detrimental effect on Chopin's health that Sand determined to leave the island. To avoid further customs duties, Sand sold the piano to a local French couple, the Canuts. The group travelled first to Barcelona, then to Marseilles, where they stayed for a few months while Chopin convalesced. While in Marseilles, Chopin made a rare appearance at the organ during a requiem mass for the tenor Adolphe Nourrit on 24 April 1839, playing a transcription of Franz Schubert's lied Die Sterne (D. 939). George Sand gives a description of Chopin's playing in a letter of 28 April 1839:
Chopin sacrificed himself by playing the organ at the Elevation – and what an organ! Anyhow our boy made the best of it by using the less discordant stops, and he played Schubert's Die Sterne, not with a passionate and glowing tone that Nourrit used, but with a plaintive sound as soft as an echo from another world. Two or three at most among those present felt its meaning and had tears in their eyes.
In May 1839 they headed to Sand's estate at Nohant for the summer, where they spent most of the following summers until 1846. In autumn they returned to Paris, where Chopin's apartment at 5 rue Tronchet was close to Sand's rented accommodation on the rue Pigalle. He frequently visited Sand in the evenings, but both retained some independence. (In 1842 he and Sand moved to the Square d'Orléans, living in adjacent buildings.)
On 26 July 1840 Chopin and Sand were present at the dress rehearsal of Berlioz's Grande symphonie funèbre et triomphale, composed to commemorate the tenth anniversary of the July Revolution. Chopin was reportedly unimpressed with the composition.
During the summers at Nohant, particularly in the years 1839–43 (except 1840), Chopin found quiet, productive days during which he composed many works, including his Polonaise in A-flat major, Op. 53. Sand compellingly describes Chopin's creative process: an inspiration, its painstaking elaboration – sometimes amid tormented weeping and complaining, with hundreds of changes in concept – only to return finally to the initial idea.
Among the visitors to Nohant were Delacroix and the mezzo-soprano Pauline Viardot, whom Chopin had advised on piano technique and composition. Delacroix gives an account of staying at Nohant in a letter of 7 June 1842:
The hosts could not be more pleasant in entertaining me. When we are not all together at dinner, lunch, playing billiards, or walking, each of us stays in his room, reading or lounging around on a couch. Sometimes, through the window which opens on the garden, a gust of music wafts up from Chopin at work. All this mingles with the songs of nightingales and the fragrance of roses.
From 1842 onwards, Chopin showed signs of serious illness. After a solo recital in Paris on 21 February 1842, he wrote to Grzymała: "I have to lie in bed all day long, my mouth and tonsils are aching so much." He was forced by illness to decline a written invitation from Alkan to participate in a repeat performance of the Beethoven 7th Symphony arrangement at Érard's on 1 March 1843. Late in 1844, Charles Hallé visited Chopin and found him "hardly able to move, bent like a half-opened penknife and evidently in great pain", although his spirits returned when he started to play the piano for his visitor. Chopin's health continued to deteriorate, particularly from this time onwards. Modern research suggests that apart from any other illnesses, he may also have suffered from temporal lobe epilepsy.
Chopin's output as a composer throughout this period declined in quantity year by year. Whereas in 1841 he had written a dozen works, only six were written in 1842 and six shorter pieces in 1843. In 1844 he wrote only the Op. 58 sonata. 1845 saw the completion of three mazurkas (Op. 59). Although these works were more refined than many of his earlier compositions, Zamoyski concludes that "his powers of concentration were failing and his inspiration was beset by anguish, both emotional and intellectual". Chopin's relations with Sand were soured in 1846 by problems involving her daughter Solange and Solange's fiancé, the young fortune-hunting sculptor Auguste Clésinger. The composer frequently took Solange's side in quarrels with her mother; he also faced jealousy from Sand's son Maurice. Moreover, Chopin was indifferent to Sand's radical political pursuits, including her enthusiasm for the February Revolution of 1848.
As the composer's illness progressed, Sand had become less of a lover and more of a nurse to Chopin, whom she called her "third child". In letters to third parties she vented her impatience, referring to him as a "child", a "poor angel", a "sufferer", and a "beloved little corpse". In 1847 Sand published her novel Lucrezia Floriani, whose main characters – a rich actress and a prince in weak health – could be interpreted as Sand and Chopin. In Chopin's presence, Sand read the manuscript aloud to Delacroix, who was both shocked and mystified by its implications, writing that "Madame Sand was perfectly at ease and Chopin could hardly stop making admiring comments". That year their relationship ended following an angry correspondence which, in Sand's words, made "a strange conclusion to nine years of exclusive friendship". Grzymała, who had followed their romance from the beginning, commented, "If [Chopin] had not had the misfortune of meeting G. S. [George Sand], who poisoned his whole being, he would have lived to be Cherubini's age." Chopin would die two years later at thirty-nine; the composer Luigi Cherubini had died in Paris in 1842 at the age of 81.
Chopin's public popularity as a virtuoso began to wane, as did the number of his pupils, and this, together with the political strife and instability of the time, caused him to struggle financially. In February 1848, with the cellist Auguste Franchomme, he gave his last Paris concert, which included three movements of the Cello Sonata Op. 65.
In April, during the 1848 Revolution in Paris, he left for London, where he performed at several concerts and numerous receptions in great houses. This tour was suggested to him by his Scottish pupil Jane Stirling and her elder sister. Stirling also made all the logistical arrangements and provided much of the necessary funding.
In London, Chopin took lodgings at Dover Street, where the firm of Broadwood provided him with a grand piano. At his first engagement, on 15 May at Stafford House, the audience included Queen Victoria and Prince Albert. The Prince, who was himself a talented musician, moved close to the keyboard to view Chopin's technique. Broadwood also arranged concerts for him; among those attending were the author William Makepeace Thackeray and the singer Jenny Lind. Chopin was also sought after for piano lessons, for which he charged the high fee of one guinea per hour, and for private recitals for which the fee was 20 guineas. At a concert on 7 July he shared the platform with Viardot, who sang arrangements of some of his mazurkas to Spanish texts. A few days later, he performed for Thomas Carlyle and his wife Jane at their home in Chelsea. On 28 August he played at a concert in Manchester's Gentlemen's Concert Hall, sharing the stage with Marietta Alboni and Lorenzo Salvi.
In late summer he was invited by Jane Stirling to visit Scotland, where he stayed at Calder House near Edinburgh and at Johnstone Castle in Renfrewshire, both owned by members of Stirling's family. She clearly had a notion of going beyond mere friendship, and Chopin was obliged to make it clear to her that this could not be so. He wrote at this time to Grzymała: "My Scottish ladies are kind, but such bores", and responding to a rumour about his involvement, answered that he was "closer to the grave than the nuptial bed". He gave a public concert in Glasgow on 27 September, and another in Edinburgh at the Hopetoun Rooms on Queen Street (now Erskine House) on 4 October. In late October 1848, while staying at 10 Warriston Crescent in Edinburgh with the Polish physician Adam Łyszczyński, he wrote out his last will and testament – "a kind of disposition to be made of my stuff in the future, if I should drop dead somewhere", he wrote to Grzymała.
Chopin made his last public appearance on a concert platform at London's Guildhall on 16 November 1848, when, in a final patriotic gesture, he played for the benefit of Polish refugees. The gesture proved to be a mistake: most of the participants were more interested in the dancing and refreshments than in Chopin's piano artistry, and the effort of performing drained him. By this time he was very seriously ill, weighing under 99 pounds (less than 45 kg), and his doctors were aware that his sickness was at a terminal stage.
At the end of November Chopin returned to Paris. He passed the winter in unremitting illness, but gave occasional lessons and was visited by friends, including Delacroix and Franchomme. Occasionally he played, or accompanied the singing of Delfina Potocka, for his friends. During the summer of 1849, his friends found him an apartment in Chaillot, out of the centre of the city, for which the rent was secretly subsidised by an admirer, Princess Yekaterina Dmitrievna Soutzos-Obreskova. He was visited here by Jenny Lind in June 1849.
With his health further deteriorating, Chopin desired to have a family member with him. In June 1849 his sister Ludwika came to Paris with her husband and daughter, and in September, supported by a loan from Jane Stirling, he took an apartment at the Hôtel Baudard de Saint-James on the Place Vendôme. After 15 October, when his condition took a marked turn for the worse, only a handful of his closest friends remained with him. Viardot remarked sardonically, though, that "all the grand Parisian ladies considered it de rigueur to faint in his room".
Some of his friends provided music at his request; among them, Potocka sang and Franchomme played the cello. Chopin bequeathed his unfinished notes on a piano tuition method, Projet de méthode, to Alkan for completion. On 17 October, after midnight, the physician leaned over him and asked whether he was suffering greatly. "No longer", he replied. He died a few minutes before two a.m. He was 39. Those present at the deathbed appear to have included his sister Ludwika, Fr. Aleksander Jełowicki, Princess Marcelina Czartoryska, Sand's daughter Solange, and his close friend Thomas Albrecht. Later that morning, Solange's husband Clésinger made Chopin's death mask and a cast of his left hand.
The funeral, held at the Church of the Madeleine in Paris, was delayed almost two weeks until 30 October. Entrance was restricted to ticket holders, as many people were expected to attend. Over 3,000 people arrived without invitations, from as far as London, Berlin and Vienna, and were excluded.
Mozart's Requiem was sung at the funeral; the soloists were the soprano Jeanne-Anaïs Castellan, the mezzo-soprano Pauline Viardot, the tenor Alexis Dupont, and the bass Luigi Lablache; Chopin's Preludes No. 4 in E minor and No. 6 in B minor were also played. The organist was Alfred Lefébure-Wély. The funeral procession to Père Lachaise Cemetery, which included Chopin's sister Ludwika, was led by the aged Prince Adam Czartoryski. The pallbearers included Delacroix, Franchomme, and Camille Pleyel. At the graveside, the Funeral March from Chopin's Piano Sonata No. 2 was played, in Reber's instrumentation.
Chopin's tombstone, featuring the muse of music, Euterpe, weeping over a broken lyre, was designed and sculpted by Clésinger and installed on the anniversary of his death in 1850. The expenses of the monument, amounting to 4,500 francs, were covered by Jane Stirling, who also paid for the return of the composer's sister Ludwika to Warsaw. As requested by Chopin, Ludwika took his heart (which had been removed by his doctor Jean Cruveilhier and preserved in alcohol in a vase) back to Poland in 1850. She also took a collection of 200 letters from Sand to Chopin; after 1851 these were returned to Sand, who destroyed them.
Chopin's disease and the cause of his death have been topics of debate. His death certificate gave the cause as tuberculosis, and his physician, Cruveilhier, was then the leading French authority on this disease. Other possibilities advanced have included cystic fibrosis, cirrhosis, and alpha 1-antitrypsin deficiency. A visual examination of Chopin's preserved heart (the jar was not opened), conducted in 2014 and first published in the American Journal of Medicine in 2017, suggested that the likely cause of his death was a rare case of pericarditis caused by complications of chronic tuberculosis.
Over 230 works of Chopin survive; some compositions from early childhood have been lost. All his known works involve the piano, and only a few range beyond solo piano music, as either piano concertos, songs or chamber music.
Chopin was educated in the tradition of Beethoven, Haydn, Mozart, and Clementi; he used Clementi's piano method with his students. He was also influenced by Hummel's development of virtuoso, yet Mozartian, piano technique. He cited Bach and Mozart as the two most important composers in shaping his musical outlook. Chopin's early works are in the style of the "brilliant" keyboard pieces of his era as exemplified by the works of Ignaz Moscheles, Friedrich Kalkbrenner, and others. Less direct in the earlier period are the influences of Polish folk music and of Italian opera. Much of what became his typical style of ornamentation (for example, his fioriture) is taken from singing. His melodic lines were increasingly reminiscent of the modes and features of the music of his native country, such as drones.
Chopin took the new salon genre of the nocturne, invented by the Irish composer John Field, to a deeper level of sophistication. He was the first to write ballades and scherzi as individual concert pieces. He essentially established a new genre with his own set of free-standing preludes (Op. 28, published 1839). He exploited the poetic potential of the concept of the concert étude, already being developed in the 1820s and 1830s by Liszt, Clementi, and Moscheles, in his two sets of studies (Op. 10 published in 1833, Op. 25 in 1837).
Chopin also endowed popular dance forms with a greater range of melody and expression. Chopin's mazurkas, while originating in the traditional Polish dance (the mazurek), differed from the traditional variety in that they were written for the concert hall rather than the dance hall; as J. Barrie Jones puts it, "it was Chopin who put the mazurka on the European musical map". The series of seven polonaises published in his lifetime (another nine were published posthumously), beginning with the Op. 26 pair (published 1836), set a new standard for music in the form. His waltzes were also written specifically for the salon recital rather than the ballroom and are frequently at rather faster tempos than their dance-floor equivalents.
Some of Chopin's well-known pieces have acquired descriptive titles, such as the Revolutionary Étude (Op. 10, No. 12), and the Minute Waltz (Op. 64, No. 1). However, except for his Funeral March, the composer never named an instrumental work beyond genre and number, leaving all potential extramusical associations to the listener; the names by which many of his pieces are known were invented by others. There is no evidence to suggest that the Revolutionary Étude was written with the failed Polish uprising against Russia in mind; it merely appeared at that time. The Funeral March, the third movement of his Sonata No. 2 (Op. 35), the one case where he did give a title, was written before the rest of the sonata, but no specific event or death is known to have inspired it.
The last opus number that Chopin himself used was 65, allocated to the Cello Sonata in G minor. He expressed a deathbed wish that all his unpublished manuscripts be destroyed. At the request of the composer's mother and sisters, however, his musical executor Julian Fontana selected 23 unpublished piano pieces and grouped them into eight further opus numbers (Opp. 66–73), published in 1855. In 1857, 17 Polish songs that Chopin wrote at various stages of his life were collected and published as Op. 74, though their order within the opus did not reflect the order of composition.
Works published since 1857 have received alternative catalogue designations instead of opus numbers. The most up-to-date catalogue is maintained by the Fryderyk Chopin Institute at its Internet Chopin Information Centre. The older Kobylańska Catalogue (usually represented by the initials 'KK'), named for its compiler, the Polish musicologist Krystyna Kobylańska, is still considered an important scholarly reference. The most recent catalogue of posthumously published works is that of the National Edition of the Works of Fryderyk Chopin, represented by the initials 'WN'.
Chopin's original publishers included Maurice Schlesinger and Camille Pleyel. His works soon began to appear in popular 19th-century piano anthologies. The first collected edition was by Breitkopf & Härtel (1878–1902). Among modern scholarly editions of Chopin's works are the version under the name of Paderewski, published between 1937 and 1966, and the more recent Polish National Edition, edited by Jan Ekier and published between 1967 and 2010. The latter is recommended to contestants of the Chopin Competition. Both editions contain detailed explanations and discussions regarding choices and sources.
Chopin published his music in France, England, and the German states (i.e. he worked with as many as three separate publishers for each piece or set of pieces) due to the copyright laws of the time. Thus there are often three different "first editions" of each work. Each edition is different from the others; Chopin edited them separately, and at times he did some revision to the music while editing it. Furthermore, Chopin provided his publishers with varying sources, including autographs, annotated proofsheets, and scribal copies. Only recently have these differences gained greater recognition.
Improvisation stands at the centre of Chopin's creative processes. However, this does not imply impulsive rambling: Nicholas Temperley writes that "improvisation is designed for an audience, and its starting-point is that audience's expectations, which include the current conventions of musical form". The works for piano and orchestra, including the two concertos, are held by Temperley to be "merely vehicles for brilliant piano playing ... formally longwinded and extremely conservative". After the piano concertos (which are both early, dating from 1830), Chopin made no attempts at large-scale multi-movement forms, save for his late sonatas for piano and cello; "instead he achieved near-perfection in pieces of simple general design but subtle and complex cell-structure". Rosen suggests that an important aspect of Chopin's individuality is his flexible handling of the four-bar phrase as a structural unit.
J. Barrie Jones suggests that "amongst the works that Chopin intended for concert use, the four ballades and four scherzi stand supreme", and adds that "the Barcarolle Op. 60 stands apart as an example of Chopin's rich harmonic palette coupled with an Italianate warmth of melody". Temperley opines that these works, which contain "immense variety of mood, thematic material and structural detail", are based on an extended "departure and return" form; "the more the middle section is extended, and the further it departs in key, mood and theme, from the opening idea, the more important and dramatic is the reprise when it at last comes".
Chopin's mazurkas and waltzes are all in straightforward ternary or episodic form, sometimes with a coda. The mazurkas often show more folk features than many of his other works, sometimes including modal scales and harmonies and the use of drone basses. However, some also show unusual sophistication, for example, Op. 63 No. 3, which includes a canon at one beat's distance, a great rarity in music.
Chopin's polonaises show a marked advance on those of his Polish predecessors in the form (who included his teachers Żywny and Elsner). As with the traditional polonaise, Chopin's works are in triple time and typically display a martial rhythm in their melodies, accompaniments, and cadences. Unlike most of their precursors, they also require a formidable playing technique.
His nocturnes are more structured, and of greater emotional depth, than those of Field, whom Chopin met in 1833. Many of the Chopin nocturnes have middle sections marked by agitated expression (and often making very difficult demands on the performer), which heightens their dramatic character.
Chopin's études are largely in straightforward ternary form. He used them to teach his own technique of piano playing – for instance playing double thirds (Op. 25, No. 6), playing in octaves (Op. 25, No. 10), and playing repeated notes (Op. 10, No. 7).
The preludes, many of which are very brief, were described by Schumann as "the beginnings of studies". Inspired by J. S. Bach's The Well-Tempered Clavier, Chopin's preludes move up the circle of fifths (rather than Bach's chromatic scale sequence) to create a prelude in each major and minor tonality. The preludes were perhaps not intended to be played as a group, and may even have been used by him and later pianists as generic preludes to others of his pieces, or even to music by other composers. This is suggested by Kenneth Hamilton, who has noted a 1922 recording by Ferruccio Busoni in which the Prelude Op. 28 No. 7 is followed by the Étude Op. 10 No. 5.
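The traversal just described follows a fixed pattern: major keys ascend by perfect fifths, and each major key is immediately paired with its relative minor. The short Python sketch below is offered purely as an illustrative aid and is not drawn from any Chopin source; it generates the resulting sequence of the 24 preludes, using the key spellings of the published Op. 28.

# Key scheme of the 24 Preludes, Op. 28: major keys ascend by
# perfect fifths, each immediately followed by its relative minor.
MAJORS = ["C", "G", "D", "A", "E", "B", "F#", "Db", "Ab", "Eb", "Bb", "F"]
MINORS = ["A", "E", "B", "F#", "C#", "G#", "Eb", "Bb", "F", "C", "G", "D"]

def op28_key_sequence():
    """Return the 24 keys in published order."""
    keys = []
    for major, minor in zip(MAJORS, MINORS):
        keys.append(f"{major} major")   # odd-numbered prelude
        keys.append(f"{minor} minor")   # even-numbered prelude
    return keys

for number, key in enumerate(op28_key_sequence(), start=1):
    print(f"No. {number}: {key}")

Running the sketch prints "No. 1: C major", "No. 2: A minor", and so on through "No. 24: D minor", matching the order in which Chopin arranged the set.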
The two mature Chopin piano sonatas (No. 2, Op. 35, written in 1839 and No. 3, Op. 58, written in 1844) are in four movements. In Op. 35, Chopin combined within a formal large musical structure many elements of his virtuosic piano technique – "a kind of dialogue between the public pianism of the brilliant style and the German sonata principle". This sonata has been considered as showing the influences of both Bach and Beethoven. The Prelude from Bach's Suite No. 6 in D major for cello (BWV 1012) is quoted; and there are references to two sonatas of Beethoven: the Sonata Opus 111, and the Sonata Opus 26, which, like Chopin's Op. 35, has a funeral march as its slow movement. The last movement of Chopin's Op. 35, a brief (75-bar) perpetuum mobile in which the hands play in unmodified octave unison throughout, was found shocking and unmusical by contemporaries, including Schumann. The Op. 58 sonata is closer to the German tradition, including many passages of complex counterpoint, "worthy of Brahms" according to Samson.
Chopin's harmonic innovations may have arisen partly from his keyboard improvisation technique. In his works, Temperley says, "novel harmonic effects often result from the combination of ordinary appoggiaturas or passing notes with melodic figures of accompaniment", and cadences are delayed by the use of chords outside the home key (Neapolitan sixths and diminished sevenths) or by sudden shifts to remote keys. Chord progressions sometimes anticipate the shifting tonality of later composers such as Claude Debussy, as does Chopin's use of modal harmony.
In 1841 Léon Escudier wrote of a recital given by Chopin that year, "One may say that Chopin is the creator of a school of piano and a school of composition. In truth, nothing equals the lightness, the sweetness with which the composer preludes on the piano; moreover nothing may be compared to his works full of originality, distinction and grace." Chopin refused to conform to a standard method of playing and believed that there was no set technique for playing well. His style was based extensively on his use of a very independent finger technique. In his Projet de méthode he wrote: "Everything is a matter of knowing good fingering ... we need no less to use the rest of the hand, the wrist, the forearm and the upper arm." He further stated: "One needs only to study a certain position of the hand in relation to the keys to obtain with ease the most beautiful quality of sound, to know how to play short notes and long notes, and [to attain] unlimited dexterity." The consequences of this approach to technique in Chopin's music include the frequent use of the entire range of the keyboard, passages in double octaves and other chord groupings, swiftly repeated notes, the use of grace notes, and the use of contrasting rhythms (four against three, for example) between the hands.
Jonathan Bellman writes that modern concert performance style – set in the "conservatory" tradition of late 19th- and 20th-century music schools, and suitable for large auditoria or recordings – militates against what is known of Chopin's more intimate performance technique. The composer himself said to a pupil that "concerts are never real music, you have to give up the idea of hearing in them all the most beautiful things of art". Contemporary accounts indicate that in performance, Chopin avoided rigid procedures sometimes incorrectly attributed to him, such as "always crescendo to a high note", but that he was concerned with expressive phrasing, rhythmic consistency and sensitive colouring. Berlioz wrote in 1853 that Chopin "has created a kind of chromatic embroidery ... whose effect is so strange and piquant as to be impossible to describe ... virtually nobody but Chopin himself can play this music and give it this unusual turn". Hiller wrote that "What in the hands of others was elegant embellishment, in his hands became a colourful wreath of flowers."
Chopin's music is frequently played with rubato, "the practice in performance of disregarding strict time, 'robbing' some note-values for expressive effect". There are differing opinions as to how much, and what type, of rubato is appropriate for his works. Charles Rosen comments that "most of the written-out indications of rubato in Chopin are to be found in his mazurkas ... It is probable that Chopin used the older form of rubato so important to Mozart ... [where] the melody note in the right hand is delayed until after the note in the bass ... An allied form of this rubato is the arpeggiation of the chords thereby delaying the melody note; according to Chopin's pupil Karol Mikuli, Chopin was firmly opposed to this practice."
Chopin's pupil Friederike Müller wrote:
[His] playing was always noble and beautiful; his tones sang, whether in full forte or softest piano. He took infinite pains to teach his pupils this legato, cantabile style of playing. His most severe criticism was 'He – or she – does not know how to join two notes together.' He also demanded the strictest adherence to rhythm. He hated all lingering and dragging, misplaced rubatos, as well as exaggerated ritardandos [...] and it is precisely in this respect that people make such terrible errors in playing his works.
When living in Warsaw, Chopin composed and played on an instrument built by the piano-maker Fryderyk Buchholtz. Later in Paris Chopin purchased a piano from Pleyel. He rated Pleyel's pianos as "non plus ultra" ("nothing better"). Franz Liszt befriended Chopin in Paris and described the sound of Chopin's Pleyel as being "the marriage of crystal and water". While in London in 1848, Chopin mentioned his pianos in his letters: "I have a large drawing-room with three pianos, a Pleyel, a Broadwood and an Erard."
With his mazurkas and polonaises, Chopin has been credited with introducing to music a new sense of nationalism. Schumann, in his 1836 review of the piano concertos, highlighted the composer's strong feelings for his native Poland, writing:
Now that the Poles are in deep mourning [after the failure of the November Uprising of 1830], their appeal to us artists is even stronger ... If the mighty autocrat in the north [i.e. Nicholas I of Russia] could know that in Chopin's works, in the simple strains of his mazurkas, there lurks a dangerous enemy, he would place a ban on his music. Chopin's works are cannon buried in flowers!
The biography of Chopin published in 1863 under the name of Franz Liszt (but probably written by Carolyne zu Sayn-Wittgenstein) states that Chopin "must be ranked first among the first musicians ... individualizing in themselves the poetic sense of an entire nation".
Some modern commentators have argued against exaggerating Chopin's primacy as a "nationalist" or "patriotic" composer. George Golos refers to earlier "nationalist" composers in Central Europe, including Poland's Michał Kleofas Ogiński and Franciszek Lessel, who utilised polonaise and mazurka forms. Barbara Milewski suggests that Chopin's experience of Polish music came more from "urbanised" Warsaw versions than from folk music, and that attempts by Jachimecki and others to demonstrate genuine folk music in his works are without basis. Richard Taruskin impugns Schumann's attitude toward Chopin's works as patronising, and comments that Chopin "felt his Polish patriotism deeply and sincerely" but consciously modelled his works on the tradition of Bach, Beethoven, Schubert, and Field.
A reconciliation of these views is suggested by William Atwood:
Undoubtedly [Chopin's] use of traditional musical forms like the polonaise and mazurka roused nationalistic sentiments and a sense of cohesiveness amongst those Poles scattered across Europe and the New World ... While some sought solace in [them], others found them a source of strength in their continuing struggle for freedom. Although Chopin's music undoubtedly came to him intuitively rather than through any conscious patriotic design, it served all the same to symbolize the will of the Polish people ...
Jones comments that "Chopin's unique position as a composer, despite the fact that virtually everything he wrote was for the piano, has rarely been questioned." He also notes that Chopin was fortunate to arrive in Paris in 1831 – "the artistic environment, the publishers who were willing to print his music, the wealthy and aristocratic who paid what Chopin asked for their lessons" – and these factors, as well as his musical genius, also fuelled his contemporary and later reputation. While his illness and his love affairs conform to some of the stereotypes of romanticism, the rarity of his public recitals (as opposed to performances at fashionable Paris soirées) led Arthur Hutchings to suggest that "his lack of Byronic flamboyance [and] his aristocratic reclusiveness make him exceptional" among his romantic contemporaries such as Liszt and Henri Herz.
Chopin's qualities as a pianist and composer were recognised by many of his fellow musicians. Schumann named a piece for him in his suite Carnaval, and Chopin later dedicated his Ballade No. 2 in F major to Schumann. Elements of Chopin's music can be found in many of Liszt's later works. Liszt later transcribed for piano six of Chopin's Polish songs. A less fraught friendship was with Alkan, with whom he discussed elements of folk music, and who was deeply affected by Chopin's death.
In Paris, Chopin had a number of pupils, including Friederike Müller, who left memoirs of his teaching, and the prodigy Carl Filtsch, to whom both Chopin and Sand became devoted, Chopin giving him three lessons a week; Filtsch was the only pupil to whom Chopin gave lessons in composition, and, exceptionally, Chopin on several occasions shared a concert platform with him. Two of Chopin's long-standing pupils, Karol Mikuli and Georges Mathias, were themselves piano teachers and passed on details of his playing to their students, some of whom (such as Raoul Koczalski) were to make recordings of his music. Other pianists and composers influenced by Chopin's style include Louis Moreau Gottschalk, Édouard Wolff, and Pierre Zimmermann. Debussy dedicated his own 1915 piano Études to the memory of Chopin; he frequently played Chopin's music during his studies at the Paris Conservatoire, and undertook the editing of Chopin's piano music for the publisher Jacques Durand.
Polish composers of the following generation included virtuosi such as Moritz Moszkowski; but, in the opinion of J. Barrie Jones, his "one worthy successor" among his compatriots was Karol Szymanowski. Edvard Grieg, Antonín Dvořák, Isaac Albéniz, Pyotr Ilyich Tchaikovsky, and Sergei Rachmaninoff, among others, are regarded by critics as having been influenced by Chopin's use of national modes and idioms. Alexander Scriabin was devoted to the music of Chopin, and his early published works include nineteen mazurkas as well as numerous études and preludes; his teacher Nikolai Zverev drilled him in Chopin's works to improve his virtuosity as a performer. In the 20th century, composers who paid homage to (or in some cases parodied) the music of Chopin included George Crumb, Leopold Godowsky, Bohuslav Martinů, Darius Milhaud, Igor Stravinsky, and Heitor Villa-Lobos.
Chopin's music was used in the 1909 ballet Chopiniana, choreographed by Michel Fokine and orchestrated by Alexander Glazunov. Sergei Diaghilev commissioned additional orchestrations – from Stravinsky, Anatoly Lyadov, Sergei Taneyev, and Nikolai Tcherepnin – for later productions, which used the title Les Sylphides. Other noted composers have created orchestrations for the ballet, including Benjamin Britten, Roy Douglas, Alexander Gretchaninov, Gordon Jacob, and Maurice Ravel, whose score is lost.
Musicologist Erinn Knyt writes: "In the nineteenth century Chopin and his music were commonly viewed as effeminate, androgynous, childish, sickly, and 'ethnically other.'" Music historian Jeffrey Kallberg says that in Chopin's time, "listeners to the genre of the piano nocturne often couched their reactions in feminine imagery", and he cites many examples of such reactions to Chopin's nocturnes. One reason for this may be "demographic" – there were more female than male piano players, and playing such "romantic" pieces was seen by male critics as a female domestic pastime. Such genderization was not commonly applied to other genres among Chopin's works, such as the scherzo or the polonaise. The cultural historian Edward Said has cited the demonstrations by pianist and writer Charles Rosen, in the latter's book The Romantic Generation, of Chopin's skills in "planning, polyphony, and sheer harmonic creativity", as effectively overthrowing any legend of Chopin "as a swooning, 'inspired', small-scale salon composer".
Chopin's music remains very popular and is regularly performed, recorded and broadcast worldwide. The world's oldest monographic music competition, the International Chopin Piano Competition, founded in 1927, is held every five years in Warsaw. The Fryderyk Chopin Institute of Poland lists on its website over eighty societies worldwide devoted to the composer and his music. The Institute site also lists over 1500 performances of Chopin works on YouTube as of March 2021.
The British Library notes that "Chopin's works have been recorded by all the great pianists of the recording era." The earliest recording was an 1895 performance by Paul Pabst of the Nocturne in E major, Op. 62, No. 2. The British Library site makes available a number of historic recordings, including some by Alfred Cortot, Ignaz Friedman, Vladimir Horowitz, Benno Moiseiwitsch, Ignacy Jan Paderewski, Arthur Rubinstein, Xaver Scharwenka, Josef Hofmann, Vladimir de Pachmann, Moriz Rosenthal and many others. A select discography of recordings of Chopin works by pianists representing the various pedagogic traditions stemming from Chopin is given by James Methuen-Campbell in his work tracing the lineage and character of those traditions.
Numerous recordings of Chopin's works are available. On the occasion of the composer's bicentenary, the critics of The New York Times recommended performances by the following contemporary pianists (among many others): Yundi Li, Seong-Jin Cho, Martha Argerich, Vladimir Ashkenazy, Emanuel Ax, Evgeny Kissin, Ivan Moravec, Murray Perahia, Maurizio Pollini, and Krystian Zimerman. The Warsaw Chopin Society organises the Grand prix du disque de F. Chopin, awarded every five years for notable Chopin recordings.
Chopin has figured extensively in Polish literature, both in serious critical studies of his life and music and in fictional treatments. The earliest manifestation was probably an 1830 sonnet on Chopin by Leon Ulrich. French writers on Chopin (apart from Sand) have included Marcel Proust and André Gide, and he has also featured in works of Gottfried Benn and Boris Pasternak. There are numerous biographies of Chopin in English (see bibliography for some of these).
Possibly the first venture into fictional treatments of Chopin's life was a fanciful operatic version of some of its events, Chopin, first produced in Milan in 1901; its music, based on Chopin's own, was assembled by Giacomo Orefice, with a libretto by Angiolo Orvieto.
Playwright, pianist, and actor Hershey Felder wrote and performs Monsieur Chopin, a one-man, one-act musical play.
Chopin's life and romantic tribulations have been fictionalised in numerous films. As early as 1919, Chopin's relationships with three women – the sweetheart of his youth, Mariolka, then the Polish singer Sonja Radkowska, and later George Sand – were portrayed in the German silent film Nocturno der Liebe, with Chopin's music serving as a backdrop. The 1945 biographical film A Song to Remember earned Cornel Wilde an Academy Award nomination as Best Actor for his portrayal of the composer. Other film treatments have included La valse de l'adieu (1928) by Henry Roussel, with Pierre Blanchar as Chopin; Impromptu (1991), starring Hugh Grant as Chopin; La note bleue (1991); and Chopin: Desire for Love (2002).
Chopin's life was covered in a 1999 BBC Omnibus documentary by András Schiff and Mischa Scorer, in a 2010 documentary realised by Angelo Bozzolini and Roberto Prosseda for Italian television, and in a BBC Four documentary Chopin – The Women Behind The Music (2010).
| [
{
"paragraph_id": 0,
"text": "Frédéric François Chopin (born Fryderyk Franciszek Chopin; 1 March 1810 – 17 October 1849) was a Polish composer and virtuoso pianist of the Romantic period, who wrote primarily for solo piano. He has maintained worldwide renown as a leading musician of his era, one whose \"poetic genius was based on a professional technique that was without equal in his generation\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chopin was born in Żelazowa Wola and grew up in Warsaw, which in 1815 became part of Congress Poland. A child prodigy, he completed his musical education and composed his earlier works in Warsaw before leaving Poland at the age of 20, less than a month before the outbreak of the November 1830 Uprising. At 21, he settled in Paris. Thereafter he gave only 30 public performances, preferring the more intimate atmosphere of the salon. He supported himself by selling his compositions and by giving piano lessons, for which he was in high demand. Chopin formed a friendship with Franz Liszt and was admired by many of his musical contemporaries, including Robert Schumann. After a failed engagement to Maria Wodzińska from 1836 to 1837, he maintained an often troubled relationship with the French writer Aurore Dupin (known by her pen name George Sand). A brief and unhappy visit to Mallorca with Sand in 1838–39 would prove one of his most productive periods of composition. In his final years, he was supported financially by his admirer Jane Stirling. For most of his life, Chopin was in poor health. He died in Paris in 1849 at the age of 39.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Chopin's compositions are mostly for solo piano, though he also wrote two piano concertos, some chamber music, and 19 songs set to Polish lyrics. His piano pieces are technically demanding and expanded the limits of the instrument; his own performances were noted for their nuance and sensitivity. Chopin's major piano works include mazurkas, waltzes, nocturnes, polonaises, the instrumental ballade (which Chopin created as an instrumental genre), études, impromptus, scherzi, preludes, and sonatas, some published only posthumously. Among the influences on his style of composition were Polish folk music, the classical tradition of J. S. Bach, Mozart, and Schubert, and the atmosphere of the Paris salons, of which he was a frequent guest. His innovations in style, harmony, and musical form, and his association of music with nationalism, were influential throughout and after the late Romantic period.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Chopin's music, his status as one of music's earliest celebrities, his indirect association with political insurrection, his high-profile love life, and his early death have made him a leading symbol of the Romantic era. His works remain popular, and he has been the subject of numerous films and biographies of varying historical fidelity. Among his many memorials is the Fryderyk Chopin Institute, which was created by the Parliament of Poland to research and promote his life and works. It hosts the International Chopin Piano Competition, a prestigious competition devoted entirely to his works.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Frédéric Chopin was born in Żelazowa Wola, 46 kilometres (29 miles) west of Warsaw, in what was then the Duchy of Warsaw, a Polish state established by Napoleon. The parish baptismal record, which is dated 23 April 1810, gives his birthday as 22 February 1810, and cites his given names in the Latin form Fridericus Franciscus (in Polish, he was Fryderyk Franciszek). The composer and his family used the birthdate 1 March, which is now generally accepted as the correct date.",
"title": "Life"
},
{
"paragraph_id": 5,
"text": "His father, Nicolas Chopin, was a Frenchman from Lorraine who had emigrated to Poland in 1787 at the age of sixteen. He married Justyna Krzyżanowska, a poor relative of the Skarbeks, one of the families for whom he worked. Chopin was baptised in the same church where his parents had married, in Brochów. His eighteen-year-old godfather, for whom he was named, was Fryderyk Skarbek, a pupil of Nicolas Chopin. Chopin was the second child of Nicholas and Justyna and their only son; he had an elder sister, Ludwika, and two younger sisters, Izabela and Emilia, whose death at the age of 14 was probably from tuberculosis. Nicolas Chopin was devoted to his adopted homeland, and insisted on the use of the Polish language in the household.",
"title": "Life"
},
{
"paragraph_id": 6,
"text": "In October 1810, six months after Chopin's birth, the family moved to Warsaw, where his father acquired a post teaching French at the Warsaw Lyceum, then housed in the Saxon Palace. Chopin lived with his family on the Palace grounds. The father played the flute and violin; the mother played the piano and gave lessons to boys in the boarding house that the Chopins kept. Chopin was of slight build, and even in early childhood was prone to illnesses.",
"title": "Life"
},
{
"paragraph_id": 7,
"text": "Chopin may have had some piano instruction from his mother, but his first professional music tutor, from 1816 to 1821, was the Czech pianist Wojciech Żywny. His elder sister Ludwika also took lessons from Żywny, and occasionally played duets with her brother. It quickly became apparent that he was a child prodigy. By the age of seven he had begun giving public concerts, and in 1817 he composed two polonaises, in G minor and B-flat major. His next work, a polonaise in A-flat major of 1821, dedicated to Żywny, is his earliest surviving musical manuscript.",
"title": "Life"
},
{
"paragraph_id": 8,
"text": "In 1817 the Saxon Palace was requisitioned by Warsaw's Russian governor for military use, and the Warsaw Lyceum was reestablished in the Kazimierz Palace (today the rectorate of Warsaw University). Chopin and his family moved to a building, which still survives, adjacent to the Kazimierz Palace. During this period, he was sometimes invited to the Belweder Palace as playmate to the son of the ruler of Russian Poland, Grand Duke Konstantin Pavlovich of Russia; he played the piano for Konstantin Pavlovich and composed a march for him. Julian Ursyn Niemcewicz, in his dramatic eclogue, \"Nasze Przebiegi\" (\"Our Discourses\", 1818), attested to \"little Chopin's\" popularity.",
"title": "Life"
},
{
"paragraph_id": 9,
"text": "From September 1823 to 1826, Chopin attended the Warsaw Lyceum, where he received organ lessons from the Czech musician Wilhelm Würfel during his first year. In the autumn of 1826 he began a three-year course under the Silesian composer Józef Elsner at the Warsaw Conservatory, studying music theory, figured bass, and composition. Throughout this period he continued to compose and to give recitals in concerts and salons in Warsaw. He was engaged by the inventors of the \"aeolomelodicon\" (a combination of piano and mechanical organ), and on this instrument in May 1825 he performed his own improvisation and part of a concerto by Moscheles. The success of this concert led to an invitation to give a recital on a similar instrument (the \"aeolopantaleon\") before Tsar Alexander I, who was visiting Warsaw; the Tsar presented him with a diamond ring. At a subsequent aeolopantaleon concert on 10 June 1825, Chopin performed his Rondo Op. 1. This was the first of his works to be commercially published and earned him his first mention in the foreign press, when the Leipzig Allgemeine Musikalische Zeitung praised his \"wealth of musical ideas\".",
"title": "Life"
},
{
"paragraph_id": 10,
"text": "From 1824 until 1828 Chopin spent his vacations away from Warsaw, at a number of locations. In 1824 and 1825, at Szafarnia, he was a guest of Dominik Dziewanowski, the father of a schoolmate. Here, for the first time, he encountered Polish rural folk music. His letters home from Szafarnia (to which he gave the title \"The Szafarnia Courier\"), written in a very modern and lively Polish, amused his family with their spoofing of the Warsaw newspapers and demonstrated the youngster's literary gift.",
"title": "Life"
},
{
"paragraph_id": 11,
"text": "In 1827, soon after the death of Chopin's youngest sister Emilia, the family moved from the Warsaw University building, adjacent to the Kazimierz Palace, to lodgings just across the street from the university, in the south annex of the Krasiński Palace on Krakowskie Przedmieście, where Chopin lived until he left Warsaw in 1830. Here his parents continued running their boarding house for male students. Four boarders at his parents' apartments became Chopin's intimates: Tytus Woyciechowski, Jan Nepomucen Białobłocki, Jan Matuszyński, and Julian Fontana. The latter two would become part of his Paris milieu.",
"title": "Life"
},
{
"paragraph_id": 12,
"text": "Chopin was friendly with members of Warsaw's young artistic and intellectual world, including Fontana, Józef Bohdan Zaleski, and Stefan Witwicki. Chopin's final Conservatory report (July 1829) read: \"Chopin F., third-year student, exceptional talent, musical genius.\" In 1829 the artist Ambroży Mieroszewski executed a set of portraits of Chopin family members, including the first known portrait of the composer.",
"title": "Life"
},
{
"paragraph_id": 13,
"text": "Letters from Chopin to Woyciechowski in the period 1829–30 (when Chopin was about twenty) contain apparent homoerotic references to dreams and to offered embraces.",
"title": "Life"
},
{
"paragraph_id": 14,
"text": "Now I am going to wash myself. Please do not embrace me as I have not washed yet. And you? Even if I were to anoint myself with fragrant oils from Byzantium, you would not embrace me – not unless forced to by magnetism. But there are forces in Nature! Today you will dream that you are embracing me! You have to pay for the nightmare you caused me last night.",
"title": "Life"
},
{
"paragraph_id": 15,
"text": "According to Adam Zamoyski, such expressions \"were, and to some extent still are, common currency in Polish and carry no greater implication than the 'love'\" concluding letters today. \"The spirit of the times, pervaded by the Romantic movement in art and literature, favoured extreme expression of feeling ... Whilst the possibility cannot be ruled out entirely, it is unlikely that the two were ever lovers.\" Chopin's biographer Alan Walker considers that, insofar as such expressions could be perceived as homosexual in nature, they would not denote more than a passing phase in Chopin's life, or be the result – in Walker's words – of a \"mental twist\". The musicologist Jeffrey Kallberg notes that concepts of sexual practice and identity were very different in Chopin's time, so modern interpretation is problematic. Other writers believe that these are clear, or potential, demonstrations of homosexual impulses on Chopin's part.",
"title": "Life"
},
{
"paragraph_id": 16,
"text": "Probably in early 1829 Chopin met the singer Konstancja Gładkowska and developed an intense affection for her, although it is not clear that he ever addressed her directly on the matter. In a letter to Woyciechowski of 3 October 1829 he refers to his \"ideal, whom I have served faithfully for six months, though without ever saying a word to her about my feelings; whom I dream of, who inspired the Adagio of my Concerto\". All of Chopin's biographers, following the lead of Frederick Niecks, agree that this \"ideal\" was Gładkowska. After what would be Chopin's farewell concert in Warsaw in October 1830, which included the concerto, played by the composer, and Gładkowska singing an aria by Gioachino Rossini, the two exchanged rings, and two weeks later she wrote in his album some affectionate lines bidding him farewell. After Chopin left Warsaw, he and Gładkowska did not meet and apparently did not correspond.",
"title": "Life"
},
{
"paragraph_id": 17,
"text": "In September 1828 Chopin, while still a student, visited Berlin with a family friend, zoologist Feliks Jarocki, enjoying operas directed by Gaspare Spontini and attending concerts by Carl Friedrich Zelter, Felix Mendelssohn, and other celebrities. On an 1829 return trip to Berlin, he was a guest of Prince Antoni Radziwiłł, governor of the Grand Duchy of Posen – himself an accomplished composer and aspiring cellist. For the prince and his pianist daughter Wanda, he composed his Introduction and Polonaise brillante in C major for cello and piano, Op. 3.",
"title": "Life"
},
{
"paragraph_id": 18,
"text": "Back in Warsaw that year, Chopin heard Niccolò Paganini play the violin, and composed a set of variations, Souvenir de Paganini. It may have been this experience that encouraged him to commence writing his first Études (1829–32), exploring the capacities of his own instrument. After completing his studies at the Warsaw Conservatory, he made his debut in Vienna. He gave two piano concerts and received many favourable reviews – in addition to some commenting (in Chopin's own words) that he was \"too delicate for those accustomed to the piano-bashing of local artists\". In the first of these concerts, he premiered his Variations on \"Là ci darem la mano\", Op. 2 (variations on a duet from Mozart's opera Don Giovanni) for piano and orchestra. He returned to Warsaw in September 1829, where he premiered his Piano Concerto No. 2 in F minor, Op. 21 on 17 March 1830.",
"title": "Life"
},
{
"paragraph_id": 19,
"text": "Chopin's successes as a composer and performer opened the door to western Europe for him, and on 2 November 1830, he set out, in the words of Zdzisław Jachimecki, \"into the wide world, with no very clearly defined aim, forever\". With Woyciechowski, he headed for Austria again, intending to go on to Italy. Later that month, in Warsaw, the November 1830 Uprising broke out, and Woyciechowski returned to Poland to enlist. Chopin, now alone in Vienna, was nostalgic for his homeland, and wrote to a friend, \"I curse the moment of my departure.\" When in September 1831 he learned, while travelling from Vienna to Paris, that the uprising had been crushed, he expressed his anguish in the pages of his private journal: \"Oh God! ... You are there, and yet you do not take vengeance!\". Jachimecki ascribes to these events the composer's maturing \"into an inspired national bard who intuited the past, present and future of his native Poland\".",
"title": "Life"
},
{
"paragraph_id": 20,
"text": "When he left Warsaw on 2 November 1830, Chopin had intended to go to Italy, but violent unrest there made that a dangerous destination. His next choice was Paris; difficulties obtaining a visa from Russian authorities resulted in his obtaining transit permission from the French. In later years he would quote the passport's endorsement \"Passeport en passant par Paris à Londres\" (\"In transit to London via Paris\"), joking that he was in the city \"only in passing\". Chopin arrived in Paris on 5 October 1831; he would never return to Poland, thus becoming one of many expatriates of the Polish Great Emigration. In France, he used the French versions of his given names, and after receiving French citizenship in 1835, he travelled on a French passport. However, Chopin remained close to his fellow Poles in exile as friends and confidants and he never felt fully comfortable speaking French. Chopin's biographer Adam Zamoyski writes that he never considered himself to be French, despite his father's French origins, and always saw himself as a Pole.",
"title": "Life"
},
{
"paragraph_id": 21,
"text": "In Paris, Chopin encountered artists and other distinguished figures and found many opportunities to exercise his talents and achieve celebrity. During his years in Paris, he was to become acquainted with, among many others, Hector Berlioz, Franz Liszt, Ferdinand Hiller, Heinrich Heine, Eugène Delacroix, Alfred de Vigny, and Friedrich Kalkbrenner, who introduced him to the piano manufacturer Camille Pleyel. This was the beginning of a long and close association between the composer and Pleyel's instruments. Chopin was also acquainted with the poet Adam Mickiewicz, principal of the Polish Literary Society, some of whose verses he set as songs. He also was more than once guest of Marquis Astolphe de Custine, one of his fervent admirers, playing his works in Custine's salon.",
"title": "Life"
},
{
"paragraph_id": 22,
"text": "Two Polish friends in Paris were also to play important roles in Chopin's life there. A fellow student at the Warsaw Conservatory, Julian Fontana, had originally tried unsuccessfully to establish himself in England; Fontana was to become, in the words of the music historian Jim Samson, Chopin's \"general factotum and copyist\". Albert Grzymała, who in Paris became a wealthy financier and society figure, often acted as Chopin's adviser and, in Zamoyski's words, \"gradually began to fill the role of elder brother in [his] life\".",
"title": "Life"
},
{
"paragraph_id": 23,
"text": "On 7 December 1831, Chopin received the first major endorsement from an outstanding contemporary when Robert Schumann, reviewing the Op. 2 Variations in the Allgemeine musikalische Zeitung (his first published article on music), declared: \"Hats off, gentlemen! A genius.\" On 25 February 1832 Chopin gave a debut Paris concert in the \"salons de MM Pleyel\" at 9 rue Cadet, which drew universal admiration. The critic François-Joseph Fétis wrote in the Revue et gazette musicale: \"Here is a young man who ... taking no model, has found, if not a complete renewal of piano music, ... an abundance of original ideas of a kind to be found nowhere else ...\" After this concert, Chopin realised that his essentially intimate keyboard technique was not optimal for large concert spaces. Later that year he was introduced to the wealthy Rothschild banking family, whose patronage also opened doors for him to other private salons (social gatherings of the aristocracy and artistic and literary elite). By the end of 1832 Chopin had established himself among the Parisian musical elite and had earned the respect of his peers such as Hiller, Liszt, and Berlioz. He no longer depended financially upon his father, and in the winter of 1832, he began earning a handsome income from publishing his works and teaching piano to affluent students from all over Europe. This freed him from the strains of public concert-giving, which he disliked.",
"title": "Life"
},
{
"paragraph_id": 24,
"text": "Chopin seldom performed publicly in Paris. In later years he generally gave a single annual concert at the Salle Pleyel, a venue that seated three hundred. He played more frequently at salons but preferred playing at his own Paris apartment for small groups of friends. The musicologist Arthur Hedley has observed that \"As a pianist Chopin was unique in acquiring a reputation of the highest order on the basis of a minimum of public appearances – few more than thirty in the course of his lifetime.\" The list of musicians who took part in some of his concerts indicates the richness of Parisian artistic life during this period. Examples include a concert on 23 March 1833, in which Chopin, Liszt, and Hiller performed (on pianos) a concerto by J. S. Bach for three keyboards; and, on 3 March 1838, a concert in which Chopin, his pupil Adolphe Gutmann, Charles-Valentin Alkan, and Alkan's teacher Joseph Zimmermann performed Alkan's arrangement, for eight hands, of two movements from Beethoven's 7th symphony. Chopin was also involved in the composition of Liszt's Hexameron; he wrote the sixth (and final) variation on Bellini's theme. Chopin's music soon found success with publishers, and in 1833 he contracted with Maurice Schlesinger, who arranged for it to be published not only in France but, through his family connections, also in Germany and England.",
"title": "Life"
},
{
"paragraph_id": 25,
"text": "In the spring of 1834, Chopin attended the Lower Rhenish Music Festival in Aix-la-Chapelle with Hiller, and it was there that Chopin met Felix Mendelssohn. After the festival, the three visited Düsseldorf, where Mendelssohn had been appointed musical director. They spent what Mendelssohn described as \"a very agreeable day\", playing and discussing music at his piano, and met Friedrich Wilhelm Schadow, director of the Academy of Art, and some of his eminent pupils such as Lessing, Bendemann, Hildebrandt and Sohn. In 1835 Chopin went to Carlsbad, where he spent time with his parents; it was the last time he would see them. On his way back to Paris, he met old friends from Warsaw, the Wodzińskis, their sons, and their daughters, amongst which Maria, whom he occasionally had given piano lessons in Poland. This meeting prompted him to stay for two weeks in Dresden, when he had previously intended to return to Paris via Leipzig. The sixteen-year-old girl's portrait of the composer has been considered, along with Delacroix's, as among the best likenesses of Chopin. In October he finally reached Leipzig, where he met Schumann, Clara Wieck, and Mendelssohn, who organised for him a performance of his own oratorio St. Paul, and who considered him \"a perfect musician\". In July 1836 Chopin travelled to Marienbad and Dresden to be with the Wodziński family, and in September he proposed to Maria, whose mother Countess Wodzińska approved in principle. Chopin went on to Leipzig, where he presented Schumann with his G minor Ballade. At the end of 1836, he sent Maria an album in which his sister Ludwika had inscribed seven of his songs, and his 1835 Nocturne in C-sharp minor, Op. 27, No. 1. The anodyne thanks he received from Maria proved to be the last letter he was to have from her. Chopin placed the letters he had received from Maria and her mother into a large envelope, wrote on it the words \"My sorrow\" (\"Moja bieda\"), and to the end of his life retained in a desk drawer this keepsake of the second love of his life.",
"title": "Life"
},
{
"paragraph_id": 26,
"text": "Although it is not known exactly when Chopin first met Franz Liszt after arriving in Paris, on 12 December 1831 he mentioned in a letter to his friend Woyciechowski that \"I have met Rossini, Cherubini, Baillot, etc. – also Kalkbrenner. You would not believe how curious I was about Herz, Liszt, Hiller, etc.\" Liszt was in attendance at Chopin's Parisian debut on 26 February 1832 at the Salle Pleyel, which led him to remark: \"The most vigorous applause seemed not to suffice to our enthusiasm in the presence of this talented musician, who revealed a new phase of poetic sentiment combined with such happy innovation in the form of his art.\"",
"title": "Life"
},
{
"paragraph_id": 27,
"text": "The two became friends, and for many years lived close to each other in Paris, Chopin at 38 Rue de la Chaussée-d'Antin, and Liszt at the Hôtel de France on the Rue Laffitte, a few blocks away. They performed together on seven occasions between 1833 and 1841. The first, on 2 April 1833, was at a benefit concert organised by Hector Berlioz for his bankrupt Shakespearean actress wife Harriet Smithson, during which they played George Onslow's Sonata in F minor for piano duet. Later joint appearances included a benefit concert for the Benevolent Association of Polish Ladies in Paris. Their last appearance together in public was for a charity concert conducted for the Beethoven Monument in Bonn, held at the Salle Pleyel and the Paris Conservatory on 25 and 26 April 1841.",
"title": "Life"
},
{
"paragraph_id": 28,
"text": "Although the two displayed great respect and admiration for each other, their friendship was uneasy and had some qualities of a love–hate relationship. Harold C. Schonberg believes that Chopin displayed a \"tinge of jealousy and spite\" towards Liszt's virtuosity on the piano, and others have also argued that he had become enchanted with Liszt's theatricality, showmanship, and success. Liszt was the dedicatee of Chopin's Op. 10 Études, and his performance of them prompted the composer to write to Hiller, \"I should like to rob him of the way he plays my studies.\" However, Chopin expressed annoyance in 1843 when Liszt performed one of his nocturnes with the addition of numerous intricate embellishments, at which Chopin remarked that he should play the music as written or not play it at all, forcing an apology. Most biographers of Chopin state that after this the two had little to do with each other, although in his letters dated as late as 1848 he still referred to him as \"my friend Liszt\". Some commentators point to events in the two men's romantic lives which led to a rift between them; there are claims that Liszt had displayed jealousy of his mistress Marie d'Agoult's obsession with Chopin, while others believe that Chopin had become concerned about Liszt's growing relationship with George Sand.",
"title": "Life"
},
{
"paragraph_id": 29,
"text": "In 1836, at a party hosted by Marie d'Agoult, Chopin met the French author George Sand (born [Amantine] Aurore [Lucile] Dupin). Short (under five feet, or 152 cm), dark, big-eyed and a cigar smoker, she initially repelled Chopin, who remarked, \"What an unattractive person la Sand is. Is she really a woman?\" However, by early 1837 Maria Wodzińska's mother had made it clear to Chopin in correspondence that a marriage with her daughter was unlikely to proceed. It is thought that she was influenced by his poor health and possibly also by rumours about his associations with women such as d'Agoult and Sand. Chopin finally placed the letters from Maria and her mother in a package on which he wrote, in Polish, \"My Sorrow\". Sand, in a letter to Grzymała of June 1838, admitted strong feelings for the composer and debated whether to abandon a current affair to begin a relationship with Chopin; she asked Grzymała to assess Chopin's relationship with Maria Wodzińska, without realising that the affair, at least from Maria's side, was over.",
"title": "Life"
},
{
"paragraph_id": 30,
"text": "In June 1837 Chopin visited London incognito in the company of the piano manufacturer Camille Pleyel, where he played at a musical soirée at the house of English piano maker James Broadwood. On his return to Paris his association with Sand began in earnest, and by the end of June 1838 they had become lovers. Sand, who was six years older than the composer and had had a series of lovers, wrote at this time: \"I must say I was confused and amazed at the effect this little creature had on me ... I have still not recovered from my astonishment, and if I were a proud person I should be feeling humiliated at having been carried away ...\" The two spent a miserable winter on Majorca (8 November 1838 to 13 February 1839), where, together with Sand's two children, they had journeyed in the hope of improving Chopin's health and that of Sand's 15-year-old son Maurice, and also to escape the threats of Sand's former lover Félicien Mallefille. After discovering that the couple were not married, the deeply traditional Catholic people of Majorca became inhospitable, making accommodation difficult to find. This compelled the group to take lodgings in a former Carthusian monastery in Valldemossa, which gave little shelter from the cold winter weather.",
"title": "Life"
},
{
"paragraph_id": 31,
"text": "On 3 December 1838, Chopin complained about his bad health and the incompetence of the doctors in Majorca, commenting: \"Three doctors have visited me ... The first said I was dead; the second said I was dying; and the third said I was about to die.\" He also had problems having his Pleyel piano sent to him, having to rely in the meantime on a piano made in Palma by Juan Bauza. The Pleyel piano finally arrived from Paris in December, just shortly before Chopin and Sand left the island. Chopin wrote to Pleyel in January 1839: \"I am sending you my Preludes [Op. 28]. I finished them on your little piano, which arrived in the best possible condition in spite of the sea, the bad weather and the Palma customs.\" Chopin was also able to undertake work while in Majorca on his Ballade No. 2, Op. 38; on two Polonaises, Op. 40; and on the Scherzo No. 3, Op. 39.",
"title": "Life"
},
{
"paragraph_id": 32,
"text": "Although this period had been productive, the bad weather had such a detrimental effect on Chopin's health that Sand determined to leave the island. To avoid further customs duties, Sand sold the piano to a local French couple, the Canuts. The group travelled first to Barcelona, then to Marseilles, where they stayed for a few months while Chopin convalesced. While in Marseilles, Chopin made a rare appearance at the organ during a requiem mass for the tenor Adolphe Nourrit on 24 April 1839, playing a transcription of Franz Schubert's lied Die Sterne (D. 939). George Sand gives a description of Chopin's playing in a letter of 28 April 1839:",
"title": "Life"
},
{
"paragraph_id": 33,
"text": "Chopin sacrificed himself by playing the organ at the Elevation – and what an organ! Anyhow our boy made the best of it by using the less discordant stops, and he played Schubert's Die Sterne, not with a passionate and glowing tone that Nourrit used, but with a plaintive sound as soft as an echo from another world. Two or three at most among those present felt its meaning and had tears in their eyes.",
"title": "Life"
},
{
"paragraph_id": 34,
"text": "In May 1839 they headed to Sand's estate at Nohant for the summer, where they spent most of the following summers until 1846. In autumn they returned to Paris, where Chopin's apartment at 5 rue Tronchet was close to Sand's rented accommodation on the rue Pigalle. He frequently visited Sand in the evenings, but both retained some independence. (In 1842 he and Sand moved to the Square d'Orléans, living in adjacent buildings.)",
"title": "Life"
},
{
"paragraph_id": 35,
"text": "On 26 July 1840 Chopin and Sand were present at the dress rehearsal of Berlioz's Grande symphonie funèbre et triomphale, composed to commemorate the tenth anniversary of the July Revolution. Chopin was reportedly unimpressed with the composition.",
"title": "Life"
},
{
"paragraph_id": 36,
"text": "During the summers at Nohant, particularly in the years 1839–43 (except 1840), Chopin found quiet, productive days during which he composed many works, including his Polonaise in A-flat major, Op. 53. Sand compellingly describes Chopin's creative process: an inspiration, its painstaking elaboration – sometimes amid tormented weeping and complaining, with hundreds of changes in concept – only to return finally to the initial idea.",
"title": "Life"
},
{
"paragraph_id": 37,
"text": "Among the visitors to Nohant were Delacroix and the mezzo-soprano Pauline Viardot, whom Chopin had advised on piano technique and composition. Delacroix gives an account of staying at Nohant in a letter of 7 June 1842:",
"title": "Life"
},
{
"paragraph_id": 38,
"text": "The hosts could not be more pleasant in entertaining me. When we are not all together at dinner, lunch, playing billiards, or walking, each of us stays in his room, reading or lounging around on a couch. Sometimes, through the window which opens on the garden, a gust of music wafts up from Chopin at work. All this mingles with the songs of nightingales and the fragrance of roses.",
"title": "Life"
},
{
"paragraph_id": 39,
"text": "From 1842 onwards, Chopin showed signs of serious illness. After a solo recital in Paris on 21 February 1842, he wrote to Grzymała: \"I have to lie in bed all day long, my mouth and tonsils are aching so much.\" He was forced by illness to decline a written invitation from Alkan to participate in a repeat performance of the Beethoven 7th Symphony arrangement at Érard's on 1 March 1843. Late in 1844, Charles Hallé visited Chopin and found him \"hardly able to move, bent like a half-opened penknife and evidently in great pain\", although his spirits returned when he started to play the piano for his visitor. Chopin's health continued to deteriorate, particularly from this time onwards. Modern research suggests that apart from any other illnesses, he may also have suffered from temporal lobe epilepsy.",
"title": "Life"
},
{
"paragraph_id": 40,
"text": "Chopin's output as a composer throughout this period declined in quantity year by year. Whereas in 1841 he had written a dozen works, only six were written in 1842 and six shorter pieces in 1843. In 1844 he wrote only the Op. 58 sonata. 1845 saw the completion of three mazurkas (Op. 59). Although these works were more refined than many of his earlier compositions, Zamoyski concludes that \"his powers of concentration were failing and his inspiration was beset by anguish, both emotional and intellectual\". Chopin's relations with Sand were soured in 1846 by problems involving her daughter Solange and Solange's fiancé, the young fortune-hunting sculptor Auguste Clésinger. The composer frequently took Solange's side in quarrels with her mother; he also faced jealousy from Sand's son Maurice. Moreover, Chopin was indifferent to Sand's radical political pursuits, including her enthusiasm for the February Revolution of 1848.",
"title": "Life"
},
{
"paragraph_id": 41,
"text": "As the composer's illness progressed, Sand had become less of a lover and more of a nurse to Chopin, whom she called her \"third child\". In letters to third parties she vented her impatience, referring to him as a \"child\", a \"poor angel\", a \"sufferer\", and a \"beloved little corpse\". In 1847 Sand published her novel Lucrezia Floriani, whose main characters – a rich actress and a prince in weak health – could be interpreted as Sand and Chopin. In Chopin's presence, Sand read the manuscript aloud to Delacroix, who was both shocked and mystified by its implications, writing that \"Madame Sand was perfectly at ease and Chopin could hardly stop making admiring comments\". That year their relationship ended following an angry correspondence which, in Sand's words, made \"a strange conclusion to nine years of exclusive friendship\". Grzymała, who had followed their romance from the beginning, commented, \"If [Chopin] had not had the misfortune of meeting G. S. [George Sand], who poisoned his whole being, he would have lived to be Cherubini's age.\" Chopin would die two years later at thirty-nine; the composer Luigi Cherubini had died in Paris in 1842 at the age of 81.",
"title": "Life"
},
{
"paragraph_id": 42,
"text": "Chopin's public popularity as a virtuoso began to wane, as did the number of his pupils, and this, together with the political strife and instability of the time, caused him to struggle financially. In February 1848, with the cellist Auguste Franchomme, he gave his last Paris concert, which included three movements of the Cello Sonata Op. 65.",
"title": "Life"
},
{
"paragraph_id": 43,
"text": "In April, during the 1848 Revolution in Paris, he left for London, where he performed at several concerts and numerous receptions in great houses. This tour was suggested to him by his Scottish pupil Jane Stirling and her elder sister. Stirling also made all the logistical arrangements and provided much of the necessary funding.",
"title": "Life"
},
{
"paragraph_id": 44,
"text": "In London, Chopin took lodgings at Dover Street, where the firm of Broadwood provided him with a grand piano. At his first engagement, on 15 May at Stafford House, the audience included Queen Victoria and Prince Albert. The Prince, who was himself a talented musician, moved close to the keyboard to view Chopin's technique. Broadwood also arranged concerts for him; among those attending were the author William Makepeace Thackeray and the singer Jenny Lind. Chopin was also sought after for piano lessons, for which he charged the high fee of one guinea per hour, and for private recitals for which the fee was 20 guineas. At a concert on 7 July he shared the platform with Viardot, who sang arrangements of some of his mazurkas to Spanish texts. A few days later, he performed for Thomas Carlyle and his wife Jane at their home in Chelsea. On 28 August he played at a concert in Manchester's Gentlemen's Concert Hall, sharing the stage with Marietta Alboni and Lorenzo Salvi.",
"title": "Life"
},
{
"paragraph_id": 45,
"text": "In late summer he was invited by Jane Stirling to visit Scotland, where he stayed at Calder House near Edinburgh and at Johnstone Castle in Renfrewshire, both owned by members of Stirling's family. She clearly had a notion of going beyond mere friendship, and Chopin was obliged to make it clear to her that this could not be so. He wrote at this time to Grzymała: \"My Scottish ladies are kind, but such bores\", and responding to a rumour about his involvement, answered that he was \"closer to the grave than the nuptial bed\". He gave a public concert in Glasgow on 27 September, and another in Edinburgh at the Hopetoun Rooms on Queen Street (now Erskine House) on 4 October. In late October 1848, while staying at 10 Warriston Crescent in Edinburgh with the Polish physician Adam Łyszczyński, he wrote out his last will and testament – \"a kind of disposition to be made of my stuff in the future, if I should drop dead somewhere\", he wrote to Grzymała.",
"title": "Life"
},
{
"paragraph_id": 46,
"text": "Chopin made his last public appearance on a concert platform at London's Guildhall on 16 November 1848, when, in a final patriotic gesture, he played for the benefit of Polish refugees. This gesture proved to be a mistake, as most of the participants were more interested in the dancing and refreshments than in Chopin's piano artistry, which drained him. By this time he was very seriously ill, weighing under 99 pounds (less than 45 kg), and his doctors were aware that his sickness was at a terminal stage.",
"title": "Life"
},
{
"paragraph_id": 47,
"text": "At the end of November Chopin returned to Paris. He passed the winter in unremitting illness, but gave occasional lessons and was visited by friends, including Delacroix and Franchomme. Occasionally he played, or accompanied the singing of Delfina Potocka, for his friends. During the summer of 1849, his friends found him an apartment in Chaillot, out of the centre of the city, for which the rent was secretly subsidised by an admirer, Princess Yekaterina Dmitrievna Soutzos-Obreskova. He was visited here by Jenny Lind in June 1849.",
"title": "Life"
},
{
"paragraph_id": 48,
"text": "With his health further deteriorating, Chopin desired to have a family member with him. In June 1849 his sister Ludwika came to Paris with her husband and daughter, and in September, supported by a loan from Jane Stirling, he took an apartment at the Hôtel Baudard de Saint-James on the Place Vendôme. After 15 October, when his condition took a marked turn for the worse, only a handful of his closest friends remained with him. Viardot remarked sardonically, though, that \"all the grand Parisian ladies considered it de rigueur to faint in his room\".",
"title": "Life"
},
{
"paragraph_id": 49,
"text": "Some of his friends provided music at his request; among them, Potocka sang and Franchomme played the cello. Chopin bequeathed his unfinished notes on a piano tuition method, Projet de méthode, to Alkan for completion. On 17 October, after midnight, the physician leaned over him and asked whether he was suffering greatly. \"No longer\", he replied. He died a few minutes before two a.m. He was 39. Those present at the deathbed appear to have included his sister Ludwika, Fr. Aleksander Jełowicki, Princess Marcelina Czartoryska, Sand's daughter Solange, and his close friend Thomas Albrecht. Later that morning, Solange's husband Clésinger made Chopin's death mask and a cast of his left hand.",
"title": "Life"
},
{
"paragraph_id": 50,
"text": "The funeral, held at the Church of the Madeleine in Paris, was delayed almost two weeks until 30 October. Entrance was restricted to ticket holders, as many people were expected to attend. Over 3,000 people arrived without invitations, from as far as London, Berlin and Vienna, and were excluded.",
"title": "Life"
},
{
"paragraph_id": 51,
"text": "Mozart's Requiem was sung at the funeral; the soloists were the soprano Jeanne-Anaïs Castellan, the mezzo-soprano Pauline Viardot, the tenor Alexis Dupont, and the bass Luigi Lablache; Chopin's Preludes No. 4 in E minor and No. 6 in B minor were also played. The organist was Alfred Lefébure-Wély. The funeral procession to Père Lachaise Cemetery, which included Chopin's sister Ludwika, was led by the aged Prince Adam Czartoryski. The pallbearers included Delacroix, Franchomme, and Camille Pleyel. At the graveside, the Funeral March from Chopin's Piano Sonata No. 2 was played, in Reber's instrumentation.",
"title": "Life"
},
{
"paragraph_id": 52,
"text": "Chopin's tombstone, featuring the muse of music, Euterpe, weeping over a broken lyre, was designed and sculpted by Clésinger and installed on the anniversary of his death in 1850. The expenses of the monument, amounting to 4,500 francs, were covered by Jane Stirling, who also paid for the return of the composer's sister Ludwika to Warsaw. As requested by Chopin, Ludwika took his heart (which had been removed by his doctor Jean Cruveilhier and preserved in alcohol in a vase) back to Poland in 1850. She also took a collection of 200 letters from Sand to Chopin; after 1851 these were returned to Sand, who destroyed them.",
"title": "Life"
},
{
"paragraph_id": 53,
"text": "Chopin's disease and the cause of his death have been topics of debate. His death certificate gave the cause as tuberculosis, and his physician, Cruveilhier, was then the leading French authority on this disease. Other possibilities advanced have included cystic fibrosis, cirrhosis, and alpha 1-antitrypsin deficiency. A visual examination of Chopin's preserved heart (the jar was not opened), conducted in 2014 and first published in the American Journal of Medicine in 2017, suggested that the likely cause of his death was a rare case of pericarditis caused by complications of chronic tuberculosis.",
"title": "Life"
},
{
"paragraph_id": 54,
"text": "Over 230 works of Chopin survive; some compositions from early childhood have been lost. All his known works involve the piano, and only a few range beyond solo piano music, as either piano concertos, songs or chamber music.",
"title": "Music"
},
{
"paragraph_id": 55,
"text": "Chopin was educated in the tradition of Beethoven, Haydn, Mozart, and Clementi; he used Clementi's piano method with his students. He was also influenced by Hummel's development of virtuoso, yet Mozartian, piano technique. He cited Bach and Mozart as the two most important composers in shaping his musical outlook. Chopin's early works are in the style of the \"brilliant\" keyboard pieces of his era as exemplified by the works of Ignaz Moscheles, Friedrich Kalkbrenner, and others. Less direct in the earlier period are the influences of Polish folk music and of Italian opera. Much of what became his typical style of ornamentation (for example, his fioriture) is taken from singing. His melodic lines were increasingly reminiscent of the modes and features of the music of his native country, such as drones.",
"title": "Music"
},
{
"paragraph_id": 56,
"text": "Chopin took the new salon genre of the nocturne, invented by the Irish composer John Field, to a deeper level of sophistication. He was the first to write ballades and scherzi as individual concert pieces. He essentially established a new genre with his own set of free-standing preludes (Op. 28, published 1839). He exploited the poetic potential of the concept of the concert étude, already being developed in the 1820s and 1830s by Liszt, Clementi, and Moscheles, in his two sets of studies (Op. 10 published in 1833, Op. 25 in 1837).",
"title": "Music"
},
{
"paragraph_id": 57,
"text": "Chopin also endowed popular dance forms with a greater range of melody and expression. Chopin's mazurkas, while originating in the traditional Polish dance (the mazurek), differed from the traditional variety in that they were written for the concert hall rather than the dance hall; as J. Barrie Jones puts it, \"it was Chopin who put the mazurka on the European musical map\". The series of seven polonaises published in his lifetime (another nine were published posthumously), beginning with the Op. 26 pair (published 1836), set a new standard for music in the form. His waltzes were also written specifically for the salon recital rather than the ballroom and are frequently at rather faster tempos than their dance-floor equivalents.",
"title": "Music"
},
{
"paragraph_id": 58,
"text": "Some of Chopin's well-known pieces have acquired descriptive titles, such as the Revolutionary Étude (Op. 10, No. 12), and the Minute Waltz (Op. 64, No. 1). However, except for his Funeral March, the composer never named an instrumental work beyond genre and number, leaving all potential extramusical associations to the listener; the names by which many of his pieces are known were invented by others. There is no evidence to suggest that the Revolutionary Étude was written with the failed Polish uprising against Russia in mind; it merely appeared at that time. The Funeral March, the third movement of his Sonata No. 2 (Op. 35), the one case where he did give a title, was written before the rest of the sonata, but no specific event or death is known to have inspired it.",
"title": "Music"
},
{
"paragraph_id": 59,
"text": "The last opus number that Chopin himself used was 65, allocated to the Cello Sonata in G minor. He expressed a deathbed wish that all his unpublished manuscripts be destroyed. At the request of the composer's mother and sisters, however, his musical executor Julian Fontana selected 23 unpublished piano pieces and grouped them into eight further opus numbers (Opp. 66–73), published in 1855. In 1857, 17 Polish songs that Chopin wrote at various stages of his life were collected and published as Op. 74, though their order within the opus did not reflect the order of composition.",
"title": "Music"
},
{
"paragraph_id": 60,
"text": "Works published since 1857 have received alternative catalogue designations instead of opus numbers. The most up-to-date catalogue is maintained by the Fryderyk Chopin Institute at its Internet Chopin Information Centre. The older Kobylańska Catalogue (usually represented by the initials 'KK'), named for its compiler, the Polish musicologist Krystyna Kobylańska, is still considered an important scholarly reference. The most recent catalogue of posthumously published works is that of the National Edition of the Works of Fryderyk Chopin, represented by the initials 'WN'.",
"title": "Music"
},
{
"paragraph_id": 61,
"text": "Chopin's original publishers included Maurice Schlesinger and Camille Pleyel. His works soon began to appear in popular 19th-century piano anthologies. The first collected edition was by Breitkopf & Härtel (1878–1902). Among modern scholarly editions of Chopin's works are the version under the name of Paderewski, published between 1937 and 1966, and the more recent Polish National Edition, edited by Jan Ekier and published between 1967 and 2010. The latter is recommended to contestants of the Chopin Competition. Both editions contain detailed explanations and discussions regarding choices and sources.",
"title": "Music"
},
{
"paragraph_id": 62,
"text": "Chopin published his music in France, England, and the German states (i.e. he worked with as many as three separate publishers for each piece or set of pieces) due to the copyright laws of the time. Thus there are often three different \"first editions\" of each work. Each edition is different from the others; Chopin edited them separately, and at times he did some revision to the music while editing it. Furthermore, Chopin provided his publishers with varying sources, including autographs, annotated proofsheets, and scribal copies. Only recently have these differences gained greater recognition.",
"title": "Music"
},
{
"paragraph_id": 63,
"text": "Improvisation stands at the centre of Chopin's creative processes. However, this does not imply impulsive rambling: Nicholas Temperley writes that \"improvisation is designed for an audience, and its starting-point is that audience's expectations, which include the current conventions of musical form\". The works for piano and orchestra, including the two concertos, are held by Temperley to be \"merely vehicles for brilliant piano playing ... formally longwinded and extremely conservative\". After the piano concertos (which are both early, dating from 1830), Chopin made no attempts at large-scale multi-movement forms, save for his late sonatas for piano and cello; \"instead he achieved near-perfection in pieces of simple general design but subtle and complex cell-structure\". Rosen suggests that an important aspect of Chopin's individuality is his flexible handling of the four-bar phrase as a structural unit.",
"title": "Music"
},
{
"paragraph_id": 64,
"text": "J. Barrie Jones suggests that \"amongst the works that Chopin intended for concert use, the four ballades and four scherzi stand supreme\", and adds that \"the Barcarolle Op. 60 stands apart as an example of Chopin's rich harmonic palette coupled with an Italianate warmth of melody\". Temperley opines that these works, which contain \"immense variety of mood, thematic material and structural detail\", are based on an extended \"departure and return\" form; \"the more the middle section is extended, and the further it departs in key, mood and theme, from the opening idea, the more important and dramatic is the reprise when it at last comes\".",
"title": "Music"
},
{
"paragraph_id": 65,
"text": "Chopin's mazurkas and waltzes are all in straightforward ternary or episodic form, sometimes with a coda. The mazurkas often show more folk features than many of his other works, sometimes including modal scales and harmonies and the use of drone basses. However, some also show unusual sophistication, for example, Op. 63 No. 3, which includes a canon at one beat's distance, a great rarity in music.",
"title": "Music"
},
{
"paragraph_id": 66,
"text": "Chopin's polonaises show a marked advance on those of his Polish predecessors in the form (who included his teachers Żywny and Elsner). As with the traditional polonaise, Chopin's works are in triple time and typically display a martial rhythm in their melodies, accompaniments, and cadences. Unlike most of their precursors, they also require a formidable playing technique.",
"title": "Music"
},
{
"paragraph_id": 67,
"text": "His nocturnes are more structured, and of greater emotional depth, than those of Field, whom Chopin met in 1833. Many of the Chopin nocturnes have middle sections marked by agitated expression (and often making very difficult demands on the performer), which heightens their dramatic character.",
"title": "Music"
},
{
"paragraph_id": 68,
"text": "Chopin's études are largely in straightforward ternary form. He used them to teach his own technique of piano playing – for instance playing double thirds (Op. 25, No. 6), playing in octaves (Op. 25, No. 10), and playing repeated notes (Op. 10, No. 7).",
"title": "Music"
},
{
"paragraph_id": 69,
"text": "The preludes, many of which are very brief, were described by Schumann as \"the beginnings of studies\". Inspired by J. S. Bach's The Well-Tempered Clavier, Chopin's preludes move up the circle of fifths (rather than Bach's chromatic scale sequence) to create a prelude in each major and minor tonality. The preludes were perhaps not intended to be played as a group, and may even have been used by him and later pianists as generic preludes to others of his pieces, or even to music by other composers. This is suggested by Kenneth Hamilton, who has noted a 1922 recording by Ferruccio Busoni in which the Prelude Op. 28 No. 7 is followed by the Étude Op. 10 No. 5.",
"title": "Music"
},
{
"paragraph_id": 70,
"text": "The two mature Chopin piano sonatas (No. 2, Op. 35, written in 1839 and No. 3, Op. 58, written in 1844) are in four movements. In Op. 35, Chopin combined within a formal large musical structure many elements of his virtuosic piano technique – \"a kind of dialogue between the public pianism of the brilliant style and the German sonata principle\". This sonata has been considered as showing the influences of both Bach and Beethoven. The Prelude from Bach's Suite No. 6 in D major for cello (BWV 1012) is quoted; and there are references to two sonatas of Beethoven: the Sonata Opus 111, and the Sonata Opus 26, which, like Chopin's Op. 35, has a funeral march as its slow movement. The last movement of Chopin's Op. 35, a brief (75-bar) perpetuum mobile in which the hands play in unmodified octave unison throughout, was found shocking and unmusical by contemporaries, including Schumann. The Op. 58 sonata is closer to the German tradition, including many passages of complex counterpoint, \"worthy of Brahms\" according to Samson.",
"title": "Music"
},
{
"paragraph_id": 71,
"text": "Chopin's harmonic innovations may have arisen partly from his keyboard improvisation technique. In his works, Temperley says, \"novel harmonic effects often result from the combination of ordinary appoggiaturas or passing notes with melodic figures of accompaniment\", and cadences are delayed by the use of chords outside the home key (neapolitan sixths and diminished sevenths) or by sudden shifts to remote keys. Chord progressions sometimes anticipate the shifting tonality of later composers such as Claude Debussy, as does Chopin's use of modal harmony.",
"title": "Music"
},
{
"paragraph_id": 72,
"text": "In 1841 Léon Escudier wrote of a recital given by Chopin that year, \"One may say that Chopin is the creator of a school of piano and a school of composition. In truth, nothing equals the lightness, the sweetness with which the composer preludes on the piano; moreover nothing may be compared to his works full of originality, distinction and grace.\" Chopin refused to conform to a standard method of playing and believed that there was no set technique for playing well. His style was based extensively on his use of a very independent finger technique. In his Projet de méthode he wrote: \"Everything is a matter of knowing good fingering ... we need no less to use the rest of the hand, the wrist, the forearm and the upper arm.\" He further stated: \"One needs only to study a certain position of the hand in relation to the keys to obtain with ease the most beautiful quality of sound, to know how to play short notes and long notes, and [to attain] unlimited dexterity.\" The consequences of this approach to technique in Chopin's music include the frequent use of the entire range of the keyboard, passages in double octaves and other chord groupings, swiftly repeated notes, the use of grace notes, and the use of contrasting rhythms (four against three, for example) between the hands.",
"title": "Music"
},
{
"paragraph_id": 73,
"text": "Jonathan Bellman writes that modern concert performance style – set in the \"conservatory\" tradition of late 19th- and 20th-century music schools, and suitable for large auditoria or recordings – militates against what is known of Chopin's more intimate performance technique. The composer himself said to a pupil that \"concerts are never real music, you have to give up the idea of hearing in them all the most beautiful things of art\". Contemporary accounts indicate that in performance, Chopin avoided rigid procedures sometimes incorrectly attributed to him, such as \"always crescendo to a high note\", but that he was concerned with expressive phrasing, rhythmic consistency and sensitive colouring. Berlioz wrote in 1853 that Chopin \"has created a kind of chromatic embroidery ... whose effect is so strange and piquant as to be impossible to describe ... virtually nobody but Chopin himself can play this music and give it this unusual turn\". Hiller wrote that \"What in the hands of others was elegant embellishment, in his hands became a colourful wreath of flowers.\"",
"title": "Music"
},
{
"paragraph_id": 74,
"text": "Chopin's music is frequently played with rubato, \"the practice in performance of disregarding strict time, 'robbing' some note-values for expressive effect\". There are differing opinions as to how much, and what type, of rubato is appropriate for his works. Charles Rosen comments that \"most of the written-out indications of rubato in Chopin are to be found in his mazurkas ... It is probable that Chopin used the older form of rubato so important to Mozart ... [where] the melody note in the right hand is delayed until after the note in the bass ... An allied form of this rubato is the arpeggiation of the chords thereby delaying the melody note; according to Chopin's pupil Karol Mikuli, Chopin was firmly opposed to this practice.\"",
"title": "Music"
},
{
"paragraph_id": 75,
"text": "Chopin's pupil Friederike Müller [de] wrote:",
"title": "Music"
},
{
"paragraph_id": 76,
"text": "[His] playing was always noble and beautiful; his tones sang, whether in full forte or softest piano. He took infinite pains to teach his pupils this legato, cantabile style of playing. His most severe criticism was 'He – or she – does not know how to join two notes together.' He also demanded the strictest adherence to rhythm. He hated all lingering and dragging, misplaced rubatos, as well as exaggerated ritardandos [...] and it is precisely in this respect that people make such terrible errors in playing his works.",
"title": "Music"
},
{
"paragraph_id": 77,
"text": "When living in Warsaw, Chopin composed and played on an instrument built by the piano-maker Fryderyk Buchholtz. Later in Paris Chopin purchased a piano from Pleyel. He rated Pleyel's pianos as \"non plus ultra\" (\"nothing better\"). Franz Liszt befriended Chopin in Paris and described the sound of Chopin's Pleyel as being \"the marriage of crystal and water\". While in London in 1848, Chopin mentioned his pianos in his letters: \"I have a large drawing-room with three pianos, a Pleyel, a Broadwood and an Erard.\"",
"title": "Music"
},
{
"paragraph_id": 78,
"text": "With his mazurkas and polonaises, Chopin has been credited with introducing to music a new sense of nationalism. Schumann, in his 1836 review of the piano concertos, highlighted the composer's strong feelings for his native Poland, writing:",
"title": "Music"
},
{
"paragraph_id": 79,
"text": "Now that the Poles are in deep mourning [after the failure of the November Uprising of 1830], their appeal to us artists is even stronger ... If the mighty autocrat in the north [i.e. Nicholas I of Russia] could know that in Chopin's works, in the simple strains of his mazurkas, there lurks a dangerous enemy, he would place a ban on his music. Chopin's works are cannon buried in flowers!",
"title": "Music"
},
{
"paragraph_id": 80,
"text": "The biography of Chopin published in 1863 under the name of Franz Liszt (but probably written by Carolyne zu Sayn-Wittgenstein) states that Chopin \"must be ranked first among the first musicians ... individualizing in themselves the poetic sense of an entire nation\".",
"title": "Music"
},
{
"paragraph_id": 81,
"text": "Some modern commentators have argued against exaggerating Chopin's primacy as a \"nationalist\" or \"patriotic\" composer. George Golos refers to earlier \"nationalist\" composers in Central Europe, including Poland's Michał Kleofas Ogiński and Franciszek Lessel, who utilised polonaise and mazurka forms. Barbara Milewski suggests that Chopin's experience of Polish music came more from \"urbanised\" Warsaw versions than from folk music, and that attempts by Jachimecki and others to demonstrate genuine folk music in his works are without basis. Richard Taruskin impugns Schumann's attitude toward Chopin's works as patronising, and comments that Chopin \"felt his Polish patriotism deeply and sincerely\" but consciously modelled his works on the tradition of Bach, Beethoven, Schubert, and Field.",
"title": "Music"
},
{
"paragraph_id": 82,
"text": "A reconciliation of these views is suggested by William Atwood:",
"title": "Music"
},
{
"paragraph_id": 83,
"text": "Undoubtedly [Chopin's] use of traditional musical forms like the polonaise and mazurka roused nationalistic sentiments and a sense of cohesiveness amongst those Poles scattered across Europe and the New World ... While some sought solace in [them], others found them a source of strength in their continuing struggle for freedom. Although Chopin's music undoubtedly came to him intuitively rather than through any conscious patriotic design, it served all the same to symbolize the will of the Polish people ...",
"title": "Music"
},
{
"paragraph_id": 84,
"text": "Jones comments that \"Chopin's unique position as a composer, despite the fact that virtually everything he wrote was for the piano, has rarely been questioned.\" He also notes that Chopin was fortunate to arrive in Paris in 1831 – \"the artistic environment, the publishers who were willing to print his music, the wealthy and aristocratic who paid what Chopin asked for their lessons\" – and these factors, as well as his musical genius, also fuelled his contemporary and later reputation. While his illness and his love affairs conform to some of the stereotypes of romanticism, the rarity of his public recitals (as opposed to performances at fashionable Paris soirées) led Arthur Hutchings to suggest that \"his lack of Byronic flamboyance [and] his aristocratic reclusiveness make him exceptional\" among his romantic contemporaries such as Liszt and Henri Herz.",
"title": "Music"
},
{
"paragraph_id": 85,
"text": "Chopin's qualities as a pianist and composer were recognised by many of his fellow musicians. Schumann named a piece for him in his suite Carnaval, and Chopin later dedicated his Ballade No. 2 in F major to Schumann. Elements of Chopin's music can be found in many of Liszt's later works. Liszt later transcribed for piano six of Chopin's Polish songs. A less fraught friendship was with Alkan, with whom he discussed elements of folk music, and who was deeply affected by Chopin's death.",
"title": "Music"
},
{
"paragraph_id": 86,
"text": "In Paris, Chopin had a number of pupils, including Friedericke Müller, who left memoirs of his teaching and the prodigy Carl Filtsch, to whom both Chopin and Sand became dedicated, Chopin giving him three lessons a week; Filtsch was the only pupil to whom Chopin gave lessons in composition, and, exceptionally, he on several occasions shared a concert platform with him. Two of Chopin's long-standing pupils, Karol Mikuli and Georges Mathias, were themselves piano teachers and passed on details of his playing to their students, some of whom (such as Raoul Koczalski) were to make recordings of his music. Other pianists and composers influenced by Chopin's style include Louis Moreau Gottschalk, Édouard Wolff, and Pierre Zimmermann. Debussy dedicated his own 1915 piano Études to the memory of Chopin; he frequently played Chopin's music during his studies at the Paris Conservatoire, and undertook the editing of Chopin's piano music for the publisher Jacques Durand.",
"title": "Music"
},
{
"paragraph_id": 87,
"text": "Polish composers of the following generation included virtuosi such as Moritz Moszkowski; but, in the opinion of J. Barrie Jones, his \"one worthy successor\" among his compatriots was Karol Szymanowski. Edvard Grieg, Antonín Dvořák, Isaac Albéniz, Pyotr Ilyich Tchaikovsky, and Sergei Rachmaninoff, among others, are regarded by critics as having been influenced by Chopin's use of national modes and idioms. Alexander Scriabin was devoted to the music of Chopin, and his early published works include nineteen mazurkas as well as numerous études and preludes; his teacher Nikolai Zverev drilled him in Chopin's works to improve his virtuosity as a performer. In the 20th century, composers who paid homage to (or in some cases parodied) the music of Chopin included George Crumb, Leopold Godowsky, Bohuslav Martinů, Darius Milhaud, Igor Stravinsky, and Heitor Villa-Lobos.",
"title": "Music"
},
{
"paragraph_id": 88,
"text": "Chopin's music was used in the 1909 ballet Chopiniana, choreographed by Michel Fokine and orchestrated by Alexander Glazunov. Sergei Diaghilev commissioned additional orchestrations – from Stravinsky, Anatoly Lyadov, Sergei Taneyev, and Nikolai Tcherepnin – for later productions, which used the title Les Sylphides. Other noted composers have created orchestrations for the ballet, including Benjamin Britten, Roy Douglas, Alexander Gretchaninov, Gordon Jacob, and Maurice Ravel, whose score is lost.",
"title": "Music"
},
{
"paragraph_id": 89,
"text": "Musicologist Erinn Knyt writes: \"In the nineteenth century Chopin and his music were commonly viewed as effeminate, androgynous, childish, sickly, and 'ethnically other.'\" Music historian Jeffrey Kallberg says that in Chopin's time, \"listeners to the genre of the piano nocturne often couched their reactions in feminine imagery\", and he cites many examples of such reactions to Chopin's nocturnes. One reason for this may be \"demographic\" – there were more female than male piano players, and playing such \"romantic\" pieces was seen by male critics as a female domestic pastime. Such genderization was not commonly applied to other genres among Chopin's works, such as the scherzo or the polonaise. The cultural historian Edward Said has cited the demonstrations by pianist and writer Charles Rosen, in the latter's book The Romantic Generation, of Chopin's skills in \"planning, polyphony, and sheer harmonic creativity\", as effectively overthrowing any legend of Chopin \"as a swooning, 'inspired', small-scale salon composer\".",
"title": "Music"
},
{
"paragraph_id": 90,
"text": "Chopin's music remains very popular and is regularly performed, recorded and broadcast worldwide. The world's oldest monographic music competition, the International Chopin Piano Competition, founded in 1927, is held every five years in Warsaw. The Fryderyk Chopin Institute of Poland lists on its website over eighty societies worldwide devoted to the composer and his music. The Institute site also lists over 1500 performances of Chopin works on YouTube as of March 2021.",
"title": "Music"
},
{
"paragraph_id": 91,
"text": "The British Library notes that \"Chopin's works have been recorded by all the great pianists of the recording era.\" The earliest recording was an 1895 performance by Paul Pabst of the Nocturne in E major, Op. 62, No. 2. The British Library site makes available a number of historic recordings, including some by Alfred Cortot, Ignaz Friedman, Vladimir Horowitz, Benno Moiseiwitsch, Ignacy Jan Paderewski, Arthur Rubinstein, Xaver Scharwenka, Josef Hofmann, Vladimir de Pachmann, Moriz Rosenthal and many others. A select discography of recordings of Chopin works by pianists representing the various pedagogic traditions stemming from Chopin is given by James Methuen-Campbell in his work tracing the lineage and character of those traditions.",
"title": "Recordings"
},
{
"paragraph_id": 92,
"text": "Numerous recordings of Chopin's works are available. On the occasion of the composer's bicentenary, the critics of The New York Times recommended performances by the following contemporary pianists (among many others): Yundi Li, Seong-Jin Cho, Martha Argerich, Vladimir Ashkenazy, Emanuel Ax, Evgeny Kissin, Ivan Moravec, Murray Perahia, Maurizio Pollini, and Krystian Zimerman. The Warsaw Chopin Society organises the Grand prix du disque de F. Chopin for notable Chopin recordings, held every five years.",
"title": "Recordings"
},
{
"paragraph_id": 93,
"text": "Chopin has figured extensively in Polish literature, both in serious critical studies of his life and music and in fictional treatments. The earliest manifestation was probably an 1830 sonnet on Chopin by Leon Ulrich. French writers on Chopin (apart from Sand) have included Marcel Proust and André Gide, and he has also featured in works of Gottfried Benn and Boris Pasternak. There are numerous biographies of Chopin in English (see bibliography for some of these).",
"title": "In literature, stage, film and television"
},
{
"paragraph_id": 94,
"text": "Possibly the first venture into fictional treatments of Chopin's life was a fanciful operatic version of some of its events: Chopin. First produced in Milan in 1901, the music – based on Chopin's own – was assembled by Giacomo Orefice, with a libretto by Angiolo Orvieto [it].",
"title": "In literature, stage, film and television"
},
{
"paragraph_id": 95,
"text": "Playwright, pianist, and actor Hershey Felder wrote and performs Monsieur Chopin, a one-man, one-act musical play.",
"title": "In literature, stage, film and television"
},
{
"paragraph_id": 96,
"text": "Chopin's life and romantic tribulations have been fictionalised in numerous films. As early as 1919, Chopin's relationships with three women – his youth sweetheart Mariolka, then Polish singer Sonja Radkowska, and later George Sand – were portrayed in the German silent film Nocturno der Liebe (1919), with Chopin's music serving as a backdrop. The 1945 biographical film A Song to Remember earned Cornel Wilde an Academy Award nomination as Best Actor for his portrayal of the composer. Other film treatments have included La valse de l'adieu (1928) by Henry Roussel, with Pierre Blanchar as Chopin; Impromptu (1991), starring Hugh Grant as Chopin; La note bleue (1991); and Chopin: Desire for Love (2002).",
"title": "In literature, stage, film and television"
},
{
"paragraph_id": 97,
"text": "Chopin's life was covered in a 1999 BBC Omnibus documentary by András Schiff and Mischa Scorer, in a 2010 documentary realised by Angelo Bozzolini and Roberto Prosseda for Italian television, and in a BBC Four documentary Chopin – The Women Behind The Music (2010).",
"title": "In literature, stage, film and television"
},
{
"paragraph_id": 98,
"text": "Music scores",
"title": "External links"
}
]
| Frédéric François Chopin was a Polish composer and virtuoso pianist of the Romantic period, who wrote primarily for solo piano. He has maintained worldwide renown as a leading musician of his era, one whose "poetic genius was based on a professional technique that was without equal in his generation". Chopin was born in Żelazowa Wola and grew up in Warsaw, which in 1815 became part of Congress Poland. A child prodigy, he completed his musical education and composed his earlier works in Warsaw before leaving Poland at the age of 20, less than a month before the outbreak of the November 1830 Uprising. At 21, he settled in Paris. Thereafter he gave only 30 public performances, preferring the more intimate atmosphere of the salon. He supported himself by selling his compositions and by giving piano lessons, for which he was in high demand. Chopin formed a friendship with Franz Liszt and was admired by many of his musical contemporaries, including Robert Schumann. After a failed engagement to Maria Wodzińska from 1836 to 1837, he maintained an often troubled relationship with the French writer Aurore Dupin. A brief and unhappy visit to Mallorca with Sand in 1838–39 would prove one of his most productive periods of composition. In his final years, he was supported financially by his admirer Jane Stirling. For most of his life, Chopin was in poor health. He died in Paris in 1849 at the age of 39. Chopin's compositions are mostly for solo piano, though he also wrote two piano concertos, some chamber music, and 19 songs set to Polish lyrics. His piano pieces are technically demanding and expanded the limits of the instrument; his own performances were noted for their nuance and sensitivity. Chopin's major piano works include mazurkas, waltzes, nocturnes, polonaises, the instrumental ballade, études, impromptus, scherzi, preludes, and sonatas, some published only posthumously. Among the influences on his style of composition were Polish folk music, the classical tradition of J. S. Bach, Mozart, and Schubert, and the atmosphere of the Paris salons, of which he was a frequent guest. His innovations in style, harmony, and musical form, and his association of music with nationalism, were influential throughout and after the late Romantic period. Chopin's music, his status as one of music's earliest celebrities, his indirect association with political insurrection, his high-profile love life, and his early death have made him a leading symbol of the Romantic era. His works remain popular, and he has been the subject of numerous films and biographies of varying historical fidelity. Among his many memorials is the Fryderyk Chopin Institute, which was created by the Parliament of Poland to research and promote his life and works. It hosts the International Chopin Piano Competition, a prestigious competition devoted entirely to his works. | 2001-08-01T09:18:49Z | 2023-12-29T03:51:53Z | [
"Template:Cite journal",
"Template:Div col end",
"Template:Refn",
"Template:Sfn",
"Template:Circa",
"Template:Ill",
"Template:Romantic music",
"Template:Main",
"Template:Webarchive",
"Template:Harvc",
"Template:Internet Archive author",
"Template:Reflist",
"Template:Cite web",
"Template:Cite book",
"Template:Harvnb",
"Template:Redirect",
"Template:EngvarB",
"Template:Infobox classical composer",
"Template:See also",
"Template:Cite news",
"Template:Grove Music subscription",
"Template:Short description",
"Template:IMSLP",
"Template:Wikisourcelang",
"Template:Chopin",
"Template:Listen",
"Template:As of",
"Template:Portal bar",
"Template:Authority control",
"Template:Featured article",
"Template:Convert",
"Template:Quote",
"Template:Further",
"Template:Use dmy dates",
"Template:Snd",
"Template:BBC composer page",
"Template:Romanticism",
"Template:Spaced ndash",
"Template:Lang",
"Template:Multiple image",
"Template:Cite Grove",
"Template:Musical nationalism",
"Template:'\"",
"Template:Div col",
"Template:Cite encyclopedia",
"Template:Sister project links"
]
| https://en.wikipedia.org/wiki/Fr%C3%A9d%C3%A9ric_Chopin |
10,825 | Free Democratic Party (Germany) | The Free Democratic Party (German: Freie Demokratische Partei; FDP, German pronunciation: [ɛfdeːˈpeː]) is a liberal political party in Germany.
The FDP was founded in 1948 by members of former liberal political parties which existed in Germany before World War II, namely the German Democratic Party and the German People's Party. For most of the second half of the 20th century, the FDP held the balance of power in the Bundestag. It has been a junior coalition partner to both the CDU/CSU (1949–1956, 1961–1966, 1982–1998 and 2009–2013) and Social Democratic Party (SPD) (1969–1982, 2021–present). In the 2013 federal election, the FDP failed to win any directly elected seats in the Bundestag and came up short of the 5 percent threshold to qualify for list representation, being left without representation in the Bundestag for the first time in its history. In the 2017 federal election, the FDP regained its representation in the Bundestag, receiving 10.6% of the vote. After the 2021 federal election the FDP became part of governing Scholz cabinet in coalition with the Social Democratic Party and the Greens.
Since the 1980s, the party, consistently with its ordoliberal tradition, has pushed economic liberalism and has aligned itself closely to the promotion of free markets and privatization, and is aligned to the centre or centre-right of the political spectrum. The FDP is a member of the Liberal International, the Alliance of Liberals and Democrats for Europe and Renew Europe.
The history of liberal parties in Germany dates back to 1861, when the German Progress Party (DFP) was founded, being the first political party in the modern sense in Germany. From the establishment of the National Liberal Party in 1867 until the demise of the Weimar Republic in 1933, the liberal-democratic camp was divided into a "national-liberal" and a "left-liberal" line of tradition. After 1918 the national-liberal strain was represented by the German People's Party (DVP), the left-liberal one by the German Democratic Party (DDP, which merged into the German State Party in 1930). Both parties played an important role in government during the Weimar Republic era, but successively lost votes during the rise of the Nazi Party beginning in the late-1920s. After the Nazi seizure of power, both liberal parties agreed to the Enabling Act of 1933 and subsequently dissolved themselves. During the 12 years of Hitler's rule, some former liberals collaborated with the Nazis (e.g. economy minister Hjalmar Schacht), while others resisted actively against Nazism, with some Liberal leaning members and former members of the military joining up with Henning von Tresckow (e.g. the Solf Circle).
Soon after World War II, the Soviet Union pushed for the creation of licensed "anti-fascist" parties in its occupation zone in East Germany. In July 1945, former DDP politicians Wilhelm Külz, Eugen Schiffer and Waldemar Koch called for the establishment of a pan-German liberal party. Their Liberal-Democratic Party (LDP) was soon licensed by the Soviet Military Administration in Germany, under the condition that the new party joined the pro-Soviet "Democratic Bloc".
In September 1945, citizens in Hamburg—including the anti-Nazi resistance circle "Association Free Hamburg"—established the Party of Free Democrats (PFD) as a bourgeois left-wing party and the first liberal Party in the Western occupation zones. The German Democratic Party was revived in some states of the Western occupation zones (in the Southwestern states of Württemberg-Baden and Württemberg-Hohenzollern under the name of Democratic People's Party).
Many former members of DDP and DVP however agreed to finally overcome the traditional split of German liberalism into a national-liberal and a left-liberal branch, aiming for the creation of a united liberal party. In October 1945 a liberal coalition party was founded in the state of Bremen under the name of Bremen Democratic People's Party. In January 1946, liberal state parties of the British occupation zone merged into the Free Democratic Party of the British Zone (FDP). A similar state party in Hesse, called the Liberal Democratic Party, was licensed by the U.S. military government in January 1946. In the state of Bavaria, a Free Democratic Party was founded in May 1946.
In the first post-war state elections in 1946, liberal parties performed well in Württemberg-Baden (16.8%), Bremen (18.3%), Hamburg (18.2%) and Greater Berlin (still undivided; 9.3%). The LDP was especially strong in the October 1946 state elections of the Soviet zone—the last free parliamentary election in East Germany—obtaining an average of 24.6% (highest in Saxony-Anhalt, 29.9%, and Thuringia, 28.5%), thwarting an absolute majority of the Socialist Unity Party of Germany (SED) that was favoured by the Soviet occupation power. This disappointment to the communists however led to a change of electoral laws in the Soviet zone, cutting the autonomy of non-socialist parties including the LDP and forcing it to join the SED-dominated National Front, making it a dependent "bloc party".
The Democratic Party of Germany (DPD) was established in Rothenburg ob der Tauber on 17 March 1947 as a pan-German party of liberals from all four occupation zones. Its leaders were Theodor Heuss (representing the DVP of Württemberg-Baden in the American zone) and Wilhelm Külz (representing the LDP of the Soviet zone). However, the project failed in January 1948 as a result of disputes over Külz's pro-Soviet direction.
The Free Democratic Party was established on 11–12 December 1948 in Heppenheim, in Hesse, as an association of all 13 liberal state parties in the three Western zones of occupation. The proposed name, Liberal Democratic Party, was rejected by the delegates, who voted 64 to 25 in favour of the name Free Democratic Party (FDP).
The party's first chairman was Theodor Heuss, a member of the Democratic People's Party in Württemberg-Baden; his deputy was Franz Blücher of the FDP in the British zone. The place for the party's foundation was chosen deliberately: the "Heppenheim Assembly" was held at the Hotel Halber Mond on 10 October 1847, a meeting of moderate liberals who were preparing for what would be, within a few months, the German revolutions of 1848–1849.
The FDP was founded on 11 December 1948 through the merger of nine regional liberal parties formed in 1945 from the remnants of the pre-1933 German People's Party (DVP) and the German Democratic Party (DDP), which had been active in the Weimar Republic.
In the first elections to the Bundestag on 14 August 1949, the FDP won a vote share of 11.9 percent (with 12 direct mandates, particularly in Baden-Württemberg and Hesse), and thus obtained 52 of 402 seats. It formed a common Bundestag group with the hard-right Deutsche Partei (DP). In September of the same year the FDP chairman Theodor Heuss was elected the first President of the Federal Republic of Germany; at his re-election in 1954 he received 871 of 1018 votes (85.6 percent) in the Federal Assembly, the best result of any President to date. On the proposal of the new President, Adenauer was elected the first Chancellor by an extremely narrow majority. The FDP participated with the CDU/CSU and the German Party in Adenauer's coalition cabinet, holding three ministries: Franz Blücher (Vice-Chancellor), Thomas Dehler (Justice), and Eberhard Wildermuth (Housing).
On the most important economic, social and German national issues, the FDP agreed with its coalition partners, the CDU/CSU. However, the FDP offered middle-class voters a secular party that rejected religious schools and accused the other parties of clericalism. The FDP also presented itself as a consistent advocate of the market economy, while the CDU was then still nominally committed to the Ahlen Programme, which allowed for a third way between capitalism and socialism. Ludwig Erhard, the "father" of the social market economy, found his followers in the early years of the Federal Republic in the CDU/CSU rather than in the FDP.
The FDP won Hesse's 1950 state election with 31.8 percent, the best result in its history, through appealing to East Germans displaced by the war by including them on their ticket.
Up to the 1950s, several of the FDP's regional organizations were to the right of the CDU/CSU, which initially had ideas of some sort of Christian socialism, and even former office-holders of the Third Reich were courted with nationalist values. At the end of 1950 the FDP voted in parliament against the de-nazification process introduced by the CDU and SPD. At its party conference in Munich in 1951 it demanded the release of all "so-called war criminals" and welcomed the establishment of the "Association of German Soldiers" for former Wehrmacht and SS members, in order to advance the integration of formerly Nazi-affiliated forces into democracy. US intelligence officials regarded the FDP, along with the German Party, as part of the "extremist" bloc in West Germany.
Similarly, a de-Nazification Act could be passed in the Bundestag at the end of 1950 only because the opposition SPD supported the motion along with the governing CDU/CSU; the FDP, though part of the government, voted against the law together with the hard-right DP and the openly neo-Nazi German Reich Party (DRP).
The 1953 Naumann-Affair, named after Werner Naumann, identified old Nazis trying to infiltrate the party, which had many right-wing and nationalist members in Hesse, North Rhine-Westphalia and Lower Saxony. After the British occupation authorities had arrested seven prominent members of the Naumann circle, the FDP federal board installed a commission of inquiry, chaired by Thomas Dehler, which particularly sharply criticized the situation in the North Rhine-Westphalian FDP. In the following years, the right wing lost power, and the extreme right increasingly sought areas of activity outside the FDP. In the 1953 federal election, the FDP received 9.5 percent of the party votes, 10.8 percent of the primary vote (with 14 direct mandates, particularly in Hamburg, Lower Saxony, Hesse, Württemberg and Bavaria) and 48 of 487 seats.
In the second term of the Bundestag, the South German Liberal Democrats gained influence in the party. Thomas Dehler, a representative of a more social-liberal course, took over as party and parliamentary group leader. The former Minister of Justice, who had suffered persecution by the Nazis in 1933, was known for his sharp rhetoric. In general, the various regional associations acted independently. After the FDP left its coalition with the CDU in North Rhine-Westphalia in early 1956 and formed a new state government there with the SPD and the Centre Party, a total of 16 members of the Bundestag, including the four FDP federal ministers, left the party and founded the short-lived Free People's Party, which took the FDP's place in the federal government until the end of the legislative period. The FDP thus went into opposition for the first time.
The FDP was the only one of the smaller post-war parties to survive, despite many problems. In the 1957 federal election it still reached 7.7 percent of the vote and won what would remain its last direct mandate until 1990, holding 41 of 497 seats in the Bundestag. However, it remained in opposition because the Union won an absolute majority. The FDP also called for a nuclear-free zone in Central Europe.
Even before the election, Dehler had been replaced as party chairman: at the federal party congress in Berlin at the end of January 1957, Reinhold Maier succeeded him. After the election, Dehler's role as parliamentary group chairman was taken over by Erich Mende, who belonged to the party's national wing; Mende later also became party chairman.
In the 1961 federal election, the FDP achieved 12.8 percent nationwide, its best result until then, and entered a coalition with the CDU again. Although it had committed itself before the election not to sit in a government together with Adenauer, the FDP accepted him as Chancellor once more, under the proviso that he would step down after two years. These events led to the FDP being nicknamed the Umfallerpartei ("pushover party").
During the Spiegel affair, the FDP withdrew its ministers from the federal government. Although the coalition under Adenauer was renewed in 1962, it was on the condition that he resign in October 1963, which he did; Ludwig Erhard became the new Chancellor. For Erich Mende this was in turn the occasion to enter the cabinet: he took over the rather unimportant Federal Ministry for All-German Affairs.
In the 1965 federal elections the FDP gained 9.5 percent. In 1966 the coalition with the CDU broke apart over the question of tax increases, and a grand coalition between the CDU and the SPD followed. In opposition the FDP also began a change of course: its previous foreign policy and its attitude towards the eastern territories came under discussion. The FDP's opposition leader in the Bundestag was Knut von Kühlmann-Stumm. In 1968 the delegates elected Walter Scheel as the new chairman; a European-oriented liberal who nevertheless came from the national-liberal camp, he led the new centre of the party together with Willi Weyer and Hans-Dietrich Genscher. This centre strove to make the FDP a possible coalition partner for both major parties. In the process, the Liberals' reorientation in policy towards East Germany and Eastern Europe brought them closer to the SPD in particular.
After the 1969 election, a social-liberal coalition with the SPD under Chancellor Willy Brandt took office on 21 October 1969. It was Walter Scheel who initiated the reversal in foreign policy: despite a very small majority, he and Brandt pushed through the controversial new Ostpolitik. Within the FDP this policy was quite controversial, especially since the party's entry into the federal government was followed by defeats in the state elections in North Rhine-Westphalia, Lower Saxony and the Saarland on 14 June 1970; in Hanover and Saarbrücken the party dropped out of the state parliaments.
After the federal party congress in Bonn only a week later had backed the policy of the party leadership and confirmed Scheel in office, Siegfried Zoglmann, a member of the party's right wing, founded on 11 July 1970 a "non-partisan" organization called the National-Liberal Action at the Hohensyburg, with the goal of ending the party's left-liberal course and toppling Scheel. This, however, did not succeed. In October 1970 Zoglmann supported an opposition motion of censure against Finance Minister Alexander Möller; Erich Mende and Heinz Starke did the same. A little later all three declared their withdrawal from the FDP; Mende and Starke joined the CDU, while Zoglmann later founded the German Union (Deutsche Union), which remained a splinter party.
The change in foreign and social policy was given a theoretical basis in 1971 by the Freiburg Theses, which were sold more than 100,000 times as a Rowohlt paperback and in which the FDP committed itself to "social liberalism" and social reforms. Walter Scheel was at first foreign minister and vice chancellor; in 1974 he became the second liberal Federal President and thereby cleared the way within the party for the previous interior minister, Hans-Dietrich Genscher.
From 1969 to 1974 the FDP supported the SPD Chancellor Willy Brandt, who was succeeded by Helmut Schmidt. By the end of the 1970s there no longer seemed to be enough common ground between the FDP and the SPD to form a new coalition, but the nomination of Franz Josef Strauss as the CDU/CSU's chancellor candidate in 1980 pushed the two parties to run together again. The FDP's policies, however, began to drift apart from the SPD's, especially when it came to the economy. Within the SPD, there was strong grassroots opposition to Chancellor Helmut Schmidt's policies on the NATO Double-Track Decision. Within the FDP, however, the conflicts and tensions grew ever greater.
In the fall of 1982, the FDP reneged on its coalition agreement with the SPD and instead threw its support behind the CDU/CSU. On 1 October, the FDP and CDU/CSU were able to oust Schmidt and replace him with CDU party chairman Helmut Kohl as the new Chancellor. The coalition change resulted in severe internal conflicts, and the FDP then lost about 20 percent of its 86,500 members, as reflected in the general election in 1983 by a drop from 10.6 percent to 7.0 percent. The members went mostly to the SPD, the Greens and newly formed splinter parties, such as the left-liberal party Liberal Democrats (LD). The exiting members included the former FDP General Secretary and later EU Commissioner Günter Verheugen. At the party convention in November 1982, the Schleswig-Holstein state chairman Uwe Ronneburger challenged Hans-Dietrich Genscher as party chairman. Ronneburger received 186 of the votes—about 40 percent—and was just narrowly defeated by Genscher.
In 1980, FDP members who did not agree with the politics of the FDP's youth organization, the Young Democrats, founded the Young Liberals (JuLis). For a time the JuLis and the Young Democrats operated side by side, until the JuLis became the sole official youth wing of the FDP in 1983. The Young Democrats split from the FDP and remained a party-independent youth organization.
At the time of reunification, the FDP's objective was a special economic zone in the former East Germany, but it could not prevail against the CDU/CSU, which wanted to avoid any loss of votes in the five new federal states in the 1990 general election.
In all federal election campaigns since the 1980s, the party sided with the CDU and CSU, the main conservative parties in Germany. Following German reunification in 1990, the FDP merged with the Association of Free Democrats, a grouping of liberals from East Germany and the Liberal Democratic Party of Germany.
During the political upheavals of 1989–1990 in the GDR, new liberal parties emerged, such as the East German FDP and the German Forum Party. Together with the Liberal Democratic Party – which had previously acted as a bloc party at the side of the SED and had, with Manfred Gerlach, also provided the last Chairman of the Council of State of the GDR – they formed the Alliance of Free Democrats (BFD). Within the FDP, the following years saw considerable internal discussion about how to deal with the former bloc party. Even before German reunification, the West German FDP united with the other parties at a joint congress in Hanover to form the first all-German party. Both mergers brought the FDP a great, albeit short-lived, increase in membership. In the first all-German Bundestag elections, the CDU/CSU/FDP centre-right coalition was confirmed in office; the FDP received 11.0 percent of the valid votes (79 seats) and won, in Genscher's native city of Halle (Saale), its first direct mandate since 1957.
During the 1990s, the FDP won between 6.2 and 11 percent of the vote in Bundestag elections. Throughout this period it participated in the federal government as the junior partner of Chancellor Helmut Kohl's CDU.
In 1998, the CDU/CSU-FDP coalition lost the federal election, ending the FDP's nearly three decades in government. In its 2002 campaign the FDP made an exception to its policy of siding with the CDU/CSU, adopting a position of equidistance between the CDU and SPD. The party remained in opposition from 1998 until 2009, when it became part of a new centre-right coalition government.
In the 2005 general election the party won 9.8 percent of the vote and 61 federal deputies, an unpredicted improvement from prior opinion polls. It is believed that this was partly due to tactical voting by CDU and Christian Social Union of Bavaria (CSU) alliance supporters who hoped for stronger market-oriented economic reforms than the CDU/CSU alliance called for. However, because the CDU did worse than predicted, the FDP and the CDU/CSU alliance were unable to form a coalition government. At other times, for example after the 2002 federal election, a coalition between the FDP and CDU/CSU was impossible primarily because of the weak results of the FDP.
The CDU/CSU parties had achieved the third-worst performance in German postwar history with only 35.2 percent of the votes. Therefore, the FDP was unable to form a coalition with its preferred partners, the CDU/CSU parties. As a result, the party was considered as a potential member of two other political coalitions, following the election. One possibility was a partnership between the FDP, the Social Democratic Party of Germany (SPD) and the Alliance 90/The Greens, known as a "traffic light coalition", named after the colors of the three parties. This coalition was ruled out, because the FDP considered the Social Democrats and the Greens insufficiently committed to market-oriented economic reform. The other possibility was a CDU-FDP-Green coalition, known as a "Jamaica coalition" because of the colours of the three parties. This coalition wasn't concluded either, since the Greens ruled out participation in any coalition with the CDU/CSU. Instead, the CDU formed a Grand coalition with the SPD, and the FDP entered the opposition. FDP leader Guido Westerwelle became the unofficial leader of the opposition by virtue of the FDP's position as the largest opposition party in the Bundestag.
In the 2009 European election, the FDP received 11% of the national vote (2,888,084 votes in total) and returned 12 MEPs.
In the September 2009 federal elections, the FDP increased its share of the vote by 4.8 percentage points to 14.6%, an all-time record. This percentage was enough to offset a decline in the CDU/CSU's vote compared to 2005, to create a CDU-FDP centre-right governing coalition in the Bundestag with a 53% majority of seats. On election night, party leader Westerwelle said his party would work to ensure that civil liberties were respected and that Germany got an "equitable tax system and better education opportunities".
The party also made gains in the two state elections held at the same time, acquiring sufficient seats for a CDU-FDP coalition in the northernmost state, Schleswig-Holstein, and gaining enough votes in left-leaning Brandenburg to clear the 5% hurdle to enter that state's parliament.
However, after reaching its best ever election result in 2009, the FDP's support collapsed. The party's policy pledges were put on hold by Merkel as the recession of 2009 unfolded and with the onset of the European debt crisis in 2010. By the end of 2010, the party's support had dropped to as low as 5%. The FDP retained their seats in the state elections in North Rhine-Westphalia, which was held six months after the federal election, but out of the seven state elections that have been held since 2009, the FDP have lost all their seats in five of them due to failing to cross the 5% threshold.
Support for the party further eroded amid infighting and an internal rebellion over euro-area bailouts during the debt crisis.
Westerwelle stepped down as party leader following the 2011 state elections, in which the party was wiped out in Saxony-Anhalt and Rhineland-Palatinate and lost half its seats in Baden-Württemberg. Westerwelle was replaced in May 2011 by Philipp Rösler. The change in leadership failed to revive the FDP's fortunes, however, and in the next series of state elections, the party lost all its seats in Bremen, Mecklenburg-Vorpommern, and Berlin. In Berlin, the party lost nearly 75% of the support they had had in the previous election.
In March 2012, the FDP lost all their state-level representation in the 2012 Saarland state election. However, this was offset by the Schleswig-Holstein state elections, when they achieved 8% of the vote, which was a severe loss of seats but still over the 5% threshold. In the snap elections in North Rhine-Westphalia a week later, the FDP not only crossed the electoral threshold, but also increased its share of the votes to 2 percentage points higher than in the previous state election. This was attributed to the local leadership of Christian Lindner.
The FDP last won a directly elected seat in 1990, in Halle—the only time it has won a directly elected seat since 1957. The party's inability to win directly elected seats came back to haunt it at the 2013 election, in which it came up just short of the 5% threshold. With no directly elected seats, the FDP was shut out of the Bundestag for the first time since 1949. After the previous chairman Philipp Rösler then resigned, Christian Lindner took over the leadership of the party.
In the 2014 European parliament elections, the FDP received 3.4% of the national vote (986,253 votes in total) and returned 3 MEPs. In the 2014 Brandenburg state election the party experienced a 5.8% down-swing and lost all their representatives in the Brandenburg state parliament. In the 2014 Saxony state election, the party experienced a 5.2% down-swing, again losing all of its seats. In the 2014 Thuringian state election a similar phenomenon was repeated with the party falling below the 5% threshold following a 5.1% drop in popular vote.
The party managed to enter parliament in the 2015 Bremen state election with the party receiving 6.5% of the vote and gaining 6 seats. However, it failed to get into government as a coalition between the Social Democrats and the Greens was created. In the 2016 Mecklenburg-Vorpommern state election the party failed to get into parliament despite increasing its vote share by 0.3%. The party did manage to get into parliament in Baden-Württemberg, gaining 3% of the vote and a total of 12 seats. This represents a five-seat improvement over their previous results. In the 2016 Berlin state election the party gained 4.9% of the vote and 12 seats but still failed to get into government. A red-red-green coalition was instead formed relegating the FDP to the opposition. In the 2016 Rhineland-Palatinate state election, the party managed to enter parliament receiving 6.2% of the vote and 7 seats. It also managed to enter government under a traffic light coalition. In 2016 Saxony-Anhalt state election the party narrowly missed the 5% threshold, receiving 4.9% of the vote and therefore receiving zero seats despite a 1% swing in their favour.
The 2017 North Rhine-Westphalia state election was widely considered a test of the party's future as their chairman Christian Lindner was also leading the party in that state. The party experienced a 4% swing in its favour gaining 6 seats and entering into a coalition with the CDU with a bare majority. In the 2017 Saarland state election the party again failed to gain any seats despite a 1% swing in their favour. The party gained 3 seats and increased its vote share by 3.2% in the 2017 Schleswig-Holstein state election. This success was often credited to their state chairman Wolfgang Kubicki. They also managed to re-enter the government under a Jamaica coalition.
In the 2017 federal election the party scored 10.7% of votes and re-entered the Bundestag, winning 80 seats. After the election, a Jamaica coalition was considered between the CDU, Greens, and FDP. However, FDP chief Christian Lindner walked out of the coalition talks due to a disagreement over European migration policy, saying "It is better not to govern than to govern badly." As a result, the CDU/CSU formed another grand coalition with the SPD.
The FDP won 5.4% and 5 seats in the 2019 European election.
In the October 2019 Thuringian state election, the FDP won seats in the Landtag of Thuringia for the first time since 2009. It exceeded the 5% threshold by just 5 votes. In February 2020, the FDP's Thomas Kemmerich was elected Minister-President of Thuringia by the Landtag with the likely support of the CDU and AfD, becoming the second member of the FDP to serve as head of government in a German state. This was also the first time a head of government had been elected with the support of AfD. Under intense pressure from state and federal politicians, Kemmerich resigned the following day, stating he would seek new elections. The next month, he was replaced by Bodo Ramelow of The Left; the FDP did not run a candidate in the second vote for Minister-President.
In 2021, the FDP returned to the Saxony-Anhalt state parliament after five years of absence. It had similar success in Baden-Württemberg and Mecklenburg-Vorpommern, but faced setbacks in Berlin and Rhineland-Palatinate.
In the September 2021 federal election, the FDP saw its vote share and number of seats grow, to 11.5% and 92 seats respectively. As a result of the defeat of the CDU/CSU under Armin Laschet, the SPD, Greens, and FDP entered talks to form a traffic light coalition. The agreement, finalised on 24 November, gave the FDP four federal ministries in the Scholz cabinet (Finance; Justice; Digital and Transport; and Education and Research).
Throughout 2022, the FDP polled poorly in national opinion surveys and performed poorly in state parliament elections. In March, the FDP did not win any seats in Saarland. In May it lost over half its seats in North Rhine-Westphalia and Schleswig-Holstein. In October, the FDP lost all 11 of its seats in Lower Saxony. It also lost all 12 of its seats in the 2023 Berlin repeat state election.
In the 2023 Bavarian state election, in which Martin Hagen led the party, the FDP lost all 11 of its seats.
The FDP has been described as liberal, conservative-liberal, classical-liberal, and liberal-conservative. The FDP's political position has variously been described as centrist, centre-right, and right-wing.
The FDP is a predominantly classical-liberal inspired party, both in the sense of supporting free market economic policies and in the sense of policies emphasizing the minimization of government interference in individual affairs. The party has also been described by various media sources as neoliberal. Scholars of political science have historically identified the FDP as closer to the CDU/CSU bloc than to the Social Democratic Party of Germany (SPD) on economic issues but closer to the SPD and the Greens on issues such as civil liberties, education, defense, and foreign policy. The FDP itself has oriented itself towards a centrist position between the CDU and the SPD.
The party is a traditional supporter of ordoliberalism, having been influenced by the economic theories of Wilhelm Röpke and Alexander Rüstow. Otto Graf Lambsdorff, who served as Federal Minister of Economics, is a historical FDP grandee who was a proponent of ordoliberalism. In 1971 during its federal social-liberal coalition with the SPD, the FDP published the Freiburger Theses programme, heralding an ideological move towards reformism and social liberalism, and support for environmental protection policy. However, the party's 1977 Kiel Theses and 1985 Liberal Manifesto returned the FDP towards its traditional free-market, ordoliberal approach. Historical members of the party's social-liberal wing included Gerhart Baum and Werner Maihofer.
During the 2017 federal election, the party called for Germany to adopt an immigration channel using a Canada-style points-based immigration system; spend up to 3% of GDP on defense and international security; phase out the solidarity surcharge tax (which was first levied in 1991 to pay for the costs of absorbing East Germany after German reunification); cut taxes by 30 billion euro (twice the amount of the tax cut proposed by the CDU); and improve road infrastructure by spending 2 billion euro annually for each of the next two decades, to be funded by selling government stakes in Deutsche Bahn, Deutsche Telekom, and Deutsche Post. The FDP also called for the improvement of Germany's digital infrastructure, the establishment of a Ministry of Digital Affairs, and greater investment in education. The party also supports allowing dual citizenship (in contrast to the CDU/CSU, which opposes it) but also supports requiring third-generation immigrants to select a single nationality.
The FDP supports the legalization of cannabis in Germany and opposes proposals to heighten Internet surveillance. The FDP supports same-sex marriage in Germany. The FDP supports legalisation of altruistic surrogacy.
The FDP has mixed views on European integration. In its 2009 campaign manifesto, the FDP pledged support for ratification of the Lisbon Treaty as well as EU reforms aimed at enhancing transparency and democratic responsiveness, reducing bureaucracy, establishing stringent curbs on the EU budget, and fully liberalizing the Single Market. At its January 2019 congress ahead of the 2019 European Parliament election, FDP's manifesto called for further EU reforms, including reducing the number of European Commissioners to 18 from the current 28, abolishing the European Economic and Social Committee, and ending the European Parliament's "traveling circus" between Brussels and Strasbourg. Vice chairwoman and Deputy Leader Nicola Beer stated: "We want both more and less Europe."
In the 1940s and 1950s, the FDP was the only German party strongly in favour of a market economy, while the CDU/CSU was still adhering to a "third way" between capitalism and socialism. At the time, the FDP wanted former Nazis to be reintegrated into society and demanded a release of Nazi war criminals.
The party's membership has historically been largely male; in 1995, less than one-third of the party's members were women, and in the 1980s women made up less than one-tenth of the party's national executive committee. By the 1990s, the percentage of women on the FDP's national executive committee rose to 20%.
The party tends to draw its support from professionals and self-employed Germans. It lacks consistent support from a voting bloc, such as the trade union membership that supports the SPD or the church membership that supports the CDU/CSU, and thus has historically only garnered a small group of Stammwähler (core voters) who consistently vote for the party.
In the 2021 elections, the FDP was the second-most popular party among voters under age 30; among this demographic, the Greens won 22% of the vote, the FDP 19%, the SPD 17%, the CDU/CSU 11%, Die Linke 8%, and the AfD 8%. According to Deutsche Welle in 2021, voters for both the FDP and the Greens are similar in being younger, politically centrist professionals living in cities, unlike left working-class voters and right Christian voters.
In the European Parliament the Free Democratic Party sits in the Renew Europe group with five MEPs.
In the European Committee of the Regions, the Free Democratic Party sits in the Renew Europe CoR group, with one full member for the 2020–2025 mandate.
Below are charts of the results that the FDP has secured in each election to the federal Bundestag. Timelines showing the number of seats and percentage of party list votes won are on the right. | [
{
"paragraph_id": 0,
"text": "The Free Democratic Party (German: Freie Demokratische Partei; FDP, German pronunciation: [ɛfdeːˈpeː] ) is a liberal political party in Germany.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The FDP was founded in 1948 by members of former liberal political parties which existed in Germany before World War II, namely the German Democratic Party and the German People's Party. For most of the second half of the 20th century, the FDP held the balance of power in the Bundestag. It has been a junior coalition partner to both the CDU/CSU (1949–1956, 1961–1966, 1982–1998 and 2009–2013) and Social Democratic Party (SPD) (1969–1982, 2021–present). In the 2013 federal election, the FDP failed to win any directly elected seats in the Bundestag and came up short of the 5 percent threshold to qualify for list representation, being left without representation in the Bundestag for the first time in its history. In the 2017 federal election, the FDP regained its representation in the Bundestag, receiving 10.6% of the vote. After the 2021 federal election the FDP became part of governing Scholz cabinet in coalition with the Social Democratic Party and the Greens.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Since the 1980s, the party, consistently with its ordoliberal tradition, has pushed economic liberalism and has aligned itself closely to the promotion of free markets and privatization, and is aligned to the centre or centre-right of the political spectrum. The FDP is a member of the Liberal International, the Alliance of Liberals and Democrats for Europe and Renew Europe.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The history of liberal parties in Germany dates back to 1861, when the German Progress Party (DFP) was founded, being the first political party in the modern sense in Germany. From the establishment of the National Liberal Party in 1867 until the demise of the Weimar Republic in 1933, the liberal-democratic camp was divided into a \"national-liberal\" and a \"left-liberal\" line of tradition. After 1918 the national-liberal strain was represented by the German People's Party (DVP), the left-liberal one by the German Democratic Party (DDP, which merged into the German State Party in 1930). Both parties played an important role in government during the Weimar Republic era, but successively lost votes during the rise of the Nazi Party beginning in the late-1920s. After the Nazi seizure of power, both liberal parties agreed to the Enabling Act of 1933 and subsequently dissolved themselves. During the 12 years of Hitler's rule, some former liberals collaborated with the Nazis (e.g. economy minister Hjalmar Schacht), while others resisted actively against Nazism, with some Liberal leaning members and former members of the military joining up with Henning von Tresckow (e.g. the Solf Circle).",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Soon after World War II, the Soviet Union pushed for the creation of licensed \"anti-fascist\" parties in its occupation zone in East Germany. In July 1945, former DDP politicians Wilhelm Külz, Eugen Schiffer and Waldemar Koch called for the establishment of a pan-German liberal party. Their Liberal-Democratic Party (LDP) was soon licensed by the Soviet Military Administration in Germany, under the condition that the new party joined the pro-Soviet \"Democratic Bloc\".",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In September 1945, citizens in Hamburg—including the anti-Nazi resistance circle \"Association Free Hamburg\"—established the Party of Free Democrats (PFD) as a bourgeois left-wing party and the first liberal Party in the Western occupation zones. The German Democratic Party was revived in some states of the Western occupation zones (in the Southwestern states of Württemberg-Baden and Württemberg-Hohenzollern under the name of Democratic People's Party).",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Many former members of DDP and DVP however agreed to finally overcome the traditional split of German liberalism into a national-liberal and a left-liberal branch, aiming for the creation of a united liberal party. In October 1945 a liberal coalition party was founded in the state of Bremen under the name of Bremen Democratic People's Party. In January 1946, liberal state parties of the British occupation zone merged into the Free Democratic Party of the British Zone (FDP). A similar state party in Hesse, called the Liberal Democratic Party, was licensed by the U.S. military government in January 1946. In the state of Bavaria, a Free Democratic Party was founded in May 1946.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the first post-war state elections in 1946, liberal parties performed well in Württemberg-Baden (16.8%), Bremen (18.3%), Hamburg (18.2%) and Greater Berlin (still undivided; 9.3%). The LDP was especially strong in the October 1946 state elections of the Soviet zone—the last free parliamentary election in East Germany—obtaining an average of 24.6% (highest in Saxony-Anhalt, 29.9%, and Thuringia, 28.5%), thwarting an absolute majority of the Socialist Unity Party of Germany (SED) that was favoured by the Soviet occupation power. This disappointment to the communists however led to a change of electoral laws in the Soviet zone, cutting the autonomy of non-socialist parties including the LDP and forcing it to join the SED-dominated National Front, making it a dependent \"bloc party\".",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Democratic Party of Germany (DPD) was established in Rothenburg ob der Tauber on 17 March 1947 as a pan-German party of liberals from all four occupation zones. Its leaders were Theodor Heuss (representing the DVP of Württemberg-Baden in the American zone) and Wilhelm Külz (representing the LDP of the Soviet zone). However, the project failed in January 1948 as a result of disputes over Külz's pro-Soviet direction.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The Free Democratic Party was established on 11–12 December 1948 in Heppenheim, in Hesse, as an association of all 13 liberal state parties in the three Western zones of occupation. The proposed name, Liberal Democratic Party, was rejected by the delegates, who voted 64 to 25 in favour of the name Free Democratic Party (FDP).",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The party's first chairman was Theodor Heuss, a member of the Democratic People's Party in Württemberg-Baden; his deputy was Franz Blücher of the FDP in the British zone. The place for the party's foundation was chosen deliberately: the \"Heppenheim Assembly\" was held at the Hotel Halber Mond on 10 October 1847, a meeting of moderate liberals who were preparing for what would be, within a few months, the German revolutions of 1848–1849.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The FDP was founded on 11 December 1948 through the merger of nine regional liberal parties formed in 1945 from the remnants of the pre-1933 German People's Party (DVP) and the German Democratic Party (DDP), which had been active in the Weimar Republic.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In the first elections to the Bundestag on 14 August 1949, the FDP won a vote share of 11.9 percent (with 12 direct mandates, particularly in Baden-Württemberg and Hesse), and thus obtained 52 of 402 seats. It formed a common Bundestag group with the hard-right Deutsche Partei (DP). In September of the same year the FDP chairman Theodor Heuss was elected the first President of the Federal Republic of Germany. In his 1954 re-election, he received the best election result to date of a President with 871 of 1018 votes (85.6 percent) of the Federal Assembly. Adenauer was also elected on the proposal of the new German President with an extremely narrow majority as the first Chancellor. The FDP participated with the CDU/CSU and the German Party in Adenauer's coalition cabinet; they had three ministers: Franz Blücher (Vice-Chancellor), Thomas Dehler (justice), and Eberhard Wildermuth (housing).",
"title": "History"
},
{
"paragraph_id": 13,
"text": "On the most important economic, social and German national issues, the FDP agreed with their coalition partners, the CDU/CSU. However, the FDP offered to middle-class voters a secular party that refused the religious schools and accused the opposition parties of clericalization. The FDP said they were known also as a consistent representative of the market economy, while the CDU was then dominated nominally from the Ahlen Programme, which allowed a Third Way between capitalism and socialism. Ludwig Erhard, the \"father\" of the social market economy, had his followers in the early years of the Federal Republic in the CDU/CSU rather than in the FDP.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The FDP won Hesse's 1950 state election with 31.8 percent, the best result in its history, through appealing to East Germans displaced by the war by including them on their ticket.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Up to the 1950s, several of the FDP's regional organizations were to the right of the CDU/CSU, which initially had ideas of some sort of Christian socialism, and even former office-holders of the Third Reich were courted with nationalist values. The FDP voted in parliament at the end of 1950 against the CDU- and SPD-introduced de-nazification process. At their party conference in Munich in 1951 they demanded the release of all \"so-called war criminals\" and welcomed the establishment of the \"Association of German soldiers\" of former Wehrmacht and SS members to advance the integration of the Nazi forces in democracy. The FDP members were seen as part of the \"extremist\" block along with the German Party in West Germany by the US intelligence officials.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Similarly, a de-Nazification Act could only be passed at the end of 1950 in the Bundestag because the opposition SPD supported the motion along with the governing CDU/CSU; the governing FDP voted along with the hard-right DP and the openly neo-Nazi German Reich Party (DRP) against the law against Nazis.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The 1953 Naumann-Affair, named after Werner Naumann, identified old Nazis trying to infiltrate the party, which had many right-wing and nationalist members in Hesse, North Rhine-Westphalia and Lower Saxony. After the British occupation authorities had arrested seven prominent members of the Naumann circle, the FDP federal board installed a commission of inquiry, chaired by Thomas Dehler, which particularly sharply criticized the situation in the North Rhine-Westphalian FDP. In the following years, the right wing lost power, and the extreme right increasingly sought areas of activity outside the FDP. In the 1953 federal election, the FDP received 9.5 percent of the party votes, 10.8 percent of the primary vote (with 14 direct mandates, particularly in Hamburg, Lower Saxony, Hesse, Württemberg and Bavaria) and 48 of 487 seats.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In the second term of the Bundestag, the South German Liberal Democrats gained influence in the party. Thomas Dehler, a representative of a more social-liberal course took over as party and parliamentary leader. The former Minister of Justice Dehler, who in 1933 suffered persecution by the Nazis, was known for his rhetorical focus. Generally the various regional associations were independent. After the FDP had left in early 1956, the coalition with the CDU in North Rhine-Westphalia and made with SPD and centre a new state government, were a total of 16 members of parliament, including the four federal ministers from the FDP and founded the short-lived Free People's Party, which then up was involved to the end of the legislature instead of FDP in the Federal Government. The FDP first took it to the opposition.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Only one of the smaller post-war parties, the FDP survived despite many problems. In 1957 federal elections they still reached 7.7 percent of the vote to 1990 and their last direct mandate with which they had held 41 of 497 seats in the Bundestag. However, they still remained in opposition because the Union won an absolute majority. The FDP also called for a nuclear-free zone in Central Europe.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Even before the election Dehler was assigned as party chairman. At the federal party in Berlin at the end January 1957 relieved him Reinhold Maier. Dehler's role as Group Chairman took over after the election of the national set very Erich Mende. Mende was also chairman of the party.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In the 1961 federal election, the FDP achieved 12.8 percent nationwide, the best result until then, and the FDP entered a coalition with the CDU again. Although it was committed before the election to continuing to sit in any case in a government together with Adenauer, Chancellor Adenauer was again, however, to withdraw under the proviso, after two years. These events led to the FDP being nicknamed the Umfallerpartei (\"pushover party\").",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In the Spiegel affair, the FDP withdrew their ministers from the federal government. Although the coalition was renewed again under Adenauer in 1962, the FDP withdrew again on the condition in October 1963. This occurred even under the new Chancellor, Ludwig Erhard. This was for Erich Mende turn the occasion to go into the cabinet: he took the rather unimportant Federal Ministry for All-German Affairs.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In the 1965 federal elections the FDP gained 9.5 percent. The coalition with the CDU in 1966 broke on the subject of tax increases and it was followed by a grand coalition between the CDU and the SPD. The opposition also pioneered a course change: the former foreign policy and the attitude to the eastern territories were discussed. The opposition leader for the FDP in the Bundestag was Knut von Kühlmann-Stumm. The new chairman elected delegates in 1968 Walter Scheel, a European-oriented liberals, although it came from the national liberal camp, but with Willi Weyer and Hans-Dietrich Genscher led the new center of the party. This center strove to make the FDP coalition support both major parties. Here, the Liberals approached to by their reorientation in East Germany and politics especially of the SPD.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "On 21 October 1969 began the period after the election of a Social-Liberal coalition with the SPD and the German Chancellor Willy Brandt. Walter Scheel was he who initiated the foreign policy reversal. Despite a very small majority he and Willy Brandt sat by the controversial New Ostpolitik. This policy was within the FDP quite controversial, especially since after the entry into the Federal Government defeats in state elections in North Rhine-Westphalia, Lower Saxony and Saarland on 14 June 1970 followed. In Hanover and Saarbrücken, the party left the parliament.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "After the federal party congress in Bonn, just a week later supported the policy of the party leadership and Scheel had confirmed in office, founded by Siegfried party rights Zoglmann 11 July 1970 a \"non-partisan\" organization called the National-Liberal action on the Hohensyburgstraße—to fall with the goal of ending the left-liberal course of the party and Scheel. However, this was not. Zoglmann supported in October 1970 a disapproval resolution of opposition to Treasury Secretary Alexander Möller, Erich Mende, Heinz Starke, and did the same. A little later all three declared their withdrawal from the FDP; Mende and Strong joined the CDU, Zoglmann later founded the German Union (Deutsche Union), which remained a splinter party.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The foreign policy and the socio-political changes were made in 1971 by the Freiburg Thesis, which were as Rowohlt Paperback sold more than 100,000 times, on a theoretical basis, the FDP is committed to \"social liberalism\" and social reforms. Walter Scheel was first foreign minister and vice chancellor, 1974, he was then second-liberal President and paving the way for inner-party the previous interior minister Hans-Dietrich Genscher free.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "From 1969 to 1974 the FDP supported the SPD Chancellor Willy Brandt, who was succeeded by Helmut Schmidt. Already by the end of the 70s there did not seem to be enough similarities between the FDP and the SPD to form a new coalition, but the CDU/CSU chancellor candidate of Franz Josef Strauss in 1980 pushed the parties to run together again. The FDP's policies, however, began to drift apart from the SPD's, especially when it came to the economy. Within the SPD, there was strong grassroots opposition to Chancellor Helmut Schmidt's policies on the NATO Double-Track Decision. However, within the FDP, the conflicts and contrasts were always greater.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In the fall of 1982, the FDP reneged on its coalition agreement with the SPD and instead threw its support behind the CDU/CSU. On 1 October, the FDP and CDU/CSU were able to oust Schmidt and replace him with CDU party chairman Helmut Kohl as the new Chancellor. The coalition change resulted in severe internal conflicts, and the FDP then lost about 20 percent of its 86,500 members, as reflected in the general election in 1983 by a drop from 10.6 percent to 7.0 percent. The members went mostly to the SPD, the Greens and newly formed splinter parties, such as the left-liberal party Liberal Democrats (LD). The exiting members included the former FDP General Secretary and later EU Commissioner Günter Verheugen. At the party convention in November 1982, the Schleswig-Holstein state chairman Uwe Ronneburger challenged Hans-Dietrich Genscher as party chairman. Ronneburger received 186 of the votes—about 40 percent—and was just narrowly defeated by Genscher.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "in 1980, FDP members who did not agree with the politics of the FDP youth organization Young Democrats founded the Young Liberals (JuLis). For a time JuLis and the Young Democrats operated side by side, until the JuLis became the sole official youth wing of the FDP in 1983. The Young Democrats split from the FDP and were left as a party-independent youth organization.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "At the time of reunification, the FDP's objective was a special economic zone in the former East Germany, but could not prevail against the CDU/CSU, as this would prevent any loss of votes in the five new federal states in the general election in 1990.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In all federal election campaigns since the 1980s, the party sided with the CDU and CSU, the main conservative parties in Germany. Following German reunification in 1990, the FDP merged with the Association of Free Democrats, a grouping of liberals from East Germany and the Liberal Democratic Party of Germany.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "During the political upheavals of 1989/1990 in the GDR new liberal parties emerged, like the FDP East Germany or the German Forum Party. They formed the Liberal Democratic Party, who had previously acted as a bloc party on the side of the SED and with Manfred Gerlach also the last Council of State of the GDR presented, the Alliance of Free Democrats (BFD). Within the FDP came in the following years to considerable internal discussions about dealing with the former bloc party. Even before the reunification of Germany united on a joint congress in Hanover, the West German FDP united with the other parties to form the first all-German party. Both party factions brought the FDP a great, albeit short-lived, increase in membership. In the first all-German Bundestag elections, the CDU/CSU/FDP centre-right coalition was confirmed, the FDP received 11.0 percent of the valid votes (79 seats) and won in Genschers city of birth Halle (Saale) the first direct mandate since 1957.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "During the 1990s, the FDP won between 6.2 and 11 percent of the vote in Bundestag elections. It last participated in the federal government by representing the junior partner in the government of Chancellor Helmut Kohl of the CDU.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In 1998, the CDU/CSU-FDP coalition lost the federal election, which ended the FDP's nearly three decade reign in government. In its 2002 campaign the FDP made an exception to its party policy of siding with the CDU/CSU when it adopted equidistance to the CDU and SPD. From 1998 until 2009 the FDP remained in the opposition until it became part of a new centre-right coalition government.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "In the 2005 general election the party won 9.8 percent of the vote and 61 federal deputies, an unpredicted improvement from prior opinion polls. It is believed that this was partly due to tactical voting by CDU and Christian Social Union of Bavaria (CSU) alliance supporters who hoped for stronger market-oriented economic reforms than the CDU/CSU alliance called for. However, because the CDU did worse than predicted, the FDP and the CDU/CSU alliance were unable to form a coalition government. At other times, for example after the 2002 federal election, a coalition between the FDP and CDU/CSU was impossible primarily because of the weak results of the FDP.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "The CDU/CSU parties had achieved the third-worst performance in German postwar history with only 35.2 percent of the votes. Therefore, the FDP was unable to form a coalition with its preferred partners, the CDU/CSU parties. As a result, the party was considered as a potential member of two other political coalitions, following the election. One possibility was a partnership between the FDP, the Social Democratic Party of Germany (SPD) and the Alliance 90/The Greens, known as a \"traffic light coalition\", named after the colors of the three parties. This coalition was ruled out, because the FDP considered the Social Democrats and the Greens insufficiently committed to market-oriented economic reform. The other possibility was a CDU-FDP-Green coalition, known as a \"Jamaica coalition\" because of the colours of the three parties. This coalition wasn't concluded either, since the Greens ruled out participation in any coalition with the CDU/CSU. Instead, the CDU formed a Grand coalition with the SPD, and the FDP entered the opposition. FDP leader Guido Westerwelle became the unofficial leader of the opposition by virtue of the FDP's position as the largest opposition party in the Bundestag.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "In the 2009 European election, the FDP received 11% of the national vote (2,888,084 votes in total) and returned 12 MEPs.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "In the September 2009 federal elections, the FDP increased its share of the vote by 4.8 percentage points to 14.6%, an all-time record. This percentage was enough to offset a decline in the CDU/CSU's vote compared to 2005, to create a CDU-FDP centre-right governing coalition in the Bundestag with a 53% majority of seats. On election night, party leader Westerwelle said his party would work to ensure that civil liberties were respected and that Germany got an \"equitable tax system and better education opportunities\".",
"title": "History"
},
{
"paragraph_id": 39,
"text": "The party also made gains in the two state elections held at the same time, acquiring sufficient seats for a CDU-FDP coalition in the northernmost state, Schleswig-Holstein, and gaining enough votes in left-leaning Brandenburg to clear the 5% hurdle to enter that state's parliament.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "However, after reaching its best ever election result in 2009, the FDP's support collapsed. The party's policy pledges were put on hold by Merkel as the recession of 2009 unfolded and with the onset of the European debt crisis in 2010. By the end of 2010, the party's support had dropped to as low as 5%. The FDP retained their seats in the state elections in North Rhine-Westphalia, which was held six months after the federal election, but out of the seven state elections that have been held since 2009, the FDP have lost all their seats in five of them due to failing to cross the 5% threshold.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "Support for the party further eroded amid infighting and an internal rebellion over euro-area bailouts during the debt crisis.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "Westerwelle stepped down as party leader following the 2011 state elections, in which the party was wiped out in Saxony-Anhalt and Rhineland-Palatinate and lost half its seats in Baden-Württemberg. Westerwelle was replaced in May 2011 by Philipp Rösler. The change in leadership failed to revive the FDP's fortunes, however, and in the next series of state elections, the party lost all its seats in Bremen, Mecklenburg-Vorpommern, and Berlin. In Berlin, the party lost nearly 75% of the support they had had in the previous election.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "In March 2012, the FDP lost all their state-level representation in the 2012 Saarland state election. However, this was offset by the Schleswig-Holstein state elections, when they achieved 8% of the vote, which was a severe loss of seats but still over the 5% threshold. In the snap elections in North Rhine-Westphalia a week later, the FDP not only crossed the electoral threshold, but also increased its share of the votes to 2 percentage points higher than in the previous state election. This was attributed to the local leadership of Christian Lindner.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "The FDP last won a directly elected seat in 1990, in Halle—the only time it has won a directly elected seat since 1957. The party's inability to win directly elected seats came back to haunt it at the 2013 election, in which it came up just short of the 5% threshold. With no directly elected seats, the FDP was shut out of the Bundestag for the first time since 1949. After the previous chairman Philipp Rösler then resigned, Christian Lindner took over the leadership of the party.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "In the 2014 European parliament elections, the FDP received 3.4% of the national vote (986,253 votes in total) and returned 3 MEPs. In the 2014 Brandenburg state election the party experienced a 5.8% down-swing and lost all their representatives in the Brandenburg state parliament. In the 2014 Saxony state election, the party experienced a 5.2% down-swing, again losing all of its seats. In the 2014 Thuringian state election a similar phenomenon was repeated with the party falling below the 5% threshold following a 5.1% drop in popular vote.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "The party managed to enter parliament in the 2015 Bremen state election with the party receiving 6.5% of the vote and gaining 6 seats. However, it failed to get into government as a coalition between the Social Democrats and the Greens was created. In the 2016 Mecklenburg-Vorpommern state election the party failed to get into parliament despite increasing its vote share by 0.3%. The party did manage to get into parliament in Baden-Württemberg, gaining 3% of the vote and a total of 12 seats. This represents a five-seat improvement over their previous results. In the 2016 Berlin state election the party gained 4.9% of the vote and 12 seats but still failed to get into government. A red-red-green coalition was instead formed relegating the FDP to the opposition. In the 2016 Rhineland-Palatinate state election, the party managed to enter parliament receiving 6.2% of the vote and 7 seats. It also managed to enter government under a traffic light coalition. In 2016 Saxony-Anhalt state election the party narrowly missed the 5% threshold, receiving 4.9% of the vote and therefore receiving zero seats despite a 1% swing in their favour.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "The 2017 North Rhine-Westphalia state election was widely considered a test of the party's future as their chairman Christian Lindner was also leading the party in that state. The party experienced a 4% swing in its favour gaining 6 seats and entering into a coalition with the CDU with a bare majority. In the 2017 Saarland state election the party again failed to gain any seats despite a 1% swing in their favour. The party gained 3 seats and increased its vote share by 3.2% in the 2017 Schleswig-Holstein state election. This success was often credited to their state chairman Wolfgang Kubicki. They also managed to re-enter the government under a Jamaica coalition.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "In the 2017 federal election the party scored 10.7% of votes and re-entered the Bundestag, winning 80 seats. After the election, a Jamaica coalition was considered between the CDU, Greens, and FDP. However, FDP chief Christian Lindner walked out of the coalition talks due to a disagreement over European migration policy, saying \"It is better not to govern than to govern badly.\" As a result, the CDU/CSU formed another grand coalition with the SPD.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "The FDP won 5.4% and 5 seats in the 2019 European election.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "In the October 2019 Thuringian state election, the FDP won seats in the Landtag of Thuringia for the first time since 2009. It exceeded the 5% threshold by just 5 votes. In February 2020, the FDP's Thomas Kemmerich was elected Minister-President of Thuringia by the Landtag with the likely support of the CDU and AfD, becoming the second member of the FDP to serve as head of government in a German state. This was also the first time a head of government had been elected with the support of AfD. Under intense pressure from state and federal politicians, Kemmerich resigned the following day, stating he would seek new elections. The next month, he was replaced by Bodo Ramelow of The Left; the FDP did not run a candidate in the second vote for Minister-President.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "In 2021, the FDP returned to the Saxony-Anhalt state parliament after five years of absence. They had similar success in Baden-Württemberg and Mecklenburg-Vorpommern, but faced setbacks in Baden-Württemberg, Berlin and Rhineland-Palatinate.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "In the September 2021 federal election, the FDP saw its vote share and number of seats grow, to 11.5% and 92 seats respectively. As a result of the defeat of the CDU/CSU under Armin Laschet, the SPD, Greens, and FDP entered talks to form a traffic light coalition. The agreement was finalised on 24 November, in which the FDP holds four federal ministries in the Scholz cabinet (Finance, Justice, Digital and Transport and Education and Research).",
"title": "History"
},
{
"paragraph_id": 53,
"text": "Throughout 2022, the FDP saw poor approval in national opinion polls. In State Parliament elections they also performed poorly. In March, the FDP didn't win any seats in Saarland. In May they lost over half their seats in North Rhine-Westphalia and Schleswig-Holstein. In October, the FDP lost all 11 of their seats in Lower Saxony. It also lost all 12 seats in the 2023 Berlin repeat state election.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "In the 2023 Bavarian state election, where Martin Hagen is leading the party, all 11 seats were lost.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "The FDP has been described as liberal, conservative-liberal, classical-liberal, and liberal-conservative. The FDP's political position has variously been described as centrist, centre-right, and right-wing.",
"title": "Ideology and policies"
},
{
"paragraph_id": 56,
"text": "The FDP is a predominantly classical-liberal inspired party, both in the sense of supporting free market economic policies and in the sense of policies emphasizing the minimization of government interference in individual affairs. The party has also been described by various media sources as neoliberal. Scholars of political science have historically identified the FDP as closer to the CDU/CSU bloc than to the Social Democratic Party of Germany (SPD) on economic issues but closer to the SPD and the Greens on issues such as civil liberties, education, defense, and foreign policy. The FDP itself has oriented itself towards a centrist position between the CDU and the SPD.",
"title": "Ideology and policies"
},
{
"paragraph_id": 57,
"text": "The party is a traditional supporter of ordoliberalism, having been influenced by the economic theories of Wilhelm Röpke and Alexander Rüstow. Otto Graf Lambsdorff, who served as Federal Minister of Economics, is a historical FDP grandee who was a proponent of ordoliberalism. In 1971 during its federal social-liberal coalition with the SPD, the FDP published the Freiburger Theses programme, heralding an ideological move towards reformism and social liberalism, and support for environmental protection policy. However, the party's 1977 Kiel Theses and 1985 Liberal Manifesto returned the FDP towards its traditional free-market, ordoliberal approach. Historical members of the party's social-liberal wing included Gerhart Baum and Werner Maihofer.",
"title": "Ideology and policies"
},
{
"paragraph_id": 58,
"text": "During the 2017 federal election, the party called for Germany to adopt an immigration channel using a Canada-style points-based immigration system; spend up to 3% of GDP on defense and international security; phase out the solidarity surcharge tax (which was first levied in 1991 to pay for the costs of absorbing East Germany after German reunification); cut taxes by 30 billion euro (twice the amount of the tax cut proposed by the CDU); and improve road infrastructure by spending 2 billion euro annually for each of the next two decades, to be funded by selling government stakes in Deutsche Bahn, Deutsche Telekom, and Deutsche Post. The FDP also called for the improvement of Germany's digital infrastructure, the establishment of a Ministry of Digital Affairs, and greater investment in education. The party also supports allowing dual citizenship (in contrast to the CDU/CSU, which opposes it) but also supports requiring third-generation immigrants to select a single nationality.",
"title": "Ideology and policies"
},
{
"paragraph_id": 59,
"text": "The FDP supports the legalization of cannabis in Germany and opposes proposals to heighten Internet surveillance. The FDP supports same-sex marriage in Germany. The FDP supports legalisation of altruistic surrogacy.",
"title": "Ideology and policies"
},
{
"paragraph_id": 60,
"text": "The FDP has mixed views on European integration. In its 2009 campaign manifesto, the FDP pledged support for ratification of the Lisbon Treaty as well as EU reforms aimed at enhancing transparency and democratic responsiveness, reducing bureaucracy, establishing stringent curbs on the EU budget, and fully liberalizing the Single Market. At its January 2019 congress ahead of the 2019 European Parliament election, FDP's manifesto called for further EU reforms, including reducing the number of European Commissioners to 18 from the current 28, abolishing the European Economic and Social Committee, and ending the European Parliament's \"traveling circus\" between Brussels and Strasbourg. Vice chairwoman and Deputy Leader Nicola Beer stated: \"We want both more and less Europe.\"",
"title": "Ideology and policies"
},
{
"paragraph_id": 61,
"text": "In 1940s and 1950s, the FDP was the only German party strongly in favour of market economy, while the CDU/CSU was still adhering to a \"third way\" between capitalism and socialism. At the time, the FDP wanted former Nazis to be reintegrated into society and demanded a release of Nazi war criminals.",
"title": "Ideology and policies"
},
{
"paragraph_id": 62,
"text": "The party's membership has historically been largely male; in 1995, less than one-third of the party's members were women, and in the 1980s women made up less than one-tenth of the party's national executive committee. By the 1990s, the percentage of women on the FDP's national executive committee rose to 20%.",
"title": "Ideology and policies"
},
{
"paragraph_id": 63,
"text": "The party tends to draw its support from professionals and self-employed Germans. It lacks consistent support from a voting bloc, such as the trade union membership that supports the SPD or the church membership that supports the CDU/CSU, and thus has historically only garnered a small group of Stammwähler (core voters) who consistently vote for the party.",
"title": "Ideology and policies"
},
{
"paragraph_id": 64,
"text": "In the 2021 elections, the FDP was the second-most popular party among voters under age 30; among this demographic, the Greens won 22% of the vote, the FDP 19%, the SPD 17%, the CDU/CSU 11%, Die Linke 8%, and the AfD 8%. According to Deutsche Welle in 2021, voters for both the FDP and the Greens are similar in being younger, politically centrist professionals living in cities, unlike left working-class voters and right Christian voters.",
"title": "Ideology and policies"
},
{
"paragraph_id": 65,
"text": "In the European Parliament the Free Democratic Party sits in the Renew Europe group with five MEPs.",
"title": "European representation"
},
{
"paragraph_id": 66,
"text": "In the European Committee of the Regions, the Free Democratic Party sits in the Renew Europe CoR group, with one full member for the 2020–2025 mandate.",
"title": "European representation"
},
{
"paragraph_id": 67,
"text": "Below are charts of the results that the FDP has secured in each election to the federal Bundestag. Timelines showing the number of seats and percentage of party list votes won are on the right.",
"title": "Election results"
}
]
| The Free Democratic Party is a liberal political party in Germany. The FDP was founded in 1948 by members of former liberal political parties which existed in Germany before World War II, namely the German Democratic Party and the German People's Party. For most of the second half of the 20th century, the FDP held the balance of power in the Bundestag. It has been a junior coalition partner to both the CDU/CSU and Social Democratic Party (SPD). In the 2013 federal election, the FDP failed to win any directly elected seats in the Bundestag and came up short of the 5 percent threshold to qualify for list representation, being left without representation in the Bundestag for the first time in its history. In the 2017 federal election, the FDP regained its representation in the Bundestag, receiving 10.6% of the vote. After the 2021 federal election the FDP became part of governing Scholz cabinet in coalition with the Social Democratic Party and the Greens. Since the 1980s, the party, consistently with its ordoliberal tradition, has pushed economic liberalism and has aligned itself closely to the promotion of free markets and privatization, and is aligned to the centre or centre-right of the political spectrum. The FDP is a member of the Liberal International, the Alliance of Liberals and Democrats for Europe and Renew Europe. | 2001-05-17T17:46:40Z | 2023-12-31T06:37:00Z | [
"Template:More citations needed section",
"Template:Efn",
"Template:Webarchive",
"Template:Authority control",
"Template:Infobox political party",
"Template:IPA-de",
"Template:Colour box",
"Template:Reflist",
"Template:ISBN",
"Template:Alliance of Liberals and Democrats for Europe Party",
"Template:Short description",
"Template:Incomprehensible inline",
"Template:Main article",
"Template:Cite book",
"Template:Cite news",
"Template:Citation",
"Template:Free Democratic Party (Germany)",
"Template:Use dmy dates",
"Template:Sps",
"Template:Citation needed",
"Template:Liberalism sidebar",
"Template:Yes2",
"Template:Nowrap",
"Template:Notelist",
"Template:Parties of Germany",
"Template:More citations needed",
"Template:Clarify",
"Template:Renew Europe",
"Template:Official",
"Template:EngvarB",
"Template:Composition bar",
"Template:Increase",
"Template:Flagicon",
"Template:Cite web",
"Template:Bulleted list",
"Template:Cbignore",
"Template:Refn",
"Template:Decrease",
"Template:No2",
"Template:Steady",
"Template:Cite journal",
"Template:In lang",
"Template:Lang-de",
"Template:Lang",
"Template:Cite encyclopedia",
"Template:Commons category",
"Template:ELDR member parties"
]
| https://en.wikipedia.org/wiki/Free_Democratic_Party_(Germany) |
10,826 | Fax | Fax (short for facsimile), sometimes called telecopying or telefax (the latter short for telefacsimile), is the telephonic transmission of scanned printed material (both text and images), normally to a telephone number connected to a printer or other output device. The original document is scanned with a fax machine (or a telecopier), which processes the contents (text or images) as a single fixed graphic image, converting it into a bitmap, and then transmitting it through the telephone system in the form of audio-frequency tones. The receiving fax machine interprets the tones and reconstructs the image, printing a paper copy. Early systems used direct conversions of image darkness to audio tone in a continuous or analog manner. Since the 1980s, most machines transmit an audio-encoded digital representation of the page, using data compression to more quickly transmit areas that are all-white or all-black.
Fax machines were ubiquitous in offices in the 1980s and 1990s, but have gradually been rendered obsolete by Internet-based technologies such as email and the World Wide Web. They remain particularly popular in medical administration and law enforcement.
Scottish inventor Alexander Bain worked on chemical-mechanical fax-type devices and in 1846 was able to reproduce graphic signs in laboratory experiments. He received British patent 9745 on May 27, 1843, for his "Electric Printing Telegraph". Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. The Pantelegraph was invented by the Italian physicist Giovanni Caselli. He introduced the first commercial telefax service between Paris and Lyon in 1865, some 11 years before the invention of the telephone.
In 1880, English inventor Shelford Bidwell constructed the scanning phototelegraph that was the first telefax machine to scan any two-dimensional original, not requiring manual plotting or drawing. An account of Henry Sutton's "telephane" was published in 1896. Around 1900, German physicist Arthur Korn invented the Bildtelegraph, widespread in continental Europe especially following a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, used until the wider distribution of the radiofax. Its main competitors were the Bélinographe by Édouard Belin first, then since the 1930s the Hellschreiber, invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission.
The 1888 invention of the telautograph by Elisha Gray marked a further development in fax technology, allowing users to send signatures over long distances, thus allowing the verification of identification or ownership over long distances.
On May 19, 1924, scientists of the AT&T Corporation "by a new process of transmitting pictures by electricity" sent 15 photographs by telephone from Cleveland to New York City, such photos being suitable for newspaper reproduction. Previously, photographs had been sent over the radio using this process.
The Western Union "Deskfax" fax machine, announced in 1948, was a compact machine that fit comfortably on a desktop, using special spark printer paper.
As a designer for the Radio Corporation of America (RCA), in 1924, Richard H. Ranger invented the wireless photoradiogram, or transoceanic radio facsimile, the forerunner of today's "fax" machines. A photograph of President Calvin Coolidge sent from New York to London on November 29, 1924, became the first photo picture reproduced by transoceanic radio facsimile. Commercial use of Ranger's product began two years later. Also in 1924, Herbert E. Ives of AT&T transmitted and reconstructed the first color facsimile, a natural-color photograph of silent film star Rudolph Valentino in period costume, using red, green and blue color separations.
Beginning in the late 1930s, the Finch Facsimile system was used to transmit a "radio newspaper" to private homes via commercial AM radio stations and ordinary radio receivers equipped with Finch's printer, which used thermal paper. Sensing a new and potentially golden opportunity, competitors soon entered the field, but the printer and special paper were expensive luxuries, AM radio transmission was very slow and vulnerable to static, and the newspaper was too small. After more than ten years of repeated attempts by Finch and others to establish such a service as a viable business, the public, apparently quite content with its cheaper and much more substantial home-delivered daily newspapers, and with conventional spoken radio bulletins to provide any "hot" news, still showed only a passing curiosity about the new medium.
By the late 1940s, radiofax receivers were sufficiently miniaturized to be fitted beneath the dashboard of Western Union's "Telecar" telegram delivery vehicles.
In the 1960s, the United States Army transmitted the first photograph via satellite facsimile to Puerto Rico from the Deal Test Site using the Courier satellite.
Radio fax is still in limited use today for transmitting weather charts and information to ships at sea. The closely related technology of slow-scan television is still used by amateur radio operators.
In 1964, Xerox Corporation introduced (and patented) what many consider to be the first commercialized version of the modern fax machine, under the name (LDX) or Long Distance Xerography. This model was superseded two years later with a unit that would truly set the standard for fax machines for years to come. Up until this point facsimile machines were very expensive and hard to operate. In 1966, Xerox released the Magnafax Telecopiers, a smaller, 46 lb (21 kg) facsimile machine. This unit was far easier to operate and could be connected to any standard telephone line. This machine was capable of transmitting a letter-sized document in about six minutes. The first sub-minute, digital fax machine was developed by Dacom, which built on digital data compression technology originally developed at Lockheed for satellite communication.
By the late 1970s, many companies around the world (especially Japanese firms) had entered the fax market. Very shortly after this, a new wave of more compact, faster and efficient fax machines would hit the market. Xerox continued to refine the fax machine for years after their ground-breaking first machine. In later years it would be combined with copier equipment to create the hybrid machines we have today that copy, scan and fax. Some of the lesser known capabilities of the Xerox fax technologies included their Ethernet enabled Fax Services on their 8000 workstations in the early 1980s.
Prior to the introduction of the ubiquitous fax machine, one of the first being the Exxon Qwip in the mid-1970s, facsimile machines worked by optical scanning of a document or drawing spinning on a drum. The reflected light, varying in intensity according to the light and dark areas of the document, was focused on a photocell so that the current in a circuit varied with the amount of light. This current was used to control a tone generator (a modulator), the current determining the frequency of the tone produced. This audio tone was then transmitted using an acoustic coupler (a speaker, in this case) attached to the microphone of a common telephone handset. At the receiving end, a handset's speaker was attached to an acoustic coupler (a microphone), and a demodulator converted the varying tone into a variable current that controlled the mechanical movement of a pen or pencil to reproduce the image on a blank sheet of paper on an identical drum rotating at the same rate.
In 1985, Hank Magnuski, founder of GammaLink, produced the first computer fax board, called GammaFax. Such boards could provide voice telephony via Analog Expansion Bus.
Although businesses usually maintain some kind of fax capability, the technology has faced increasing competition from Internet-based alternatives. In some countries, because electronic signatures on contracts are not yet recognized by law, while faxed contracts with copies of signatures are, fax machines enjoy continuing support in business. In Japan, faxes are still used extensively as of September 2020 for cultural and graphemic reasons. They are available for sending to both domestic and international recipients from over 81% of all convenience stores nationwide. Convenience-store fax machines commonly print the slightly re-sized content of the sent fax in the electronic confirmation-slip, in A4 paper size. Use of fax machines for reporting cases during the COVID-19 pandemic has been criticised in Japan for introducing data errors and delays in reporting, slowing response efforts to contain the spread of infections and hindering the transition to remote work.
In many corporate environments, freestanding fax machines have been replaced by fax servers and other computerized systems capable of receiving and storing incoming faxes electronically, and then routing them to users on paper or via an email (which may be secured). Such systems have the advantage of reducing costs by eliminating unnecessary printouts and reducing the number of inbound analog phone lines needed by an office.
The once ubiquitous fax machine has also begun to disappear from the small office and home office environments. Remotely hosted fax-server services are widely available from VoIP and e-mail providers allowing users to send and receive faxes using their existing e-mail accounts without the need for any hardware or dedicated fax lines. Personal computers have also long been able to handle incoming and outgoing faxes using analog modems or ISDN, eliminating the need for a stand-alone fax machine. These solutions are often ideally suited for users who only very occasionally need to use fax services. In July 2017 the United Kingdom's National Health Service was said to be the world's largest purchaser of fax machines because the digital revolution has largely bypassed it. In June 2018 the Labour Party said that the NHS had at least 11,620 fax machines in operation and in December the Department of Health and Social Care said that no more fax machines could be bought from 2019 and that the existing ones must be replaced by secure email by March 31, 2020.
Leeds Teaching Hospitals NHS Trust, generally viewed as digitally advanced in the NHS, was engaged in a process of removing its fax machines in early 2019. This involved quite a lot of e-fax solutions because of the need to communicate with pharmacies and nursing homes which may not have access to the NHS email system and may need something in their paper records.
In 2018 two-thirds of Canadian doctors reported that they primarily used fax machines to communicate with other doctors. Faxes are still seen as safer and more secure and electronic systems are often unable to communicate with each other.
Hospitals are the leading users for fax machines in the United States where almost all doctors prefer fax machines over emails, often due to concerns about accidentally violating HIPAA. However, fax machines are beginning to decline due to expansion of telehealth as a result of the COVID-19 pandemic, and virtual visits often replace the need for a patient to fax or mail information to a doctor, since the doctor would receive the information via a telehealth platform such as Zoom or Microsoft Teams.
There are several indicators of fax capabilities: group, class, data transmission rate, and conformance with ITU-T (formerly CCITT) recommendations. Since the 1968 Carterfone decision, most fax machines have been designed to connect to standard PSTN lines and telephone numbers.
Group 1 and 2 faxes are sent in the same manner as a frame of analog television, with each scanned line transmitted as a continuous analog signal. Horizontal resolution depended upon the quality of the scanner, transmission line, and the printer. Analog fax machines are obsolete and no longer manufactured. ITU-T Recommendations T.2 and T.3 were withdrawn as obsolete in July 1996.
A major breakthrough in the development of the modern facsimile system was the result of digital technology, where the analog signal from scanners was digitized and then compressed, resulting in the ability to transmit data at high rates across standard phone lines. The first digital fax machine was the Dacom Rapidfax, first sold in the late 1960s, which incorporated digital data compression technology developed by Lockheed for transmission of images from satellites.
Group 3 and 4 faxes are digital formats and take advantage of digital compression methods to greatly reduce transmission times.
Fax Over IP (FoIP) can transmit and receive pre-digitized documents at near-realtime speeds using ITU-T recommendation T.38 to send digitised images over an IP network using JPEG compression. T.38 is designed to work with VoIP services and often supported by analog telephone adapters used by legacy fax machines that need to connect through a VoIP service. Scanned documents are limited to the amount of time the user takes to load the document in a scanner and for the device to process a digital file. The resolution can vary from as little as 150 DPI to 9600 DPI or more. This type of faxing is not related to the e-mail–to–fax service that still uses fax modems at least one way.
Computer modems are often designated by a particular fax class, which indicates how much processing is offloaded from the computer's CPU to the fax modem.
Several different telephone-line modulation techniques are used by fax machines. They are negotiated during the fax-modem handshake, and the fax devices will use the highest data rate that both fax devices support, usually a minimum of 14.4 kbit/s for Group 3 fax.
"Super Group 3" faxes use V.34bis modulation that allows a data rate of up to 33.6 kbit/s.
As well as specifying the resolution (and allowable physical size) of the image being faxed, the ITU-T T.4 recommendation specifies two compression methods for decreasing the amount of data that needs to be transmitted between the fax machines to transfer the image. The two methods defined in T.4 are Modified Huffman (MH) and Modified READ (MR), both described below.
An additional method, Modified Modified READ (MMR), is specified in T.6.
Later, other compression techniques were added as options to ITU-T recommendation T.30, such as the more efficient JBIG (T.82, T.85) for bi-level content, and JPEG (T.81), T.43, MRC (T.44), and T.45 for grayscale, palette, and colour content. Fax machines can negotiate at the start of the T.30 session to use the best technique implemented on both sides.
Modified Huffman (MH), specified in T.4 as the one-dimensional coding scheme, is a codebook-based run-length encoding scheme optimised to efficiently compress whitespace. As most faxes consist mostly of white space, this minimises the transmission time of most faxes. Each line scanned is compressed independently of its predecessor and successor.
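To make the run-length idea concrete, here is a minimal Python sketch (an illustration only, not the T.4 codec: the real scheme maps these run lengths to Huffman codewords from fixed terminating and make-up code tables):

def run_lengths(scanline):
    """scanline: sequence of 0 (white) and 1 (black) pixels.
    Returns (colour, length) runs. T.4 assumes each line starts with a
    white run, so a zero-length white run is emitted for lines that
    begin with black."""
    runs = []
    current, length = 0, 0
    for pixel in scanline:
        if pixel == current:
            length += 1
        else:
            runs.append((current, length))
            current, length = pixel, 1
    runs.append((current, length))
    return runs

# A mostly-white 1728-pixel line (standard G3 width) collapses to three runs:
line = [0] * 100 + [1] * 5 + [0] * 1623
print(run_lengths(line))   # [(0, 100), (1, 5), (0, 1623)]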
Modified READ, specified as an optional two-dimensional coding scheme in T.4, encodes the first scanned line using MH. The next line is compared to the first, the differences determined, and then the differences are encoded and transmitted. This is effective, as most lines differ little from their predecessor. This is not continued to the end of the fax transmission, but only for a limited number of lines until the process is reset, and a new "first line" encoded with MH is produced. This limited number of lines is to prevent errors propagating throughout the whole fax, as the standard does not provide for error correction. This is an optional facility, and some fax machines do not use MR in order to minimise the amount of computation required by the machine. The limited number of lines is 2 for "Standard"-resolution faxes, and 4 for "Fine"-resolution faxes.
The ITU-T T.6 recommendation adds a further compression type of Modified Modified READ (MMR), which simply allows a greater number of lines to be coded by MR than in T.4. This is because T.6 makes the assumption that the transmission is over a circuit with a low number of line errors, such as digital ISDN. In this case, the number of lines for which the differences are encoded is not limited.
In 1999, ITU-T recommendation T.30 added JBIG (ITU-T T.82) as another lossless bi-level compression algorithm, or more precisely a "fax profile" subset of JBIG (ITU-T T.85). JBIG-compressed pages result in 20% to 50% faster transmission than MMR-compressed pages, and up to 30 times faster transmission if the page includes halftone images.
JBIG performs adaptive compression: both the encoder and decoder collect statistical information about the transmitted image from the pixels transmitted so far, in order to predict the probability of each next pixel being either black or white. For each new pixel, JBIG looks at ten nearby, previously transmitted pixels. It counts how often the next pixel has been black or white in the same neighborhood in the past, and estimates from that the probability distribution of the next pixel. This is fed into an arithmetic coder, which adds only a small fraction of a bit to the output sequence if the more probable pixel is then encountered.
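As a toy illustration of this adaptive prediction (not JBIG itself, which uses a ten-pixel template and a binary arithmetic coder), the following Python sketch gathers per-context counts over a simplified three-pixel neighbourhood and turns them into a probability estimate:

from collections import defaultdict

# counts[context] = [times the next pixel was white, times it was black]
counts = defaultdict(lambda: [1, 1])        # start at 1/1 so no probability is ever zero

def context(image, x, y):
    # Simplified template: left, above and above-left neighbours (0 outside the image).
    get = lambda cx, cy: image[cy][cx] if cx >= 0 and cy >= 0 else 0
    return (get(x - 1, y), get(x, y - 1), get(x - 1, y - 1))

def prob_black(image, x, y):
    white, black = counts[context(image, x, y)]
    return black / (white + black)          # estimate fed to the arithmetic coder

def observe(image, x, y):
    counts[context(image, x, y)][image[y][x]] += 1   # update after the pixel is coded

Because the encoder and decoder run the same updates on the same already-transmitted pixels, their probability estimates stay in step without any side information.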
The ITU-T T.85 "fax profile" constrains some optional features of the full JBIG standard, such that codecs do not have to keep data about more than the last three pixel rows of an image in memory at any time. This allows the streaming of "endless" images, where the height of the image may not be known until the last row is transmitted.
ITU-T T.30 allows fax machines to negotiate one of two options of the T.85 "fax profile":
A proprietary compression scheme employed on Panasonic fax machines is Matsushita Whiteline Skip (MWS). It can be overlaid on the other compression schemes, but is operative only when two Panasonic machines are communicating with one another. This system detects the blank scanned areas between lines of text, and then compresses several blank scan lines into the data space of a single character. (JBIG implements a similar technique called "typical prediction", if header flag TPBON is set to 1.)
Group 3 fax machines transfer one or a few printed or handwritten pages per minute in black-and-white (bitonal) at a resolution of 204×98 (normal) or 204×196 (fine) dots per inch (horizontal × vertical). The transfer rate is 14.4 kbit/s or higher for modems and some fax machines, but fax machines support speeds beginning with 2400 bit/s and typically operate at 9600 bit/s. The transferred image formats are called ITU-T (formerly CCITT) fax group 3 or 4. Group 3 faxes have the suffix .g3 and the MIME type image/g3fax.
The most basic fax mode transfers in black and white only. The original page is scanned in a resolution of 1728 pixels/line and 1145 lines/page (for A4). The resulting raw data is compressed using a modified Huffman code optimized for written text, achieving average compression factors of around 20. Typically a page needs 10 s for transmission, instead of about 3 minutes for the same uncompressed raw data of 1728×1145 bits at a speed of 9600 bit/s. The compression method uses a Huffman codebook for run lengths of black and white runs in a single scanned line, and it can also use the fact that two adjacent scanlines are usually quite similar, saving bandwidth by encoding only the differences.
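A back-of-the-envelope check of these figures, using only the numbers quoted above, takes a few lines of Python:

bits_per_page = 1728 * 1145            # raw bitonal A4 page, 1 bit per pixel (about 1.98 Mbit)
rate_bps = 9600                        # assumed modem speed in bit/s
print(bits_per_page / rate_bps)        # about 206 s uncompressed, on the order of the "3 minutes" above
print(bits_per_page / 20 / rate_bps)   # about 10 s with a typical compression factor of around 20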
Fax classes denote the way fax programs interact with fax hardware. Available classes include Class 1, Class 2, Class 2.0 and 2.1, and Intel CAS. Many modems support at least class 1 and often either Class 2 or Class 2.0. Which is preferable to use depends on factors such as hardware, software, modem firmware, and expected use.
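On many fax modems, the set of supported classes can be queried with the standard AT+FCLASS=? command. The sketch below uses the pyserial library; the serial port name, baud rate, and timeout are assumptions for illustration.

```python
import serial  # pyserial

# Ask a fax modem which fax classes it supports (a typical reply lists
# values such as "0,1,2,2.0"). Port name, baud rate and timeout are
# illustrative assumptions and will differ per system.
with serial.Serial("/dev/ttyUSB0", 19200, timeout=2) as modem:
    modem.write(b"AT+FCLASS=?\r")
    reply = modem.read(256).decode(errors="replace")
    print("Supported fax classes:", reply.strip())
```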
Fax machines from the 1970s to the 1990s often used direct thermal printers with rolls of thermal paper as their printing technology, but since the mid-1990s there has been a transition towards plain-paper faxes: thermal transfer printers, inkjet printers and laser printers.
One of the advantages of inkjet printing is that inkjets can affordably print in color; therefore, many of the inkjet-based fax machines claim to have color fax capability. There is a standard called ITU-T30e (formally ITU-T Recommendation T.30 Annex E) for faxing in color; however, it is not widely supported, so many of the color fax machines can only fax in color to machines from the same manufacturer.
Stroke speed in facsimile systems is the rate at which a fixed line perpendicular to the direction of scanning is crossed in one direction by a scanning or recording spot. Stroke speed is usually expressed as a number of strokes per minute. When the fax system scans in both directions, the stroke speed is twice this number. In most conventional 20th century mechanical systems, the stroke speed is equivalent to drum speed.
As a precaution, thermal fax paper is typically not accepted in archives or as documentary evidence in some courts of law unless photocopied. This is because the image-forming coating is easily erased and brittle, and it tends to detach from the medium after long storage.
A CNG tone is an 1100 Hz tone transmitted by a fax machine when it calls another fax machine. Fax tones can cause complications when implementing fax over IP.
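For testing audio paths, a CNG-like tone can be synthesized with the standard library alone. The sketch below writes a single half-second 1100 Hz burst to a WAV file; the burst length, sample rate, and amplitude are illustrative assumptions rather than a statement of the exact T.30 cadence.

```python
import math
import struct
import wave

SAMPLE_RATE = 8000      # Hz, typical telephony sample rate (assumption)
FREQ = 1100             # Hz, CNG tone frequency
DURATION = 0.5          # seconds per burst (illustrative value)

# Build one burst of a 1100 Hz sine wave at half amplitude, 16-bit samples
samples = [
    int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION))
]

with wave.open("cng_tone.wav", "wb") as wav:
    wav.setnchannels(1)          # mono
    wav.setsampwidth(2)          # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(b"".join(struct.pack("<h", s) for s in samples))
```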
One popular alternative is to subscribe to an Internet fax service, allowing users to send and receive faxes from their personal computers using an existing email account. No software, fax server or fax machine is needed. Faxes are received as attached TIFF or PDF files, or in proprietary formats that require the use of the service provider's software. Faxes can be sent or retrieved from anywhere at any time that a user can get Internet access. Some services offer secure faxing to comply with stringent HIPAA and Gramm–Leach–Bliley Act requirements to keep medical information and financial information private and secure. Utilizing a fax service provider does not require paper, a dedicated fax line, or consumable resources.
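Because such services deliver faxes as ordinary email attachments, incoming faxes can also be retrieved programmatically. The sketch below polls an IMAP mailbox and saves TIFF or PDF attachments using only the standard library; the server, credentials, and mailbox name are assumptions for illustration.

```python
import email
import imaplib

# Server, credentials, and mailbox are illustrative assumptions.
HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "secret"

imap = imaplib.IMAP4_SSL(HOST)
imap.login(USER, PASSWORD)
imap.select("INBOX")
_, data = imap.search(None, "UNSEEN")            # unread messages only
for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    for part in msg.walk():
        name = part.get_filename()
        if name and name.lower().endswith((".tif", ".tiff", ".pdf")):
            with open(name, "wb") as f:
                f.write(part.get_payload(decode=True))
            print("saved fax attachment:", name)
imap.logout()
```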
Another alternative to a physical fax machine is to make use of computer software which allows people to send and receive faxes using their own computers, utilizing fax servers and unified messaging. A virtual (email) fax can be printed out, signed, and scanned back into a computer before being emailed. The sender can also attach a digital signature to the document file.
With the surging popularity of mobile phones, virtual fax machines can now be downloaded as applications for Android and iOS. These applications use the phone's internal camera to scan fax documents for upload, or they can import documents from various cloud services. | [
{
"paragraph_id": 0,
"text": "Fax (short for facsimile), sometimes called telecopying or telefax (the latter short for telefacsimile), is the telephonic transmission of scanned printed material (both text and images), normally to a telephone number connected to a printer or other output device. The original document is scanned with a fax machine (or a telecopier), which processes the contents (text or images) as a single fixed graphic image, converting it into a bitmap, and then transmitting it through the telephone system in the form of audio-frequency tones. The receiving fax machine interprets the tones and reconstructs the image, printing a paper copy. Early systems used direct conversions of image darkness to audio tone in a continuous or analog manner. Since the 1980s, most machines transmit an audio-encoded digital representation of the page, using data compression to more quickly transmit areas that are all-white or all-black.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Fax machines were ubiquitous in offices in the 1980s and 1990s, but have gradually been rendered obsolete by Internet-based technologies such as email and the World Wide Web. They remain particularly popular in medical administration and law enforcement.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Scottish inventor Alexander Bain worked on chemical-mechanical fax-type devices and in 1846 was able to reproduce graphic signs in laboratory experiments. He received British patent 9745 on May 27, 1843, for his \"Electric Printing Telegraph\". Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. The Pantelegraph was invented by the Italian physicist Giovanni Caselli. He introduced the first commercial telefax service between Paris and Lyon in 1865, some 11 years before the invention of the telephone.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "In 1880, English inventor Shelford Bidwell constructed the scanning phototelegraph that was the first telefax machine to scan any two-dimensional original, not requiring manual plotting or drawing. An account of Henry Sutton's \"telephane\" was published in 1896. Around 1900, German physicist Arthur Korn invented the Bildtelegraph, widespread in continental Europe especially following a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, used until the wider distribution of the radiofax. Its main competitors were the Bélinographe by Édouard Belin first, then since the 1930s the Hellschreiber, invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The 1888 invention of the telautograph by Elisha Gray marked a further development in fax technology, allowing users to send signatures over long distances, thus allowing the verification of identification or ownership over long distances.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "On May 19, 1924, scientists of the AT&T Corporation \"by a new process of transmitting pictures by electricity\" sent 15 photographs by telephone from Cleveland to New York City, such photos being suitable for newspaper reproduction. Previously, photographs had been sent over the radio using this process.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The Western Union \"Deskfax\" fax machine, announced in 1948, was a compact machine that fit comfortably on a desktop, using special spark printer paper.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "As a designer for the Radio Corporation of America (RCA), in 1924, Richard H. Ranger invented the wireless photoradiogram, or transoceanic radio facsimile, the forerunner of today's \"fax\" machines. A photograph of President Calvin Coolidge sent from New York to London on November 29, 1924, became the first photo picture reproduced by transoceanic radio facsimile. Commercial use of Ranger's product began two years later. Also in 1924, Herbert E. Ives of AT&T transmitted and reconstructed the first color facsimile, a natural-color photograph of silent film star Rudolph Valentino in period costume, using red, green and blue color separations.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Beginning in the late 1930s, the Finch Facsimile system was used to transmit a \"radio newspaper\" to private homes via commercial AM radio stations and ordinary radio receivers equipped with Finch's printer, which used thermal paper. Sensing a new and potentially golden opportunity, competitors soon entered the field, but the printer and special paper were expensive luxuries, AM radio transmission was very slow and vulnerable to static, and the newspaper was too small. After more than ten years of repeated attempts by Finch and others to establish such a service as a viable business, the public, apparently quite content with its cheaper and much more substantial home-delivered daily newspapers, and with conventional spoken radio bulletins to provide any \"hot\" news, still showed only a passing curiosity about the new medium.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "By the late 1940s, radiofax receivers were sufficiently miniaturized to be fitted beneath the dashboard of Western Union's \"Telecar\" telegram delivery vehicles.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In the 1960s, the United States Army transmitted the first photograph via satellite facsimile to Puerto Rico from the Deal Test Site using the Courier satellite.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Radio fax is still in limited use today for transmitting weather charts and information to ships at sea. The closely related technology of slow-scan television is still used by amateur radio operators.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In 1964, Xerox Corporation introduced (and patented) what many consider to be the first commercialized version of the modern fax machine, under the name (LDX) or Long Distance Xerography. This model was superseded two years later with a unit that would truly set the standard for fax machines for years to come. Up until this point facsimile machines were very expensive and hard to operate. In 1966, Xerox released the Magnafax Telecopiers, a smaller, 46 lb (21 kg) facsimile machine. This unit was far easier to operate and could be connected to any standard telephone line. This machine was capable of transmitting a letter-sized document in about six minutes. The first sub-minute, digital fax machine was developed by Dacom, which built on digital data compression technology originally developed at Lockheed for satellite communication.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "By the late 1970s, many companies around the world (especially Japanese firms) had entered the fax market. Very shortly after this, a new wave of more compact, faster and efficient fax machines would hit the market. Xerox continued to refine the fax machine for years after their ground-breaking first machine. In later years it would be combined with copier equipment to create the hybrid machines we have today that copy, scan and fax. Some of the lesser known capabilities of the Xerox fax technologies included their Ethernet enabled Fax Services on their 8000 workstations in the early 1980s.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Prior to the introduction of the ubiquitous fax machine, one of the first being the Exxon Qwip in the mid-1970s, facsimile machines worked by optical scanning of a document or drawing spinning on a drum. The reflected light, varying in intensity according to the light and dark areas of the document, was focused on a photocell so that the current in a circuit varied with the amount of light. This current was used to control a tone generator (a modulator), the current determining the frequency of the tone produced. This audio tone was then transmitted using an acoustic coupler (a speaker, in this case) attached to the microphone of a common telephone handset. At the receiving end, a handset's speaker was attached to an acoustic coupler (a microphone), and a demodulator converted the varying tone into a variable current that controlled the mechanical movement of a pen or pencil to reproduce the image on a blank sheet of paper on an identical drum rotating at the same rate.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1985, Hank Magnuski, founder of GammaLink, produced the first computer fax board, called GammaFax. Such boards could provide voice telephony via Analog Expansion Bus.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Although businesses usually maintain some kind of fax capability, the technology has faced increasing competition from Internet-based alternatives. In some countries, because electronic signatures on contracts are not yet recognized by law, while faxed contracts with copies of signatures are, fax machines enjoy continuing support in business. In Japan, faxes are still used extensively as of September 2020 for cultural and graphemic reasons. They are available for sending to both domestic and international recipients from over 81% of all convenience stores nationwide. Convenience-store fax machines commonly print the slightly re-sized content of the sent fax in the electronic confirmation-slip, in A4 paper size. Use of fax machines for reporting cases during the COVID-19 pandemic has been criticised in Japan for introducing data errors and delays in reporting, slowing response efforts to contain the spread of infections and hindering the transition to remote work.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In many corporate environments, freestanding fax machines have been replaced by fax servers and other computerized systems capable of receiving and storing incoming faxes electronically, and then routing them to users on paper or via an email (which may be secured). Such systems have the advantage of reducing costs by eliminating unnecessary printouts and reducing the number of inbound analog phone lines needed by an office.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The once ubiquitous fax machine has also begun to disappear from the small office and home office environments. Remotely hosted fax-server services are widely available from VoIP and e-mail providers allowing users to send and receive faxes using their existing e-mail accounts without the need for any hardware or dedicated fax lines. Personal computers have also long been able to handle incoming and outgoing faxes using analog modems or ISDN, eliminating the need for a stand-alone fax machine. These solutions are often ideally suited for users who only very occasionally need to use fax services. In July 2017 the United Kingdom's National Health Service was said to be the world's largest purchaser of fax machines because the digital revolution has largely bypassed it. In June 2018 the Labour Party said that the NHS had at least 11,620 fax machines in operation and in December the Department of Health and Social Care said that no more fax machines could be bought from 2019 and that the existing ones must be replaced by secure email by March 31, 2020.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Leeds Teaching Hospitals NHS Trust, generally viewed as digitally advanced in the NHS, was engaged in a process of removing its fax machines in early 2019. This involved quite a lot of e-fax solutions because of the need to communicate with pharmacies and nursing homes which may not have access to the NHS email system and may need something in their paper records.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 2018 two-thirds of Canadian doctors reported that they primarily used fax machines to communicate with other doctors. Faxes are still seen as safer and more secure and electronic systems are often unable to communicate with each other.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Hospitals are the leading users for fax machines in the United States where almost all doctors prefer fax machines over emails, often due to concerns about accidentally violating HIPAA. However, fax machines are beginning to decline due to expansion of telehealth as a result of the COVID-19 pandemic, and virtual visits often replace the need for a patient to fax or mail information to a doctor, since the doctor would receive the information via a telehealth platform such as Zoom or Microsoft Teams.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "There are several indicators of fax capabilities: group, class, data transmission rate, and conformance with ITU-T (formerly CCITT) recommendations. Since the 1968 Carterfone decision, most fax machines have been designed to connect to standard PSTN lines and telephone numbers.",
"title": "Capabilities"
},
{
"paragraph_id": 23,
"text": "Group 1 and 2 faxes are sent in the same manner as a frame of analog television, with each scanned line transmitted as a continuous analog signal. Horizontal resolution depended upon the quality of the scanner, transmission line, and the printer. Analog fax machines are obsolete and no longer manufactured. ITU-T Recommendations T.2 and T.3 were withdrawn as obsolete in July 1996.",
"title": "Capabilities"
},
{
"paragraph_id": 24,
"text": "A major breakthrough in the development of the modern facsimile system was the result of digital technology, where the analog signal from scanners was digitized and then compressed, resulting in the ability to transmit high rates of data across standard phone lines. The first digital fax machine was the Dacom Rapidfax first sold in late 1960s, which incorporated digital data compression technology developed by Lockheed for transmission of images from satellites.",
"title": "Capabilities"
},
{
"paragraph_id": 25,
"text": "Group 3 and 4 faxes are digital formats and take advantage of digital compression methods to greatly reduce transmission times.",
"title": "Capabilities"
},
{
"paragraph_id": 26,
"text": "Fax Over IP (FoIP) can transmit and receive pre-digitized documents at near-realtime speeds using ITU-T recommendation T.38 to send digitised images over an IP network using JPEG compression. T.38 is designed to work with VoIP services and often supported by analog telephone adapters used by legacy fax machines that need to connect through a VoIP service. Scanned documents are limited to the amount of time the user takes to load the document in a scanner and for the device to process a digital file. The resolution can vary from as little as 150 DPI to 9600 DPI or more. This type of faxing is not related to the e-mail–to–fax service that still uses fax modems at least one way.",
"title": "Capabilities"
},
{
"paragraph_id": 27,
"text": "Computer modems are often designated by a particular fax class, which indicates how much processing is offloaded from the computer's CPU to the fax modem.",
"title": "Capabilities"
},
{
"paragraph_id": 28,
"text": "Several different telephone-line modulation techniques are used by fax machines. They are negotiated during the fax-modem handshake, and the fax devices will use the highest data rate that both fax devices support, usually a minimum of 14.4 kbit/s for Group 3 fax.",
"title": "Capabilities"
},
{
"paragraph_id": 29,
"text": "\"Super Group 3\" faxes use V.34bis modulation that allows a data rate of up to 33.6 kbit/s.",
"title": "Capabilities"
},
{
"paragraph_id": 30,
"text": "As well as specifying the resolution (and allowable physical size) of the image being faxed, the ITU-T T.4 recommendation specifies two compression methods for decreasing the amount of data that needs to be transmitted between the fax machines to transfer the image. The two methods defined in T.4 are:",
"title": "Capabilities"
},
{
"paragraph_id": 31,
"text": "An additional method is specified in T.6:",
"title": "Capabilities"
},
{
"paragraph_id": 32,
"text": "Later, other compression techniques were added as options to ITU-T recommendation T.30, such as the more efficient JBIG (T.82, T.85) for bi-level content, and JPEG (T.81), T.43, MRC (T.44), and T.45 for grayscale, palette, and colour content. Fax machines can negotiate at the start of the T.30 session to use the best technique implemented on both sides.",
"title": "Capabilities"
},
{
"paragraph_id": 33,
"text": "Modified Huffman (MH), specified in T.4 as the one-dimensional coding scheme, is a codebook-based run-length encoding scheme optimised to efficiently compress whitespace. As most faxes consist mostly of white space, this minimises the transmission time of most faxes. Each line scanned is compressed independently of its predecessor and successor.",
"title": "Capabilities"
},
{
"paragraph_id": 34,
"text": "Modified READ, specified as an optional two-dimensional coding scheme in T.4, encodes the first scanned line using MH. The next line is compared to the first, the differences determined, and then the differences are encoded and transmitted. This is effective, as most lines differ little from their predecessor. This is not continued to the end of the fax transmission, but only for a limited number of lines until the process is reset, and a new \"first line\" encoded with MH is produced. This limited number of lines is to prevent errors propagating throughout the whole fax, as the standard does not provide for error correction. This is an optional facility, and some fax machines do not use MR in order to minimise the amount of computation required by the machine. The limited number of lines is 2 for \"Standard\"-resolution faxes, and 4 for \"Fine\"-resolution faxes.",
"title": "Capabilities"
},
{
"paragraph_id": 35,
"text": "The ITU-T T.6 recommendation adds a further compression type of Modified Modified READ (MMR), which simply allows a greater number of lines to be coded by MR than in T.4. This is because T.6 makes the assumption that the transmission is over a circuit with a low number of line errors, such as digital ISDN. In this case, the number of lines for which the differences are encoded is not limited.",
"title": "Capabilities"
},
{
"paragraph_id": 36,
"text": "In 1999, ITU-T recommendation T.30 added JBIG (ITU-T T.82) as another lossless bi-level compression algorithm, or more precisely a \"fax profile\" subset of JBIG (ITU-T T.85). JBIG-compressed pages result in 20% to 50% faster transmission than MMR-compressed pages, and up to 30 times faster transmission if the page includes halftone images.",
"title": "Capabilities"
},
{
"paragraph_id": 37,
"text": "JBIG performs adaptive compression, that is, both the encoder and decoder collect statistical information about the transmitted image from the pixels transmitted so far, in order to predict the probability for each next pixel being either black or white. For each new pixel, JBIG looks at ten nearby, previously transmitted pixels. It counts, how often in the past the next pixel has been black or white in the same neighborhood, and estimates from that the probability distribution of the next pixel. This is fed into an arithmetic coder, which adds only a small fraction of a bit to the output sequence if the more probable pixel is then encountered.",
"title": "Capabilities"
},
{
"paragraph_id": 38,
"text": "The ITU-T T.85 \"fax profile\" constrains some optional features of the full JBIG standard, such that codecs do not have to keep data about more than the last three pixel rows of an image in memory at any time. This allows the streaming of \"endless\" images, where the height of the image may not be known until the last row is transmitted.",
"title": "Capabilities"
},
{
"paragraph_id": 39,
"text": "ITU-T T.30 allows fax machines to negotiate one of two options of the T.85 \"fax profile\":",
"title": "Capabilities"
},
{
"paragraph_id": 40,
"text": "A proprietary compression scheme employed on Panasonic fax machines is Matsushita Whiteline Skip (MWS). It can be overlaid on the other compression schemes, but is operative only when two Panasonic machines are communicating with one another. This system detects the blank scanned areas between lines of text, and then compresses several blank scan lines into the data space of a single character. (JBIG implements a similar technique called \"typical prediction\", if header flag TPBON is set to 1.)",
"title": "Capabilities"
},
{
"paragraph_id": 41,
"text": "Group 3 fax machines transfer one or a few printed or handwritten pages per minute in black-and-white (bitonal) at a resolution of 204×98 (normal) or 204×196 (fine) dots per square inch. The transfer rate is 14.4 kbit/s or higher for modems and some fax machines, but fax machines support speeds beginning with 2400 bit/s and typically operate at 9600 bit/s. The transferred image formats are called ITU-T (formerly CCITT) fax group 3 or 4. Group 3 faxes have the suffix .g3 and the MIME type image/g3fax.",
"title": "Capabilities"
},
{
"paragraph_id": 42,
"text": "The most basic fax mode transfers in black and white only. The original page is scanned in a resolution of 1728 pixels/line and 1145 lines/page (for A4). The resulting raw data is compressed using a modified Huffman code optimized for written text, achieving average compression factors of around 20. Typically a page needs 10 s for transmission, instead of about 3 minutes for the same uncompressed raw data of 1728×1145 bits at a speed of 9600 bit/s. The compression method uses a Huffman codebook for run lengths of black and white runs in a single scanned line, and it can also use the fact that two adjacent scanlines are usually quite similar, saving bandwidth by encoding only the differences.",
"title": "Capabilities"
},
{
"paragraph_id": 43,
"text": "Fax classes denote the way fax programs interact with fax hardware. Available classes include Class 1, Class 2, Class 2.0 and 2.1, and Intel CAS. Many modems support at least class 1 and often either Class 2 or Class 2.0. Which is preferable to use depends on factors such as hardware, software, modem firmware, and expected use.",
"title": "Capabilities"
},
{
"paragraph_id": 44,
"text": "Fax machines from the 1970s to the 1990s often used direct thermal printers with rolls of thermal paper as their printing technology, but since the mid-1990s there has been a transition towards plain-paper faxes: thermal transfer printers, inkjet printers and laser printers.",
"title": "Capabilities"
},
{
"paragraph_id": 45,
"text": "One of the advantages of inkjet printing is that inkjets can affordably print in color; therefore, many of the inkjet-based fax machines claim to have color fax capability. There is a standard called ITU-T30e (formally ITU-T Recommendation T.30 Annex E ) for faxing in color; however, it is not widely supported, so many of the color fax machines can only fax in color to machines from the same manufacturer.",
"title": "Capabilities"
},
{
"paragraph_id": 46,
"text": "Stroke speed in facsimile systems is the rate at which a fixed line perpendicular to the direction of scanning is crossed in one direction by a scanning or recording spot. Stroke speed is usually expressed as a number of strokes per minute. When the fax system scans in both directions, the stroke speed is twice this number. In most conventional 20th century mechanical systems, the stroke speed is equivalent to drum speed.",
"title": "Capabilities"
},
{
"paragraph_id": 47,
"text": "As a precaution, thermal fax paper is typically not accepted in archives or as documentary evidence in some courts of law unless photocopied. This is because the image-forming coating is eradicable and brittle, and it tends to detach from the medium after a long time in storage.",
"title": "Capabilities"
},
{
"paragraph_id": 48,
"text": "A CNG tone is an 1100 Hz tone transmitted by a fax machine when it calls another fax machine. Fax tones can cause complications when implementing fax over IP.",
"title": "Capabilities"
},
{
"paragraph_id": 49,
"text": "One popular alternative is to subscribe to an Internet fax service, allowing users to send and receive faxes from their personal computers using an existing email account. No software, fax server or fax machine is needed. Faxes are received as attached TIFF or PDF files, or in proprietary formats that require the use of the service provider's software. Faxes can be sent or retrieved from anywhere at any time that a user can get Internet access. Some services offer secure faxing to comply with stringent HIPAA and Gramm–Leach–Bliley Act requirements to keep medical information and financial information private and secure. Utilizing a fax service provider does not require paper, a dedicated fax line, or consumable resources.",
"title": "Internet fax"
},
{
"paragraph_id": 50,
"text": "Another alternative to a physical fax machine is to make use of computer software which allows people to send and receive faxes using their own computers, utilizing fax servers and unified messaging. A virtual (email) fax can be printed out and then signed and scanned back to computer before being emailed. Also the sender can attach a digital signature to the document file.",
"title": "Internet fax"
},
{
"paragraph_id": 51,
"text": "With the surging popularity of mobile phones, virtual fax machines can now be downloaded as applications for Android and iOS. These applications make use of the phone's internal camera to scan fax documents for upload or they can import from various cloud services.",
"title": "Internet fax"
}
]
| Fax, sometimes called telecopying or telefax, is the telephonic transmission of scanned printed material, normally to a telephone number connected to a printer or other output device. The original document is scanned with a fax machine, which processes the contents as a single fixed graphic image, converting it into a bitmap, and then transmitting it through the telephone system in the form of audio-frequency tones. The receiving fax machine interprets the tones and reconstructs the image, printing a paper copy. Early systems used direct conversions of image darkness to audio tone in a continuous or analog manner. Since the 1980s, most machines transmit an audio-encoded digital representation of the page, using data compression to more quickly transmit areas that are all-white or all-black. Fax machines were ubiquitous in offices in the 1980s and 1990s, but have gradually been rendered obsolete by Internet-based technologies such as email and the World Wide Web. They remain particularly popular in medical administration and law enforcement. | 2001-05-20T05:09:53Z | 2023-11-19T07:45:55Z | [
"Template:Div col end",
"Template:Webarchive",
"Template:Div col",
"Template:Cite news",
"Template:Commons category",
"Template:URI schemes",
"Template:Short description",
"Template:Cvt",
"Template:Vague",
"Template:Stub section",
"Template:IETF RFC",
"Template:Reflist",
"Template:Wiktionary",
"Template:Redirect",
"Template:See also",
"Template:Val",
"Template:Main",
"Template:Cite magazine",
"Template:Authority control",
"Template:External media",
"Template:Which",
"Template:Cite journal",
"Template:Telecommunications",
"Template:Citation needed",
"Template:More citations needed section",
"Template:Cite web",
"Template:Portal",
"Template:Cite book",
"Template:FS1037C MS188"
]
| https://en.wikipedia.org/wiki/Fax |
10,827 | Film crew | A film crew is a group of people, hired by a production company, for the purpose of producing a film or motion picture. The crew is distinguished from the cast, as the cast are understood to be the actors who appear in front of the camera or provide voices for characters in the film. The crew is also separate from the producers, as the producers are the ones who own a portion of either the film studio or the film's intellectual property rights. A film crew is divided into different departments, each of which specializes in a specific aspect of the production. Film crew positions have evolved over the years, spurred by technological change, but many traditional jobs date from the early 20th century and are common across jurisdictions and filmmaking cultures.
Motion picture projects have three discrete stages: development, production, and distribution. Within the production stage there are also three clearly defined sequential phases (pre-production, principal photography, and post-production) and many film crew positions are associated with only one or two of the phases. Distinctions are also made between above-the-line personnel (such as the director, screenwriter, and producers) who begin their involvement during the project's development stage, and the below-the-line technical crew involved only with the production stage.
A director is the person who directs the making of a film. The director most often has the highest authority on a film set. Generally, a director controls a film's artistic and dramatic aspects and visualizes the screenplay (or script) while guiding the technical crew and actors in the fulfillment of that vision. The director has a key role in choosing the cast members, production design, and the creative aspects of filmmaking. Under European Union law, the director is viewed as the author of the film.
The director gives direction to the cast and crew, and creates an overall vision through which a film eventually becomes realized or noticed. Directors need to be able to mediate differences in creative visions and stay within the boundaries of the film's budget. There are many pathways to becoming a film director. Some directors started as screenwriters, cinematographers, film editors, or actors. Other directors have attended a film school. Directors use different approaches. Some outline a general plotline and let the actors improvise dialogue, while others control every aspect, and demand that the actors and crew follow instructions precisely. Some directors also write their own screenplays or collaborate on screenplays with long-standing writing partners. Some directors edit or appear in their films, or compose the music score for their films.
Production is generally not considered a department as such, but rather as a series of functional groups. These include the film's producers and executive producers and production office staff such as the production manager, the production coordinator, and their assistants; the various assistant directors; the accounting staff and sometimes the locations manager and their assistants.
Since the turn of the 21st century, several additional professionals are now routinely listed in the production credits on most major motion pictures.
Grips are trained lighting and rigging technicians. Their main responsibility is to work closely with the electrical department to put in the non-electrical components of lighting set-ups required for a shot, such as flags, overheads, and bounces. On the sound stage, they move and adjust major set pieces when something needs to be moved to get a camera into position. In addition to lifting heavy objects and setting rigging points for lights, they also report to the key grip. In the US and Canada, grips may belong to the International Alliance of Theatrical Stage Employees.
The art department in a major feature film can often number hundreds of people. Usually it is considered to include several sub-departments: the art department proper, with its art director, set designers and draftsmen; set decoration, under the set decorator; props, under the props master/mistress; construction, headed by the construction coordinator; scenic, headed by the key scenic artist; and special effects.
Within the overall art department is a sub-department, also called the art department – which can be confusing. This consists of the people who design the sets and create the graphic art.
Some actors or actresses have personal makeup artists or hair stylists.
The special effects department oversees the mechanical effects (also called physical or practical effects) that create optical illusions during live-action shooting. It is not to be confused with the visual effects department, which adds photographic effects during filming to be altered later during video editing in the post-production process.
Visual effects commonly refers to post-production alterations of the film's images. The on-set VFX crew works to prepare shots and plates for future visual effects. This may include adding tracking markers, taking or requesting reference plates, and helping the director understand the limitations and ease of certain shots that will affect post-production. A VFX crew can also work alongside the special effects department for any on-set optical effects that need physical representation during filming (on camera).
Previsualization (also known as previs, previz, pre-rendering, preview, or wireframe windows) is the visualizing of complex scenes in a film before filming. It is also a concept in still photography. It is also used to describe techniques such as storyboarding, either in the form of charcoal sketches or in digital technology, in the planning and conceptualization of film scenes.
Animation film crews have many of the same roles and departments as live-action films (including directing, production, editing, camera, sound, etc.), but nearly all on-set departments (lighting, electrical, grip, sets, props, costume, hair, makeup, special effects, and stunts) were traditionally replaced with a single animation department made up of various types of animators (character, effects, in-betweeners, cleanup, etc.). In traditional animation, the nature of the medium meant that everything was literally flattened into the drawn lines and solid colors that became the characters, making nearly all live-action positions irrelevant. Because animation has traditionally been so labor-intensive and thus expensive, animation films normally have a separate story department in which storyboard artists painstakingly develop scenes to make sure they make sense before they are actually animated.
However, since the turn of the 21st century, modern 3D computer graphics and computer animation have made possible a level of rich detail never seen before. Many animated films now have specialized artists and animators who act as the virtual equivalent of lighting technicians, grips, costume designers, props masters, set decorators, set dressers, and cinematographers. They make artistic decisions strongly similar to those of their live-action counterparts, but implement them in a virtual space that exists only in software rather than on a physical set. There have been major breakthroughs in the simulation of hair since 2005, meaning that hairstylists have been called in since then to consult on a few animation projects. | [
{
"paragraph_id": 0,
"text": "A film crew is a group of people, hired by a production company, for the purpose of producing a film or motion picture. The crew is distinguished from the cast, as the cast are understood to be the actors who appear in front of the camera or provide voices for characters in the film. The crew is also separate from the producers, as the producers are the ones who own a portion of either the film studio or the film's intellectual property rights. A film crew is divided into different departments, each of which specializes in a specific aspect of the production. Film crew positions have evolved over the years, spurred by technological change, but many traditional jobs date from the early 20th century and are common across jurisdictions and filmmaking cultures.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Motion picture projects have three discrete stages: development, production, and distribution. Within the production stage there are also three clearly defined sequential phases (pre-production, principal photography, and post-production) and many film crew positions are associated with only one or two of the phases. Distinctions are also made between above-the-line personnel (such as the director, screenwriter, and producers) who begin their involvement during the project's development stage, and the below-the-line technical crew involved only with the production stage.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A director is the person who directs the making of a film. The director most often has the highest authority on a film set. Generally, a director controls a film's artistic and dramatic aspects and visualizes the screenplay (or script) while guiding the technical crew and actors in the fulfillment of that vision. The director has a key role in choosing the cast members, production design, and the creative aspects of filmmaking. Under European Union law, the director is viewed as the author of the film.",
"title": "Director"
},
{
"paragraph_id": 3,
"text": "The director gives direction to the cast and crew, and creates an overall vision through which a film eventually becomes realized or noticed. Directors need to be able to mediate differences in creative visions and stay within the boundaries of the film's budget. There are many pathways to becoming a film director. Some directors started as screenwriters, cinematographers, film editors, or actors. Other directors have attended a film school. Directors use different approaches. Some outline a general plotline and let the actors improvise dialogue, while others control every aspect, and demand that the actors and crew follow instructions precisely. Some directors also write their own screenplays or collaborate on screenplays with long-standing writing partners. Some directors edit or appear in their films, or compose the music score for their films.",
"title": "Director"
},
{
"paragraph_id": 4,
"text": "Production is generally not considered a department as such, but rather as a series of functional groups. These include the film's producers and executive producers and production office staff such as the production manager, the production coordinator, and their assistants; the various assistant directors; the accounting staff and sometimes the locations manager and their assistants.",
"title": "Production"
},
{
"paragraph_id": 5,
"text": "Since the turn of the 21st century, several additional professionals are now routinely listed in the production credits on most major motion pictures.",
"title": "Production"
},
{
"paragraph_id": 6,
"text": "Grips are trained lighting and rigging technicians. Their main responsibility is to work closely with the electrical department to put in the non-electrical components of lighting set-ups required for a shot, such as flags, overheads, and bounces. On the sound stage, they move and adjust major set pieces when something needs to be moved to get a camera into position. In addition to lifting heavy objects and setting rigging points for lights, they also report to the key grip. In the US and Canada, grips may belong to the International Alliance of Theatrical Stage Employees.",
"title": "Camera and Lighting"
},
{
"paragraph_id": 7,
"text": "The art department in a major feature film can often number hundreds of people. Usually it is considered to include several sub-departments: the art department proper, with its art director, set designers and draftsmen; set decoration, under the set decorator; props, under the props master/mistress; construction, headed by the construction coordinator; scenic, headed by the key scenic artist; and special effects.",
"title": "Art department"
},
{
"paragraph_id": 8,
"text": "Within the overall art department is a sub-department, also called the art department – which can be confusing. This consists of the people who design the sets and create the graphic art.",
"title": "Art department"
},
{
"paragraph_id": 9,
"text": "Some actors or actresses have personal makeup artists or hair stylists.",
"title": "Costume department"
},
{
"paragraph_id": 10,
"text": "The special effects department oversees the mechanical effects (also called physical or practical effects) that create optical illusions during live-action shooting. It is not to be confused with the visual effects department, which adds photographic effects during filming to be altered later during video editing in the post-production process.",
"title": "Special effects"
},
{
"paragraph_id": 11,
"text": "Visual effects commonly refers to post-production alterations of the film's images. The on set VFX crew works to prepare shots and plates for future visual effects. This may include adding tracking markers, taking and asking for reference plates and helping the director understand the limitations and ease of certain shots that will effect the future post production. A VFX crew can also work alongside the special effects department for any on-set optical effects that need physical representation during filming (on camera).",
"title": "Post-production"
},
{
"paragraph_id": 12,
"text": "Previsualization (also known as previs, previz, pre-rendering, preview, or wireframe windows) is the visualizing of complex scenes in a film before filming. It is also a concept in still photography. It is also used to describe techniques such as storyboarding, either in the form of charcoal sketches or in digital technology, in the planning and conceptualization of film scenes.",
"title": "Previsualization"
},
{
"paragraph_id": 13,
"text": "Animation film crews have many of the same roles and departments as live-action films (including directing, production, editing, camera, sound, etc.), but nearly all on-set departments (lighting, electrical, grip, sets, props, costume, hair, makeup, special effects, and stunts) were traditionally replaced with a single animation department made up of various types of animators (character, effects, in-betweeners, cleanup, etc.). In traditional animation, the nature of the medium meant that everything was literally flattened into the drawn lines and solid colors that became the characters, making nearly all live-action positions irrelevant. Because animation has traditionally been so labor-intensive and thus expensive, animation films normally have a separate story department in which storyboard artists painstakingly develop scenes to make sure they make sense before they are actually animated.",
"title": "Animation"
},
{
"paragraph_id": 14,
"text": "However, since the turn of the 21st century, modern 3D computer graphics and computer animation have made possible a level of rich detail never seen before. Many animated films now have specialized artists and animators who act as the virtual equivalent of lighting technicians, grips, costume designers, props masters, set decorators, set dressers, and cinematographers. They make artistic decisions strongly similar to those of their live-action counterparts, but implement them in a virtual space that exists only in software rather than on a physical set. There have been major breakthroughs in the simulation of hair since 2005, meaning that hairstylists have been called in since then to consult on a few animation projects.",
"title": "Animation"
}
]
| A film crew is a group of people, hired by a production company, for the purpose of producing a film or motion picture. The crew is distinguished from the cast, as the cast are understood to be the actors who appear in front of the camera or provide voices for characters in the film. The crew is also separate from the producers, as the producers are the ones who own a portion of either the film studio or the film's intellectual property rights. A film crew is divided into different departments, each of which specializes in a specific aspect of the production. Film crew positions have evolved over the years, spurred by technological change, but many traditional jobs date from the early 20th century and are common across jurisdictions and filmmaking cultures. Motion picture projects have three discrete stages: development, production, and distribution. Within the production stage there are also three clearly defined sequential phases and many film crew positions are associated with only one or two of the phases. Distinctions are also made between above-the-line personnel who begin their involvement during the project's development stage, and the below-the-line technical crew involved only with the production stage. | 2001-05-20T13:53:30Z | 2023-12-31T18:09:28Z | [
"Template:Short description",
"Template:For",
"Template:Use American English",
"Template:Portal",
"Template:Reflist",
"Template:Cite web",
"Template:Filmmaking",
"Template:Authority control",
"Template:More citations needed",
"Template:Use mdy dates",
"Template:Cite news",
"Template:Cite book",
"Template:Film crew"
]
| https://en.wikipedia.org/wiki/Film_crew |
10,828 | Fear | Fear is an intensely unpleasant emotion in response to perceiving or recognizing a danger or threat. Fear causes physiological changes that may produce behavioral reactions such as mounting an aggressive response or fleeing the threat. Fear in human beings may occur in response to a certain stimulus occurring in the present, or in anticipation or expectation of a future threat perceived as a risk to oneself. The fear response arises from the perception of danger leading to confrontation with or escape from/avoiding the threat (also known as the fight-or-flight response), which in extreme cases of fear (horror and terror) can be a freeze response.
In humans and other animals, fear is modulated by the process of cognition and learning. Thus, fear is judged as rational and appropriate, or irrational and inappropriate. An irrational fear is called a phobia.
Fear is closely related to the emotion anxiety, which occurs as the result of often future threats that are perceived to be uncontrollable or unavoidable. The fear response serves survival by engendering appropriate behavioral responses, so it has been preserved throughout evolution. Sociological and organizational research also suggests that individuals' fears are not solely dependent on their nature but are also shaped by their social relations and culture, which guide their understanding of when and how much fear to feel.
Fear is sometimes incorrectly considered the opposite of courage. Because courage is a willingness to face adversity, fear is an example of a condition that makes the exercise of courage possible.
Many physiological changes in the body are associated with fear, summarized as the fight-or-flight response. An innate response for coping with danger, it works by accelerating the breathing rate (hyperventilation) and heart rate, constricting the peripheral blood vessels (leading to blood pooling), and increasing muscle tension, including contraction of the muscles attached to each hair follicle, which causes "goosebumps" or, more clinically, piloerection (making a cold person warmer or a frightened animal look more impressive). Other changes include sweating, increased blood glucose (hyperglycemia), increased serum calcium, an increase in white blood cells called neutrophilic leukocytes, heightened alertness leading to sleep disturbance, and "butterflies in the stomach" (dyspepsia). This primitive mechanism may help an organism survive by either running away or fighting the danger. With this series of physiological changes, the conscious mind registers the emotion of fear.
There are observable physical reactions in individuals who experience fear. An individual might experience dizziness, lightheadedness, a choking sensation, sweating, shortness of breath, nausea or vomiting, numbness, shaking, and other similar symptoms. These bodily reactions inform the individual that they are afraid and should remove or get away from the stimulus that is causing the fear.
An influential categorization of stimuli causing fear was proposed by psychologist Jeffrey Alan Gray; namely, intensity, novelty, special evolutionary dangers, stimuli arising during social interaction, and conditioned stimuli. Another categorization was proposed by Archer, who, besides conditioned fear stimuli, categorized fear-evoking (as well as aggression-evoking) stimuli into three groups; namely, pain, novelty, and frustration, although he also described "looming", which refers to an object rapidly moving towards the visual sensors of a subject, and can be categorized as "intensity". Russell described a more functional categorization of fear-evoking stimuli, in which for instance novelty is a variable affecting more than one category: 1) Predator stimuli (including movement, suddenness, proximity, but also learned and innate predator stimuli); 2) Physical environmental dangers (including intensity and heights); 3) Stimuli associated with increased risk of predation and other dangers (including novelty, openness, illumination, and being alone); 4) Stimuli stemming from conspecifics (including novelty, movement, and spacing behavior); 5) Species-predictable fear stimuli and experience (special evolutionary dangers); and 6) Fear stimuli that are not species predictable (conditioned fear stimuli).
Although many fears are learned, the capacity to fear is part of human nature. Many studies have found that certain fears (e.g. animals, heights) are much more common than others (e.g. flowers, clouds). These fears are also easier to induce in the laboratory. This phenomenon is known as preparedness. Because early humans who were quick to fear dangerous situations were more likely to survive and reproduce, preparedness is theorized to be a genetic effect that is the result of natural selection.
From an evolutionary psychology perspective, different fears may be different adaptations that have been useful in our evolutionary past. They may have developed during different time periods. Some fears, such as fear of heights, may be common to all mammals and developed during the Mesozoic period. Other fears, such as fear of snakes, may be common to all simians and developed during the Cenozoic period (the still-ongoing geological era encompassing the last 66 million years of history). Still others, such as fear of mice and insects, may be unique to humans and developed during the Paleolithic and Neolithic periods (when mice and insects became important carriers of infectious diseases and harmful to crops and stored foods).
Nonhuman animals and humans acquire specific fears as a result of learning. This has been studied in psychology as fear conditioning, beginning with John B. Watson's Little Albert experiment in 1920, which was inspired by observing a child with an irrational fear of dogs. In this study, an 11-month-old boy was conditioned to fear a white rat in the laboratory. The fear became generalized to include other white, furry objects, such as a rabbit, a dog, and even a Santa Claus mask with white cotton balls in the beard.
Fear can be learned by experiencing or watching a frightening traumatic accident. For example, if a child falls into a well and struggles to get out, he or she may develop a fear of wells, heights (acrophobia), enclosed spaces (claustrophobia), or water (aquaphobia). There are studies looking at areas of the brain that are affected in relation to fear. When looking at these areas (such as the amygdala), it was proposed that a person learns to fear regardless of whether they have experienced trauma themselves or have observed the fear in others. In a study by Andreas Olsson, Katherine I. Nearing and Elizabeth A. Phelps, the amygdala was affected both when subjects observed someone else being subjected to an aversive event, knowing that the same treatment awaited them, and when subjects were subsequently placed in a fear-provoking situation. This suggests that fear can develop in both conditions, not just from personal history.
Fear is affected by cultural and historical context. For example, in the early 20th century, many Americans feared polio, a disease that can lead to paralysis. There are consistent cross-cultural differences in how people respond to fear. Display rules affect how likely people are to express the facial expression of fear and other emotions.
Fear of victimization is a function of perceived risk and seriousness.
According to surveys, some of the most common fears are of demons and ghosts, the existence of evil powers, cockroaches, spiders, snakes, heights, water, enclosed spaces, tunnels, bridges, needles, social rejection, failure, examinations, and public speaking.
Depending on the region, people may be more likely to fear terrorist attacks, death, war, criminal or gang violence, being alone, the future, nuclear war, flying, clowns, intimacy, other people, and driving.
Fear of the unknown, or irrational fear, is caused by negative thinking (worry) which arises from anxiety accompanied by a subjective sense of apprehension or dread. Irrational fear shares a common neural pathway with other fears, a pathway that engages the nervous system to mobilize bodily resources in the face of danger or threat. Many people are scared of the "unknown". Irrational fear can extend to many areas, such as the hereafter, the next ten years, or even tomorrow. Chronic irrational fear has deleterious effects, since the eliciting stimulus is commonly absent or perceived only through delusions. Such fear can co-occur with conditions under the anxiety disorder umbrella. Being scared may cause people to experience anticipatory fear of what may lie ahead rather than planning for and evaluating it. For example, continuing scholarly education is perceived by many educators as a risk that may cause them fear and stress, and they would rather teach things they have been taught than go and do research.
The ambiguity of situations that tend to be uncertain and unpredictable can cause anxiety, in addition to other psychological and physical problems, in some populations, especially those who encounter it constantly, for example in war-ridden places or in places of conflict, terrorism, or abuse. Poor parenting that instills fear can also impair a child's psychological development or personality. For example, parents tell their children not to talk to strangers in order to protect them. In school, the same children are encouraged not to show fear when talking with strangers, but to be assertive and aware of the risks and the environment in which the interaction takes place. Ambiguous and mixed messages like this can affect their self-esteem and self-confidence. Researchers say talking to strangers is not something to be prevented, but should be allowed in a parent's presence if required. Developing a sense of equanimity to handle various situations is often advocated as an antidote to irrational fear and as an essential skill by a number of ancient philosophies.
Fear of the unknown (FOTU) "may be a, or possibly the, fundamental fear" from early times when there were many threats to life.
Although fear behavior varies from species to species, it is often divided into two main categories; namely, avoidance/flight and immobility. To these, different researchers have added different categories, such as threat display and attack, protective responses (including startle and looming responses), defensive burying, and social responses (including alarm vocalizations and submission). Finally, immobility is often divided into freezing and tonic immobility.
The decision as to which particular fear behavior to perform is determined by the level of fear as well as the specific context, such as environmental characteristics (escape route present, distance to refuge), the presence of a discrete and localized threat, the distance between threat and subject, threat characteristics (speed, size, directness of approach), the characteristics of the subject under threat (size, physical condition, speed, degree of crypsis, protective morphological structures), social conditions (group size), and the amount of experience with the type of the threat.
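As a rough illustration of how a few of these factors might be combined, the sketch below uses a hypothetical scoring rule of my own construction, not a published model from the literature; the function name, thresholds, and chosen factors (threat distance, threat speed, availability of an escape route) are all assumptions made for demonstration.

# Toy illustration only: a hypothetical rule for choosing a fear behavior
# from a few of the contextual factors listed above. The thresholds and
# the scoring are invented for demonstration, not taken from the literature.

def choose_defense(threat_distance_m: float,
                   threat_speed_mps: float,
                   escape_route: bool) -> str:
    """Return one of 'avoid', 'flee', 'freeze', or 'fight'."""
    # Rough proxy for imminence: how soon the threat could reach the subject.
    time_to_contact = threat_distance_m / max(threat_speed_mps, 0.1)

    if time_to_contact > 30:
        return "avoid"      # distant, slow threat: quietly move away
    if escape_route and time_to_contact > 3:
        return "flee"       # escape is possible and there is still time
    if not escape_route and time_to_contact > 3:
        return "freeze"     # no escape route; immobility may avoid detection
    return "fight"          # threat is imminent and unavoidable

# Example: a fast, nearby threat with no way out.
print(choose_defense(threat_distance_m=2.0, threat_speed_mps=3.0, escape_route=False))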
Laboratory studies with rats are often conducted to examine the acquisition and extinction of conditioned fear responses. In 2004, researchers conditioned rats (Rattus norvegicus) to fear a certain stimulus through electric shock. The researchers were then able to extinguish this conditioned fear, to the point that no medications or drugs were able to further aid the extinction process. The rats showed signs of avoidance learning rather than fear, simply avoiding the area that had brought pain. This avoidance learning is seen as a conditioned response, and the behavior can therefore also be unlearned, as supported by the earlier research.
Species-specific defense reactions (SSDRs), or avoidance learning in nature, are the specific tendencies to avoid certain threats or stimuli; they are how animals survive in the wild. Humans and other animals share these reactions, such as the fight-or-flight response, which also includes pseudo-aggression (fake or intimidating aggression) and the freeze response to threats, controlled by the sympathetic nervous system. These SSDRs are learned very quickly through social interaction with others of the same species, with other species, and with the environment. These acquired sets of reactions or responses are not easily forgotten: the animal that survives is the animal that already knows what to fear and how to avoid the threat. An example in humans is the reaction to the sight of a snake: many jump backwards before cognitively realizing what they are jumping away from, and in some cases it turns out to be a stick rather than a snake.
As with many functions of the brain, various brain regions are involved in processing fear in humans and other species. The amygdala communicates in both directions with the prefrontal cortex, hypothalamus, sensory cortex, hippocampus, thalamus, septum, and brainstem. The amygdala plays an important role in SSDRs; for example, the ventral amygdalofugal pathway is essential for associative learning, and SSDRs are learned through interaction with the environment and with others of the same species. An emotional response is created only after the signals have been relayed between the different regions of the brain and the sympathetic nervous system has been activated; this system controls the flight, fight, freeze, fright, and faint responses. A damaged amygdala can cause impairment in the recognition of fear (as in the human case of patient S.M.). Such impairment can leave individuals of different species without the sensation of fear, and they often become overly confident, confronting larger peers or walking up to predators.
Robert C. Bolles (1970), a researcher at the University of Washington, wanted to understand species-specific defense reactions and avoidance learning among animals, but found that the theories of avoidance learning and the tools used to measure this tendency were out of touch with the natural world. He theorized the species-specific defense reaction (SSDR). There are three forms of SSDRs: flight, fight (pseudo-aggression), or freeze. Even domesticated animals have SSDRs, and in those moments the animals revert to atavistic standards and become "wild" again. Bolles states that responses often depend on the reinforcement of a safety signal rather than on the aversive conditioned stimuli. This safety signal can be a source of feedback or even a stimulus change. Intrinsic feedback (information coming from within, such as muscle twitches or an increased heart rate) is seen as more important in SSDRs than extrinsic feedback (stimuli coming from the external environment). Bolles found that most creatures have some intrinsic set of fears that help assure survival of the species. Rats will run away from any shocking event, and pigeons will flap their wings harder when threatened. The wing flapping in pigeons and the scattered running of rats are considered species-specific defense reactions or behaviors. Bolles believed that SSDRs are conditioned through Pavlovian conditioning, not operant conditioning; SSDRs arise from the association between environmental stimuli and adverse events. Michael S. Fanselow conducted an experiment to test some specific defense reactions and observed that rats in two different shock situations responded differently, based on instinct or defensive topography rather than on contextual information.
Species-specific defense responses are created out of fear and are essential for survival. Rats that lack the gene stathmin show no avoidance learning, suggesting a lack of fear, and will often walk directly up to cats and be eaten. Animals use these SSDRs to continue living and to increase their fitness by surviving long enough to procreate. Humans and animals alike use fear to know what should be avoided, and this fear can be learned through association with others in the community or through personal experience with a creature, species, or situation that should be avoided. SSDRs are an evolutionary adaptation seen in many species throughout the world, including rats, chimpanzees, prairie dogs, and humans, an adaptation that helps individual creatures survive in a hostile world.
Fear learning changes across the lifetime due to natural developmental changes in the brain. This includes changes in the prefrontal cortex and the amygdala.
The visual exploration of an emotional face does not follow a fixed pattern but is modulated by the emotional content of the face. Scheller et al. found that participants paid more attention to the eyes when recognising fearful or neutral faces, while the mouth was fixated on when happy faces were presented, irrespective of task demands and the spatial locations of the face stimuli. These findings were replicated when fearful eyes were presented and when canonical face configurations were distorted for fearful, neutral and happy expressions.
The brain structures that are the center of most neurobiological events associated with fear are the two amygdalae, located behind the pituitary gland. Each amygdala is part of a circuitry of fear learning. They are essential for proper adaptation to stress and specific modulation of emotional learning memory. In the presence of a threatening stimulus, the amygdalae generate the secretion of hormones that influence fear and aggression. Once a response to the stimulus in the form of fear or aggression commences, the amygdalae may elicit the release of hormones into the body to put the person into a state of alertness, in which they are ready to move, run, fight, etc. This defensive response is generally referred to in physiology as the fight-or-flight response regulated by the hypothalamus, part of the limbic system. Once the person is in safe mode, meaning that there are no longer any potential threats surrounding them, the amygdalae will send this information to the medial prefrontal cortex (mPFC) where it is stored for similar future situations, which is known as memory consolidation.
Some of the hormones involved during the state of fight-or-flight include epinephrine, which regulates heart rate and metabolism as well as dilating blood vessels and air passages; norepinephrine, which increases heart rate, blood flow to skeletal muscles, and the release of glucose from energy stores; and cortisol, which increases blood sugar and circulating neutrophilic leukocytes and calcium, amongst other things.
After a situation which incites fear occurs, the amygdalae and hippocampus record the event through synaptic plasticity. The stimulation to the hippocampus will cause the individual to remember many details surrounding the situation. Plasticity and memory formation in the amygdala are generated by activation of the neurons in the region. Experimental data supports the notion that synaptic plasticity of the neurons leading to the lateral amygdalae occurs with fear conditioning. In some cases, this forms permanent fear responses such as post-traumatic stress disorder (PTSD) or a phobia. MRI and fMRI scans have shown that the amygdalae in individuals diagnosed with such disorders including bipolar or panic disorder are larger and wired for a higher level of fear.
Pathogens can suppress amygdala activity. Rats infected with the toxoplasmosis parasite become less fearful of cats, sometimes even seeking out their urine-marked areas. This behavior often leads to them being eaten by cats. The parasite then reproduces within the body of the cat. There is evidence that the parasite concentrates itself in the amygdala of infected rats. In a separate experiment, rats with lesions in the amygdala did not express fear or anxiety towards unwanted stimuli. These rats pulled on levers supplying food that sometimes sent out electrical shocks. While they learned to avoid pressing on them, they did not distance themselves from these shock-inducing levers.
Several brain structures other than the amygdalae have also been observed to be activated when individuals are presented with fearful versus neutral faces, namely the occipitocerebellar regions including the fusiform gyrus and the inferior parietal / superior temporal gyri. Fearful eyes, brows and mouth seem to separately reproduce these brain responses. Studies by scientists in Zurich show that the hormone oxytocin, which is related to stress and sex, reduces activity in the brain's fear center.
In threatening situations, insects, aquatic organisms, birds, reptiles, and mammals emit odorant substances, initially called alarm substances, which are chemical signals now called alarm pheromones. This is to defend themselves and at the same time to inform members of the same species of danger and leads to observable behavior change like freezing, defensive behavior, or dispersion depending on circumstances and species. For example, stressed rats release odorant cues that cause other rats to move away from the source of the signal.
After the discovery of pheromones in 1959, alarm pheromones were first described in 1968 in ants and earthworms, and four years later also found in mammals, both mice and rats. Over the next two decades, identification and characterization of these pheromones proceeded in all manner of insects and sea animals, including fish, but it was not until 1990 that more insight into mammalian alarm pheromones was gleaned.
In 1985, a link between odors released by stressed rats and pain perception was discovered: unstressed rats exposed to these odors developed opioid-mediated analgesia. In 1997, researchers found that bees became less responsive to pain after they had been stimulated with isoamyl acetate, a chemical smelling of banana, and a component of bee alarm pheromone. The experiment also showed that the bees' fear-induced pain tolerance was mediated by an endorphin.
By using the forced swimming test in rats as a model of fear-induction, the first mammalian "alarm substance" was found. In 1991, this "alarm substance" was shown to fulfill criteria for pheromones: well-defined behavioral effect, species specificity, minimal influence of experience and control for nonspecific arousal. Rat activity testing with the alarm pheromone, and their preference/avoidance for odors from cylinders containing the pheromone, showed that the pheromone had very low volatility.
In 1993 a connection between alarm chemosignals in mice and their immune response was found. Pheromone production in mice was found to be associated with or mediated by the pituitary gland in 1994.
In 2004, it was demonstrated that rats' alarm pheromones had different effects on the "recipient" rat (the rat perceiving the pheromone) depending on which body region they were released from: pheromone production from the face modified behavior in the recipient rat, e.g. causing sniffing or movement, whereas pheromone secreted from the rat's anal area induced autonomic nervous system stress responses, such as an increase in core body temperature. Further experiments showed that when a rat perceived alarm pheromones, it increased its defensive and risk assessment behavior, and its acoustic startle reflex was enhanced.
It was not until 2011 that a link between severe pain, neuroinflammation and the release of alarm pheromones in rats was found: real-time RT-PCR analysis of rat brain tissues indicated that shocking the footpad of a rat increased the production of proinflammatory cytokines in deep brain structures, namely IL-1β, heteronuclear corticotropin-releasing hormone and c-fos mRNA expression in both the paraventricular nucleus and the bed nucleus of the stria terminalis, and it increased plasma levels of the stress hormone corticosterone.
The neurocircuit for how rats perceive alarm pheromones was shown to involve the hypothalamus, brainstem, and amygdalae, all of which are evolutionarily ancient structures located deep inside the brain (or, in the case of the brainstem, underneath it), away from the cortex, and involved in the fight-or-flight response, as is the case in humans.
Alarm pheromone-induced anxiety in rats has been used to evaluate the degree to which anxiolytics can alleviate anxiety in humans. For this, the change in the acoustic startle reflex of rats with alarm pheromone-induced anxiety (i.e. a reduction of defensiveness) has been measured. Pretreatment of rats with one of five anxiolytics used in clinical medicine was able to reduce their anxiety: namely midazolam, phenelzine (a nonselective monoamine oxidase (MAO) inhibitor), propranolol (a nonselective beta blocker), clonidine (an alpha-2 adrenergic agonist), or CP-154,526 (a corticotropin-releasing hormone antagonist).
Faulty development of odor discrimination impairs the perception of pheromones and pheromone-related behavior, such as aggressive behavior and mating in male rats: the enzyme mitogen-activated protein kinase 7 (MAPK7) has been implicated in regulating the development of the olfactory bulb and odor discrimination, and it is highly expressed in developing rat brains but absent in most regions of adult rat brains. Conditional deletion of the MAPK7 gene in mouse neural stem cells impairs several pheromone-mediated behaviors, including aggression and mating in male mice. These behavioral impairments were not caused by a reduction in the level of testosterone, by physical immobility, by heightened fear or anxiety, or by depression. Using mouse urine as a natural pheromone-containing solution, it has been shown that the impairment was associated with defective detection of related pheromones and with changes in the animals' inborn preference for pheromones related to sexual and reproductive activities.
Lastly, the alleviation of an acute fear response when a friendly peer (in biological language, an affiliative conspecific) tends and befriends is called "social buffering". The term is in analogy to the 1985 "buffering" hypothesis in psychology, where social support has been shown to mitigate the negative health effects of alarm pheromone-mediated distress. The role of a "social pheromone" is suggested by the recent discovery that olfactory signals are responsible for mediating "social buffering" in male rats. "Social buffering" was also observed to mitigate the conditioned fear responses of honeybees. A bee colony exposed to an environment with a high threat of predation did not show increased aggression and aggressive-like gene expression patterns in individual bees, but rather decreased aggression. That the bees did not simply habituate to threats is suggested by the fact that the disturbed colonies also decreased their foraging.
In 2012, biologists proposed that fear pheromones evolved as molecules of "keystone significance", a term coined in analogy to keystone species. Pheromones may determine species compositions and affect rates of energy and material exchange in an ecological community. Thus pheromones generate structure in a food web and play critical roles in maintaining natural systems.
Evidence of chemosensory alarm signals in humans has emerged slowly: although alarm pheromones have not been physically isolated and their chemical structures have not been identified in humans so far, there is evidence for their presence. Androstadienone, for example, a steroidal, endogenous odorant, is a pheromone candidate found in human sweat, axillary hair and plasma. The closely related compound androstenone is involved in communicating dominance, aggression or competition; studies of sex hormone influences on androstenone perception in humans found that a high testosterone level was related to heightened androstenone sensitivity in men, a high testosterone level was related to unhappiness in response to androstenone in men, and a high estradiol level was related to disliking of androstenone in women.
A German study from 2006 showed that when anxiety-induced versus exercise-induced human sweat from a dozen people was pooled and offered to seven study participants, five were able to olfactorily distinguish exercise-induced sweat from room air, and three of these could also distinguish exercise-induced sweat from anxiety-induced sweat. The acoustic startle reflex response to a sound while sensing anxiety sweat was larger than while sensing exercise-induced sweat, as measured by electromyography of the orbital muscle, which is responsible for the eyeblink component. This showed for the first time that fear chemosignals can modulate the startle reflex in humans without emotional mediation; fear chemosignals primed the recipients' "defensive behavior" at the level of the acoustic startle reflex, prior to conscious attention.
In analogy to the social buffering of rats and honeybees in response to chemosignals, induction of empathy by "smelling anxiety" of another person has been found in humans.
A study from 2013 provided brain imaging evidence that human responses to fear chemosignals may be gender-specific. Researchers collected alarm-induced sweat and exercise-induced sweat from donors, extracted it, pooled it, and presented it to 16 unrelated people undergoing functional brain MRI. While stress-induced sweat from males produced a comparably strong emotional response in both females and males, stress-induced sweat from females produced markedly stronger arousal in women than in men. Statistical tests pinpointed this gender-specificity to the right amygdala, with the effect strongest in the superficial nuclei. Since no significant differences were found in the olfactory bulb, the response to female fear-induced signals is likely based on processing the meaning of the cues (the emotional level) rather than on the strength of the chemosensory cues from each gender (the perceptual level).
An approach-avoidance task was set up in which volunteers seeing either an angry or a happy cartoon face on a computer screen pushed a joystick away from them or pulled it toward them as fast as possible. Volunteers smelling androstadienone, masked with a clove oil scent, responded faster, especially to angry faces, than those smelling clove oil only, which was interpreted as androstadienone-related activation of the fear system. A potential mechanism of action is that androstadienone alters "emotional face processing". Androstadienone is known to influence the activity of the fusiform gyrus, which is relevant for face recognition.
Cognitive-consistency theories assume that "when two or more simultaneously active cognitive structures are logically inconsistent, arousal is increased, which activates processes with the expected consequence of increasing consistency and decreasing arousal." In this context, it has been proposed that fear behavior is caused by an inconsistency between a preferred, or expected, situation and the actually perceived situation, and functions to remove the inconsistent stimulus from the perceptual field, for instance by fleeing or hiding, thereby resolving the inconsistency. This approach puts fear in a broader perspective, also involving aggression and curiosity. When the inconsistency between perception and expectancy is small, learning as a result of curiosity reduces inconsistency by updating expectancy to match perception. If the inconsistency is larger, fear or aggressive behavior may be employed to alter the perception in order to make it match expectancy, depending on the size of the inconsistency as well as the specific context. Aggressive behavior is assumed to alter perception by forcefully manipulating it into matching the expected situation, while in some cases thwarted escape may also trigger aggressive behavior in an attempt to remove the thwarting stimulus.
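A minimal sketch of this account is given below. It is a toy illustration only: the function name, the numeric thresholds, and the assignment of aggression to the largest mismatches are arbitrary assumptions made for demonstration, since the text above only says that the choice between fear and aggression depends on the size of the inconsistency and on context.

# Toy illustration of the cognitive-consistency account: the size of the
# inconsistency between expectation and perception determines whether
# curiosity, fear, or aggression is the predicted response.
# Thresholds and the fear/aggression split are invented for demonstration.

def predicted_response(expected: float, perceived: float,
                       small: float = 0.2, large: float = 0.6) -> str:
    inconsistency = abs(perceived - expected)
    if inconsistency <= small:
        return "curiosity / learning (update expectancy to match perception)"
    if inconsistency <= large:
        return "fear (remove the inconsistent stimulus, e.g. flee or hide)"
    return "aggression (force perception to match expectancy)"

print(predicted_response(expected=0.0, perceived=0.1))   # small mismatch
print(predicted_response(expected=0.0, perceived=0.5))   # moderate mismatch
print(predicted_response(expected=0.0, perceived=0.9))   # large mismatch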
In order to improve our understanding of the neural and behavioral mechanisms of adaptive and maladaptive fear, investigators use a variety of translational animal models. These models are particularly important for research that would be too invasive for human studies. Rodents such as mice and rats are common animal models, but other species are also used. Certain aspects of fear research, such as sex, gender, and age differences, still require further study.
These animal models include, but are not limited to, fear conditioning, predator-based psychosocial stress, single prolonged stress, chronic stress models, inescapable foot/tail shocks, immobilization or restraint, and stress enhanced fear learning. While the stress and fear paradigms differ between the models, they tend to involve aspects such as acquisition, generalization, extinction, cognitive regulation, and reconsolidation.
Fear conditioning, also known as Pavlovian or classical conditioning, is a process of learning that involves pairing a neutral stimulus with an unconditioned stimulus (US). A neutral stimulus is something like a bell, tone, or room that does not normally elicit a response, whereas a US is a stimulus that results in a natural or unconditioned response (UR); in Pavlov's famous experiment the neutral stimulus was a bell, the US was food, and the dog's salivation was the UR. Pairing the neutral stimulus with the US results in the UR occurring not only with the US but also with the neutral stimulus. When this occurs, the neutral stimulus is referred to as the conditioned stimulus (CS) and the response as the conditioned response (CR). In the fear conditioning model of Pavlovian conditioning the US is an aversive stimulus such as a shock, loud noise, or unpleasant odor.
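One standard way such acquisition (and the extinction mentioned above) is formalized is the Rescorla-Wagner learning rule. The sketch below is a minimal illustration under that assumption; the model is not discussed in the text itself, the function name is mine, and the parameter values are arbitrary.

# Minimal sketch: acquisition and extinction of a conditioned fear response
# simulated with the Rescorla-Wagner rule, a common formal model of
# Pavlovian conditioning. Parameter values here are arbitrary examples.

def rescorla_wagner(trials, v0=0.0, alpha=0.3, lam_us=1.0, lam_no_us=0.0):
    """Return the associative strength V of the CS after each trial.

    trials: sequence of booleans, True if the US (e.g. a shock) follows the CS.
    """
    v = v0
    history = []
    for us_present in trials:
        lam = lam_us if us_present else lam_no_us
        v += alpha * (lam - v)   # prediction-error update
        history.append(v)
    return history

# Acquisition: 10 CS-US pairings, then extinction: 10 CS-alone presentations.
conditioning = [True] * 10 + [False] * 10
for trial, v in enumerate(rescorla_wagner(conditioning), start=1):
    print(f"trial {trial:2d}: fear strength V = {v:.2f}")

Running this shows the conditioned fear strength rising toward its maximum during the paired trials and decaying back toward zero during the CS-alone trials, mirroring the acquisition and extinction phases described above.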
Predator-based psychosocial stress (PPS) involves a more naturalistic approach to fear learning. Predators such as a cat, a snake, or urine from a fox or cat are used along with other stressors such as immobilization or restraint in order to generate instinctual fear responses.
Chronic stress models include chronic variable stress, chronic social defeat, and chronic mild stress. These models are often used to study how long-term or prolonged stress/pain can alter fear learning and disorders.
Single prolonged stress (SPS) is a fear model that is often used to study PTSD. Its paradigm involves multiple stressors, such as immobilization, a forced swim, and exposure to ether, delivered concurrently to the subject. This is used to study non-naturalistic, uncontrollable situations that can cause the maladaptive fear responses seen in many anxiety and trauma-based disorders.
Stress-enhanced fear learning (SEFL), like SPS, is often used to study the maladaptive fear learning involved in PTSD and other trauma-based disorders. SEFL involves a single extreme stressor, such as a large number of footshocks, simulating a single traumatic event that enhances and alters future fear learning.
A drug treatment for fear conditioning and phobias that acts via the amygdalae is the use of glucocorticoids. In one study, glucocorticoid receptors in the central nuclei of the amygdalae were disrupted in order to better understand the mechanisms of fear and fear conditioning. The glucocorticoid receptors were inhibited using lentiviral vectors containing Cre-recombinase injected into mice. Results showed that disruption of the glucocorticoid receptors prevented conditioned fear behavior: the mice were subjected to auditory cues that would normally cause them to freeze, and a reduction of freezing was observed in the mice whose glucocorticoid receptors had been inhibited.
Cognitive behavioral therapy has been successful in helping people overcome their fear. Because fear is more complex than just forgetting or deleting memories, an active and successful approach involves people repeatedly confronting their fears. By confronting their fears in a safe manner a person can suppress the "fear-triggering memories" or stimuli.
Exposure therapy is known to have helped up to 90% of people with specific phobias to significantly decrease their fear over time.
Another psychological treatment is systematic desensitization, a type of behavior therapy used to completely remove the fear, or to produce a disgusted response to the fear, and then replace it. The replacement response is relaxation, which is established through conditioning. Through conditioning treatments, muscle tension lessens and deep breathing techniques aid in de-tensioning.
There are other methods for treating or coping with one's fear, such as writing down rational thoughts regarding fears. Journal entries are a healthy method of expressing one's fears without compromising safety or causing uncertainty. Another suggestion is a fear ladder. To create a fear ladder, one must write down all of their fears and score them on a scale of one to ten. Next, the person addresses their phobia, starting with the lowest number.
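The small sketch below illustrates the fear-ladder procedure just described: list the fears, score each from one to ten, and work through them starting with the lowest score. The example fears and ratings are made up purely for illustration.

# Sketch of the "fear ladder": score each fear from 1 to 10 and
# address them in ascending order, lowest-rated fear first.
# The example fears and scores below are invented for illustration.

fears = {
    "saying hello to a stranger": 2,
    "speaking up in a small meeting": 4,
    "giving a short talk to colleagues": 7,
    "public speaking to a large audience": 9,
}

# Build the ladder: lowest-rated fear comes first.
ladder = sorted(fears.items(), key=lambda item: item[1])

for step, (fear, score) in enumerate(ladder, start=1):
    print(f"Step {step}: {fear} (rated {score}/10)")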
Religion can help some individuals cope with fear.
People who have damage to their amygdalae, which can be caused by a rare genetic disease known as Urbach–Wiethe disease, are unable to experience fear. The disease destroys both amygdalae in late childhood. Since the discovery of the disease, there have only been 400 recorded cases. A lack of fear can allow someone to get into a dangerous situation they otherwise would have avoided.
The fear of the end of life and of one's existence is, in other words, the fear of death. Historically, attempts were made to reduce this fear by performing rituals, which helped shape the cultural ideas we now hold and also helped preserve them. The methods and patterns of human existence changed at the same time that social formations changed.
When people are faced with their own thoughts of death, they either accept that they are dying or will die because they have lived a full life, or they experience fear. The terror management theory was developed in response to this. The theory states that a person's cultural worldviews (religion, values, etc.) will mitigate the terror associated with the fear of death through avoidance. To help manage their terror, people find solace in death-denying beliefs, such as their religion. Another way people cope with death-related fears is by pushing any thoughts of death into the future or by avoiding these thoughts altogether through distractions. Although there are methods of coping with the terror associated with the fear of death, not everyone suffers from these same uncertainties. People who believe they have lived life to the "fullest" typically do not fear death.
Death anxiety is multidimensional; it covers "fears related to one's own death, the death of others, fear of the unknown after death, fear of obliteration, and fear of the dying process, which includes fear of a slow death and a painful death".
The Yale philosopher Shelly Kagan examined fear of death in a 2007 Yale open course by examining the following questions: Is fear of death a reasonable and appropriate response? What conditions are required, and what conditions are appropriate, for feeling fear of death? What is meant by fear, and how much fear is appropriate? According to Kagan, for fear in general to make sense, three conditions should be met: the object of fear needs to be something bad, there needs to be a non-negligible chance that the bad thing will happen, and there needs to be some uncertainty about whether it will happen.
The amount of fear should also be appropriate to the size of "the bad". If the three conditions are not met, fear is an inappropriate emotion. Kagan argues that death does not meet the first two criteria, even if death is a "deprivation of good things" and even if one believes in a painful afterlife. Because death is certain, it also does not meet the third criterion, but he grants that the unpredictability of when one dies may be cause for a sense of fear.
In a 2003 study of 167 women and 121 men, aged 65–87, low self-efficacy predicted fear of the unknown after death and fear of dying for women and men better than demographics, social support, and physical health. Fear of death was measured by a "Multidimensional Fear of Death Scale" which included the 8 subscales Fear of Dying, Fear of the Dead, Fear of Being Destroyed, Fear for Significant Others, Fear of the Unknown, Fear of Conscious Death, Fear for the Body After Death, and Fear of Premature Death. In hierarchical multiple regression analysis, the most potent predictors of death fears were low "spiritual health efficacy", defined as beliefs relating to one's perceived ability to generate spiritually based faith and inner strength, and low "instrumental efficacy", defined as beliefs relating to one's perceived ability to manage activities of daily living.
Psychologists have tested the hypotheses that fear of death motivates religious commitment, and that assurances about an afterlife alleviate the fear, with equivocal results. Religiosity can be related to fear of death when the afterlife is portrayed as time of punishment. "Intrinsic religiosity", as opposed to mere "formal religious involvement", has been found to be negatively correlated with death anxiety. In a 1976 study of people of various Christian denominations, those who were most firm in their faith, who attended religious services weekly, were the least afraid of dying. The survey found a negative correlation between fear of death and "religious concern".
In a 2006 study of white, Christian men and women the hypothesis was tested that traditional, church-centered religiousness and de-institutionalized spiritual seeking are ways of approaching fear of death in old age. Both religiousness and spirituality were related to positive psychosocial functioning, but only church-centered religiousness protected subjects against the fear of death.
Statius in the Thebaid (Book 3, line 661) aired the irreverent suggestion that "fear first made gods in the world".
From a Christian theological perspective, the word fear can encompass more than simple dread. Robert B. Strimple says that fear includes the "... convergence of awe, reverence, adoration...". Some translations of the Bible, such as the New International Version, sometimes express the concept of fear with the word reverence.
Fear in religious contexts can be seen throughout history, including in the Crusades. In 1095, Pope Urban II called for Christian troops to recover the Holy Land from Muslim control. A misinterpretation of the Pope's message resulted in the slaughter of innocent people: although the first crusade aimed to conquer Muslim territory, hatred became redirected against Jewish communities, notably in the Rhineland massacres of 1096. Some Jewish people who feared for their lives gave in to forced conversion to Christianity because they believed this would secure their safety; others feared betraying their God by conceding to conversion and instead met their fate, which was death.
Fear may be politically and culturally manipulated to persuade the citizenry of ideas which would otherwise be widely rejected, or to dissuade the citizenry from ideas which would otherwise be widely supported. In contexts of disaster, nation-states manage fear not only to provide their citizens with an explanation of the event or to blame some minorities, but also to adjust their citizens' previous beliefs.
Fear can alter how a person thinks or reacts to situations because fear has the power to inhibit rational thinking. As a result, people who do not experience fear are able to use fear as a tool to manipulate others. People who are experiencing fear seek preservation through safety and can be manipulated by a person who offers the safety being sought. "When we're afraid, a manipulator can talk us out of the truth we see right in front of us. Words become more real than reality." In this way, a manipulator is able to use our fear to talk us out of the truth and instead make us believe and trust in their version of it. Politicians are notorious for using fear to manipulate people into supporting their policies.
Fear is found and reflected in mythology and folklore as well as in works of fiction such as novels and films.
Works of dystopian and (post)apocalyptic fiction convey the fears and anxieties of societies.
The fear of the world's end is about as old as civilization itself. In a 1967 study, Frank Kermode suggests that the failure of religious prophecies led to a shift in how society apprehends this ancient mode. Scientific and critical thought supplanting religious and mythical thought, as well as a public emancipation, may be the cause of eschatology being replaced by more realistic scenarios. Such scenarios may constructively provoke discussion and steps to be taken to prevent the depicted catastrophes.
The Story of the Youth Who Went Forth to Learn What Fear Was is a German fairy tale dealing with the topic of not knowing fear. Many stories also include characters who fear the antagonist of the plot. One important characteristic of historical and mythical heroes across cultures is to be fearless in the face of big and often lethal enemies.
In the world of athletics, fear is often used as a means of motivation not to fail. This involves using fear in a way that increases the chances of a positive outcome. In this case, the fear that is created is initially a cognitive state in the receiver. This initial state generates the athlete's first response, which opens the possibility of a fight-or-flight reaction that in turn increases or decreases the likelihood of success or failure in the given situation. The amount of time the athlete has to make this decision is small, but it is still enough time for the receiver to make a determination through cognition. Even though the decision is made quickly, it is shaped by past events the athlete has experienced; the results of those past events determine how the athlete makes the cognitive decision in the split second he or she has.
Fear of failure as described above has been studied frequently in the field of sport psychology. Many scholars have tried to determine how often fear of failure is triggered within athletes, as well as what personalities of athletes most often choose to use this type of motivation. Studies have also been conducted to determine the success rate of this method of motivation.
Murray's Explorations in Personality (1938) was one of the first studies to identify fear of failure as a motive to avoid failure or to achieve success. His studies suggested that infavoidance, the need to avoid failure, was found in many college-aged men during the time of his research in 1938. This was a monumental finding in the field of psychology because it allowed other researchers to better clarify how fear of failure can be a determinant of creating achievement goals as well as how it can be used in the actual act of achievement.
In the context of sport, a model was created by R.S. Lazarus in 1991 that uses the cognitive-motivational-relational theory of emotion.
It holds that Fear of Failure results when beliefs or cognitive schemas about aversive consequences of failing are activated by situations in which failure is possible. These belief systems predispose the individual to make appraisals of threat and experience the state anxiety that is associated with Fear of Failure in evaluative situations.
Another study, done in 2001 by Conroy, Poczwardowski, and Henschen, identified five aversive consequences of failing that have been repeated over time. The five categories include (a) experiencing shame and embarrassment, (b) devaluing one's self-estimate, (c) having an uncertain future, (d) important others losing interest, and (e) upsetting important others. These categories can help one infer the likelihood that an individual will associate failure with one of these threat categories, which will lead them to experience fear of failure.
In summary, the two studies described above led to a more precise definition of fear of failure: "a dispositional tendency to experience apprehension and anxiety in evaluative situations because individuals have learned that failure is associated with aversive consequences".
"title": "Mechanism"
},
{
"paragraph_id": 34,
"text": "In 1985, a link between odors released by stressed rats and pain perception was discovered: unstressed rats exposed to these odors developed opioid-mediated analgesia. In 1997, researchers found that bees became less responsive to pain after they had been stimulated with isoamyl acetate, a chemical smelling of banana, and a component of bee alarm pheromone. The experiment also showed that the bees' fear-induced pain tolerance was mediated by an endorphin.",
"title": "Mechanism"
},
{
"paragraph_id": 35,
"text": "By using the forced swimming test in rats as a model of fear-induction, the first mammalian \"alarm substance\" was found. In 1991, this \"alarm substance\" was shown to fulfill criteria for pheromones: well-defined behavioral effect, species specificity, minimal influence of experience and control for nonspecific arousal. Rat activity testing with the alarm pheromone, and their preference/avoidance for odors from cylinders containing the pheromone, showed that the pheromone had very low volatility.",
"title": "Mechanism"
},
{
"paragraph_id": 36,
"text": "In 1993 a connection between alarm chemosignals in mice and their immune response was found. Pheromone production in mice was found to be associated with or mediated by the pituitary gland in 1994.",
"title": "Mechanism"
},
{
"paragraph_id": 37,
"text": "In 2004, it was demonstrated that rats' alarm pheromones had different effects on the \"recipient\" rat (the rat perceiving the pheromone) depending which body region they were released from: Pheromone production from the face modified behavior in the recipient rat, e.g. caused sniffing or movement, whereas pheromone secreted from the rat's anal area induced autonomic nervous system stress responses, like an increase in core body temperature. Further experiments showed that when a rat perceived alarm pheromones, it increased its defensive and risk assessment behavior, and its acoustic startle reflex was enhanced.",
"title": "Mechanism"
},
{
"paragraph_id": 38,
"text": "It was not until 2011 that a link between severe pain, neuroinflammation and alarm pheromones release in rats was found: real time RT-PCR analysis of rat brain tissues indicated that shocking the footpad of a rat increased its production of proinflammatory cytokines in deep brain structures, namely of IL-1β, heteronuclear Corticotropin-releasing hormone and c-fos mRNA expressions in both the paraventricular nucleus and the bed nucleus of the stria terminalis, and it increased stress hormone levels in plasma (corticosterone).",
"title": "Mechanism"
},
{
"paragraph_id": 39,
"text": "The neurocircuit for how rats perceive alarm pheromones was shown to be related to the hypothalamus, brainstem, and amygdalae, all of which are evolutionary ancient structures deep inside or in the case of the brainstem underneath the brain away from the cortex, and involved in the fight-or-flight response, as is the case in humans.",
"title": "Mechanism"
},
{
"paragraph_id": 40,
"text": "Alarm pheromone-induced anxiety in rats has been used to evaluate the degree to which anxiolytics can alleviate anxiety in humans. For this, the change in the acoustic startle reflex of rats with alarm pheromone-induced anxiety (i.e. reduction of defensiveness) has been measured. Pretreatment of rats with one of five anxiolytics used in clinical medicine was able to reduce their anxiety: namely midazolam, phenelzine (a nonselective monoamine oxidase (MAO) inhibitor), propranolol, a nonselective beta blocker, clonidine, an alpha 2 adrenergic agonist or CP-154,526, a corticotropin-releasing hormone antagonist.",
"title": "Mechanism"
},
{
"paragraph_id": 41,
"text": "Faulty development of odor discrimination impairs the perception of pheromones and pheromone-related behavior, like aggressive behavior and mating in male rats: The enzyme Mitogen-activated protein kinase 7 (MAPK7) has been implicated in regulating the development of the olfactory bulb and odor discrimination and it is highly expressed in developing rat brains, but absent in most regions of adult rat brains. Conditional deletion of the MAPK7gene in mouse neural stem cells impairs several pheromone-mediated behaviors, including aggression and mating in male mice. These behavior impairments were not caused by a reduction in the level of testosterone, by physical immobility, by heightened fear or anxiety or by depression. Using mouse urine as a natural pheromone-containing solution, it has been shown that the impairment was associated with defective detection of related pheromones, and with changes in their inborn preference for pheromones related to sexual and reproductive activities.",
"title": "Mechanism"
},
{
"paragraph_id": 42,
"text": "Lastly, alleviation of an acute fear response because a friendly peer (or in biological language: an affiliative conspecific) tends and befriends is called \"social buffering\". The term is in analogy to the 1985 \"buffering\" hypothesis in psychology, where social support has been proven to mitigate the negative health effects of alarm pheromone mediated distress. The role of a \"social pheromone\" is suggested by the recent discovery that olfactory signals are responsible in mediating the \"social buffering\" in male rats. \"Social buffering\" was also observed to mitigate the conditioned fear responses of honeybees. A bee colony exposed to an environment of high threat of predation did not show increased aggression and aggressive-like gene expression patterns in individual bees, but decreased aggression. That the bees did not simply habituate to threats is suggested by the fact that the disturbed colonies also decreased their foraging.",
"title": "Mechanism"
},
{
"paragraph_id": 43,
"text": "Biologists have proposed in 2012 that fear pheromones evolved as molecules of \"keystone significance\", a term coined in analogy to keystone species. Pheromones may determine species compositions and affect rates of energy and material exchange in an ecological community. Thus pheromones generate structure in a food web and play critical roles in maintaining natural systems.",
"title": "Mechanism"
},
{
"paragraph_id": 44,
"text": "Evidence of chemosensory alarm signals in humans has emerged slowly: Although alarm pheromones have not been physically isolated and their chemical structures have not been identified in humans so far, there is evidence for their presence. Androstadienone, for example, a steroidal, endogenous odorant, is a pheromone candidate found in human sweat, axillary hair and plasma. The closely related compound androstenone is involved in communicating dominance, aggression or competition; sex hormone influences on androstenone perception in humans showed a high testosterone level related to heightened androstenone sensitivity in men, a high testosterone level related to unhappiness in response to androstenone in men, and a high estradiol level related to disliking of androstenone in women.",
"title": "Mechanism"
},
{
"paragraph_id": 45,
"text": "A German study from 2006 showed when anxiety-induced versus exercise-induced human sweat from a dozen people was pooled and offered to seven study participants, of five able to olfactorily distinguish exercise-induced sweat from room air, three could also distinguish exercise-induced sweat from anxiety induced sweat. The acoustic startle reflex response to a sound when sensing anxiety sweat was larger than when sensing exercise-induced sweat, as measured by electromyography analysis of the orbital muscle, which is responsible for the eyeblink component. This showed for the first time that fear chemosignals can modulate the startle reflex in humans without emotional mediation; fear chemosignals primed the recipient's \"defensive behavior\" prior to the subjects' conscious attention on the acoustic startle reflex level.",
"title": "Mechanism"
},
{
"paragraph_id": 46,
"text": "In analogy to the social buffering of rats and honeybees in response to chemosignals, induction of empathy by \"smelling anxiety\" of another person has been found in humans.",
"title": "Mechanism"
},
{
"paragraph_id": 47,
"text": "A study from 2013 provided brain imaging evidence that human responses to fear chemosignals may be gender-specific. Researchers collected alarm-induced sweat and exercise-induced sweat from donors extracted it, pooled it and presented it to 16 unrelated people undergoing functional brain MRI. While stress-induced sweat from males produced a comparably strong emotional response in both females and males, stress-induced sweat from females produced markedly stronger arousal in women than in men. Statistical tests pinpointed this gender-specificity to the right amygdala and strongest in the superficial nuclei. Since no significant differences were found in the olfactory bulb, the response to female fear-induced signals is likely based on processing the meaning, i.e. on the emotional level, rather than the strength of chemosensory cues from each gender, i.e. the perceptual level.",
"title": "Mechanism"
},
{
"paragraph_id": 48,
"text": "An approach-avoidance task was set up where volunteers seeing either an angry or a happy cartoon face on a computer screen pushed away or pulled toward them a joystick as fast as possible. Volunteers smelling androstadienone, masked with clove oil scent responded faster, especially to angry faces than those smelling clove oil only, which was interpreted as androstadienone-related activation of the fear system. A potential mechanism of action is, that androstadienone alters the \"emotional face processing\". Androstadienone is known to influence the activity of the fusiform gyrus which is relevant for face recognition.",
"title": "Mechanism"
},
{
"paragraph_id": 49,
"text": "Cognitive-consistency theories assume that \"when two or more simultaneously active cognitive structures are logically inconsistent, arousal is increased, which activates processes with the expected consequence of increasing consistency and decreasing arousal.\" In this context, it has been proposed that fear behavior is caused by an inconsistency between a preferred, or expected, situation and the actually perceived situation, and functions to remove the inconsistent stimulus from the perceptual field, for instance by fleeing or hiding, thereby resolving the inconsistency. This approach puts fear in a broader perspective, also involving aggression and curiosity. When the inconsistency between perception and expectancy is small, learning as a result of curiosity reduces inconsistency by updating expectancy to match perception. If the inconsistency is larger, fear or aggressive behavior may be employed to alter the perception in order to make it match expectancy, depending on the size of the inconsistency as well as the specific context. Aggressive behavior is assumed to alter perception by forcefully manipulating it into matching the expected situation, while in some cases thwarted escape may also trigger aggressive behavior in an attempt to remove the thwarting stimulus.",
"title": "Mechanism"
},
{
"paragraph_id": 50,
"text": "In order to improve our understanding of the neural and behavioral mechanisms of adaptive and maladaptive fear, investigators use a variety of translational animal models. These models are particularly important for research that would be too invasive for human studies. Rodents such as mice and rats are common animal models, but other species are used. Certain aspects of fear research still requires more research such as sex, gender, and age differences.",
"title": "Research"
},
{
"paragraph_id": 51,
"text": "These animal models include, but are not limited to, fear conditioning, predator-based psychosocial stress, single prolonged stress, chronic stress models, inescapable foot/tail shocks, immobilization or restraint, and stress enhanced fear learning. While the stress and fear paradigms differ between the models, they tend to involve aspects such as acquisition, generalization, extinction, cognitive regulation, and reconsolidation.",
"title": "Research"
},
{
"paragraph_id": 52,
"text": "Fear conditioning, also known as Pavlovian or classical conditioning, is a process of learning that involves pairing a neutral stimulus with an unconditional stimulus (US). A neutral stimulus is something like a bell, tone, or room that doesn't illicit a response normally where a US is a stimulus that results in a natural or unconditioned response (UR) – in Pavlov's famous experiment the neutral stimulus is a bell and the US would be food with the dog's salvation being the UR. Pairing the neutral stimulus and the US results in the UR occurring not only with the US but also the neutral stimulus. When this occurs the neutral stimulus is referred to as the conditional stimulus (CS) and the response the conditional response (CR). In the fear conditioning model of Pavlovian conditioning the US is an aversive stimulus such as a shock, tone, or unpleasant odor.",
"title": "Research"
},
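A minimal illustrative sketch of the pairing process described above, written in Python. It uses the textbook Rescorla-Wagner learning rule, which is an assumption made here for illustration only (the rule is not named in the article), to show how repeated CS-US pairings build up a conditioned response:

    # Illustrative only: simulate how the associative strength of a conditioned
    # stimulus (e.g. a tone) grows over repeated pairings with an aversive
    # unconditioned stimulus (e.g. a footshock), using the Rescorla-Wagner rule.
    def conditioning_curve(trials, alpha=0.3, lam=1.0):
        """Return the associative strength V of the CS after each CS-US pairing."""
        v, history = 0.0, []
        for _ in range(trials):
            v += alpha * (lam - v)   # prediction-error update toward the asymptote lam
            history.append(round(v, 3))
        return history

    # After about ten pairings the CS alone predicts the US almost fully,
    # i.e. a conditioned fear response (such as freezing) appears to the CS by itself.
    print(conditioning_curve(10))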
{
"paragraph_id": 53,
"text": "Predator-based psychosocial stress (PPS) involves a more naturalistic approach to fear learning. Predators such as a cat, a snake, or urine from a fox or cat are used along with other stressors such as immobilization or restraint in order to generate instinctual fear responses.",
"title": "Research"
},
{
"paragraph_id": 54,
"text": "Chronic stress models include chronic variable stress, chronic social defeat, and chronic mild stress. These models are often used to study how long-term or prolonged stress/pain can alter fear learning and disorders.",
"title": "Research"
},
{
"paragraph_id": 55,
"text": "Single prolonged stress (SPS) is a fear model that is often used to study PTSD. It's paradigm involves multiple stressors such as immobilization, a force swim, and exposure to ether delivered concurrently to the subject. This is used to study non-naturalistic, uncontrollable situations that can cause a maladaptive fear responses that is seen in a lot of anxiety and traumatic based disorders.",
"title": "Research"
},
{
"paragraph_id": 56,
"text": "Stress enhanced fear learning (SEFL) like SPS is often used to study the maladaptive fear learning involved in PTSD and other traumatic based disorders. SEFL involves a single extreme stressor such as a large number of footshocks simulating a single traumatic stressor that somehow enhances and alters future fear learning.",
"title": "Research"
},
{
"paragraph_id": 57,
"text": "A drug treatment for fear conditioning and phobias via the amygdalae is the use of glucocorticoids. In one study, glucocorticoid receptors in the central nuclei of the amygdalae were disrupted in order to better understand the mechanisms of fear and fear conditioning. The glucocorticoid receptors were inhibited using lentiviral vectors containing Cre-recombinase injected into mice. Results showed that disruption of the glucocorticoid receptors prevented conditioned fear behavior. The mice were subjected to auditory cues which caused them to freeze normally. A reduction of freezing was observed in the mice that had inhibited glucocorticoid receptors.",
"title": "Management"
},
{
"paragraph_id": 58,
"text": "Cognitive behavioral therapy has been successful in helping people overcome their fear. Because fear is more complex than just forgetting or deleting memories, an active and successful approach involves people repeatedly confronting their fears. By confronting their fears in a safe manner a person can suppress the \"fear-triggering memories\" or stimuli.",
"title": "Management"
},
{
"paragraph_id": 59,
"text": "Exposure therapy has known to have helped up to 90% of people with specific phobias to significantly decrease their fear over time.",
"title": "Management"
},
{
"paragraph_id": 60,
"text": "Another psychological treatment is systematic desensitization, which is a type of behavior therapy used to completely remove the fear or produce a disgusted response to this fear and replace it. The replacement that occurs will be relaxation and will occur through conditioning. Through conditioning treatments, muscle tensioning will lessen and deep breathing techniques will aid in de-tensioning.",
"title": "Management"
},
{
"paragraph_id": 61,
"text": "There are other methods for treating or coping with one's fear, such as writing down rational thoughts regarding fears. Journal entries are a healthy method of expressing one's fears without compromising safety or causing uncertainty. Another suggestion is a fear ladder. To create a fear ladder, one must write down all of their fears and score them on a scale of one to ten. Next, the person addresses their phobia, starting with the lowest number.",
"title": "Management"
},
{
"paragraph_id": 62,
"text": "Religion can help some individuals cope with fear.",
"title": "Management"
},
{
"paragraph_id": 63,
"text": "People who have damage to their amygdalae, which can be caused by a rare genetic disease known as Urbach–Wiethe disease, are unable to experience fear. The disease destroys both amygdalae in late childhood. Since the discovery of the disease, there have only been 400 recorded cases. A lack of fear can allow someone to get into a dangerous situation they otherwise would have avoided.",
"title": "Incapability"
},
{
"paragraph_id": 64,
"text": "The fear of the end of life and its existence is, in other words, the fear of death. Historically, attempts were made to reduce this fear by performing rituals which have helped collect the cultural ideas that we now have in the present. These rituals also helped preserve the cultural ideas. The results and methods of human existence had been changing at the same time that social formation was changing.",
"title": "Society and culture"
},
{
"paragraph_id": 65,
"text": "When people are faced with their own thoughts of death, they either accept that they are dying or will die because they have lived a full life or they will experience fear. A theory was developed in response to this, which is called the terror management theory. The theory states that a person's cultural worldviews (religion, values, etc.) will mitigate the terror associated with the fear of death through avoidance. To help manage their terror, they find solace in their death-denying beliefs, such as their religion. Another way people cope with their death related fears is pushing any thoughts of death into the future or by avoiding these thoughts all together through distractions. Although there are methods for one coping with the terror associated with their fear of death, not everyone suffers from these same uncertainties. People who believe they have lived life to the \"fullest\" typically do not fear death.",
"title": "Society and culture"
},
{
"paragraph_id": 66,
"text": "Death anxiety is multidimensional; it covers \"fears related to one's own death, the death of others, fear of the unknown after death, fear of obliteration, and fear of the dying process, which includes fear of a slow death and a painful death\".",
"title": "Society and culture"
},
{
"paragraph_id": 67,
"text": "The Yale philosopher Shelly Kagan examined fear of death in a 2007 Yale open course by examining the following questions: Is fear of death a reasonable appropriate response? What conditions are required and what are appropriate conditions for feeling fear of death? What is meant by fear, and how much fear is appropriate? According to Kagan for fear in general to make sense, three conditions should be met:",
"title": "Society and culture"
},
{
"paragraph_id": 68,
"text": "The amount of fear should be appropriate to the size of \"the bad\". If the three conditions are not met, fear is an inappropriate emotion. He argues, that death does not meet the first two criteria, even if death is a \"deprivation of good things\" and even if one believes in a painful afterlife. Because death is certain, it also does not meet the third criterion, but he grants that the unpredictability of when one dies may be cause to a sense of fear.",
"title": "Society and culture"
},
{
"paragraph_id": 69,
"text": "In a 2003 study of 167 women and 121 men, aged 65–87, low self-efficacy predicted fear of the unknown after death and fear of dying for women and men better than demographics, social support, and physical health. Fear of death was measured by a \"Multidimensional Fear of Death Scale\" which included the 8 subscales Fear of Dying, Fear of the Dead, Fear of Being Destroyed, Fear for Significant Others, Fear of the Unknown, Fear of Conscious Death, Fear for the Body After Death, and Fear of Premature Death. In hierarchical multiple regression analysis, the most potent predictors of death fears were low \"spiritual health efficacy\", defined as beliefs relating to one's perceived ability to generate spiritually based faith and inner strength, and low \"instrumental efficacy\", defined as beliefs relating to one's perceived ability to manage activities of daily living.",
"title": "Society and culture"
},
{
"paragraph_id": 70,
"text": "Psychologists have tested the hypotheses that fear of death motivates religious commitment, and that assurances about an afterlife alleviate the fear, with equivocal results. Religiosity can be related to fear of death when the afterlife is portrayed as time of punishment. \"Intrinsic religiosity\", as opposed to mere \"formal religious involvement\", has been found to be negatively correlated with death anxiety. In a 1976 study of people of various Christian denominations, those who were most firm in their faith, who attended religious services weekly, were the least afraid of dying. The survey found a negative correlation between fear of death and \"religious concern\".",
"title": "Society and culture"
},
{
"paragraph_id": 71,
"text": "In a 2006 study of white, Christian men and women the hypothesis was tested that traditional, church-centered religiousness and de-institutionalized spiritual seeking are ways of approaching fear of death in old age. Both religiousness and spirituality were related to positive psychosocial functioning, but only church-centered religiousness protected subjects against the fear of death.",
"title": "Society and culture"
},
{
"paragraph_id": 72,
"text": "Statius in the Thebaid (Book 3, line 661) aired the irreverent suggestion that \"fear first made gods in the world\".",
"title": "Society and culture"
},
{
"paragraph_id": 73,
"text": "From a Christian theological perspective, the word fear can encompass more than simple dread. Robert B. Strimple says that fear includes the \"... convergence of awe, reverence, adoration...\". Some translations of the Bible, such as the New International Version, sometimes express the concept of fear with the word reverence.",
"title": "Society and culture"
},
{
"paragraph_id": 74,
"text": "Fear in religious contexts can be seen throughout the years, including in the Crusades. In 1095 Pope Urban II called for Christian troops to recover the Holy Land from the Muslim control. A misinterpretation of the Pope's message resulted in the slaughter of innocent people: although the first crusade aimed to conquer Muslim territory, hate became redirected against Jewish culture - note especially the Rhineland massacres of 1096. Jewish people who feared for their lives gave in to forced conversion to Christianity because they believed this would secure their safety. Other Jewish people feared betraying their God by conceding to a conversion, and instead secured their own fate, which was death.",
"title": "Society and culture"
},
{
"paragraph_id": 75,
"text": "Fear may be politically and culturally manipulated to persuade citizenry of ideas which would otherwise be widely rejected or dissuade citizenry from ideas which would otherwise be widely supported. In contexts of disasters, nation-states manage the fear not only to provide their citizens with an explanation about the event or blaming some minorities, but also to adjust their previous beliefs.",
"title": "Society and culture"
},
{
"paragraph_id": 76,
"text": "Fear can alter how a person thinks or reacts to situations because fear has the power to inhibit one's rational way of thinking. As a result, people who do not experience fear, are able to use fear as a tool to manipulate others. People who are experiencing fear, seek preservation through safety and can be manipulated by a person who is there to provide that safety that is being sought after. \"When we're afraid, a manipulator can talk us out of the truth we see right in front of us. Words become more real than reality\" By this, a manipulator is able to use our fear to manipulate us out the truth and instead make us believe and trust in their truth. Politicians are notorious for using fear to manipulate the people into supporting their policies.",
"title": "Society and culture"
},
{
"paragraph_id": 77,
"text": "Fear is found and reflected in mythology and folklore as well as in works of fiction such as novels and films.",
"title": "Society and culture"
},
{
"paragraph_id": 78,
"text": "Works of dystopian and (post)apocalyptic fiction convey the fears and anxieties of societies.",
"title": "Society and culture"
},
{
"paragraph_id": 79,
"text": "The fear of the world's end is about as old as civilization itself. In a 1967 study, Frank Kermode suggests that the failure of religious prophecies led to a shift in how society apprehends this ancient mode. Scientific and critical thought supplanting religious and mythical thought as well as a public emancipation may be the cause of eschatology becoming replaced by more realistic scenarios. Such might constructively provoke discussion and steps to be taken to prevent depicted catastrophes.",
"title": "Society and culture"
},
{
"paragraph_id": 80,
"text": "The Story of the Youth Who Went Forth to Learn What Fear Was is a German fairy tale dealing with the topic of not knowing fear. Many stories also include characters who fear the antagonist of the plot. One important characteristic of historical and mythical heroes across cultures is to be fearless in the face of big and often lethal enemies.",
"title": "Society and culture"
},
{
"paragraph_id": 81,
"text": "In the world of athletics, fear is often used as a means of motivation to not fail. This situation involves using fear in a way that increases the chances of a positive outcome. In this case, the fear that is being created is initially a cognitive state to the receiver. This initial state is what generates the first response of the athlete, this response generates a possibility of fight or flight reaction by the athlete (receiver), which in turn will increase or decrease the possibility of success or failure in the certain situation for the athlete. The amount of time that the athlete has to determine this decision is small but it is still enough time for the receiver to make a determination through cognition. Even though the decision is made quickly, the decision is determined through past events that have been experienced by the athlete. The results of these past events will determine how the athlete will make his cognitive decision in the split second that he or she has.",
"title": "Society and culture"
},
{
"paragraph_id": 82,
"text": "Fear of failure as described above has been studied frequently in the field of sport psychology. Many scholars have tried to determine how often fear of failure is triggered within athletes, as well as what personalities of athletes most often choose to use this type of motivation. Studies have also been conducted to determine the success rate of this method of motivation.",
"title": "Society and culture"
},
{
"paragraph_id": 83,
"text": "Murray's Exploration in Personal (1938) was one of the first studies that actually identified fear of failure as an actual motive to avoid failure or to achieve success. His studies suggested that inavoidance, the need to avoid failure, was found in many college-aged men during the time of his research in 1938. This was a monumental finding in the field of psychology because it allowed other researchers to better clarify how fear of failure can actually be a determinant of creating achievement goals as well as how it could be used in the actual act of achievement.",
"title": "Society and culture"
},
{
"paragraph_id": 84,
"text": "In the context of sport, a model was created by R.S. Lazarus in 1991 that uses the cognitive-motivational-relational theory of emotion.",
"title": "Society and culture"
},
{
"paragraph_id": 85,
"text": "It holds that Fear of Failure results when beliefs or cognitive schemas about aversive consequences of failing are activated by situations in which failure is possible. These belief systems predispose the individual to make appraisals of threat and experience the state anxiety that is associated with Fear of Failure in evaluative situations.",
"title": "Society and culture"
},
{
"paragraph_id": 86,
"text": "Another study was done in 2001 by Conroy, Poczwardowski, and Henschen that created five aversive consequences of failing that have been repeated over time. The five categories include (a) experiencing shame and embarrassment, (b) devaluing one's self-estimate, (c) having an uncertain future, (d) important others losing interest, (e) upsetting important others. These five categories can help one infer the possibility of an individual to associate failure with one of these threat categories, which will lead them to experiencing fear of failure.",
"title": "Society and culture"
},
{
"paragraph_id": 87,
"text": "In summary, the two studies that were done above created a more precise definition of fear of failure, which is \"a dispositional tendency to experience apprehension and anxiety in evaluative situations because individuals have learned that failure is associated with aversive consequences\".",
"title": "Society and culture"
}
]
| Fear is an intensely unpleasant emotion in response to perceiving or recognizing a danger or threat. Fear causes physiological changes that may produce behavioral reactions such as mounting an aggressive response or fleeing the threat. Fear in human beings may occur in response to a certain stimulus occurring in the present, or in anticipation or expectation of a future threat perceived as a risk to oneself. The fear response arises from the perception of danger leading to confrontation with or escape from/avoiding the threat, which in extreme cases of fear can be a freeze response. In humans and other animals, fear is modulated by the process of cognition and learning. Thus, fear is judged as rational and appropriate, or irrational and inappropriate. An irrational fear is called a phobia. Fear is closely related to the emotion anxiety, which occurs as the result of often future threats that are perceived to be uncontrollable or unavoidable. The fear response serves survival by engendering appropriate behavioral responses, so it has been preserved throughout evolution. Sociological and organizational research also suggests that individuals' fears are not solely dependent on their nature but are also shaped by their social relations and culture, which guide their understanding of when and how much fear to feel. Fear is sometimes incorrectly considered the opposite of courage. For the reason that courage is a willingness to face adversity, fear is an example of a condition that makes the exercise of courage possible. | 2001-05-20T16:15:10Z | 2023-12-15T20:35:34Z | [
"Template:Citation needed",
"Template:Better source needed",
"Template:Cn",
"Template:Refend",
"Template:Short description",
"Template:Refbegin",
"Template:Authority control",
"Template:See also",
"Template:Page needed",
"Template:Anchor",
"Template:Emotion",
"Template:Blockquote",
"Template:Div col end",
"Template:Cite news",
"Template:Webarchive",
"Template:ISBN",
"Template:Wikiquote",
"Template:Wiktionary",
"Template:Cite journal",
"Template:Main",
"Template:Further",
"Template:Div col",
"Template:Cite book",
"Template:Commons category",
"Template:Redirect",
"Template:Reflist",
"Template:Cite web",
"Template:Emotion-footer"
]
| https://en.wikipedia.org/wiki/Fear |
10,830 | Football team | A football team is a group of players selected to play together in the various team sports known as football. Such teams could be selected to play in a match against an opposing team, to represent a football club, group, state or nation, an all-star team or even selected as a hypothetical team (such as a Dream Team or Team of the Century) and never play an actual match.
The difference between a football team and a football club is incorporation: a football club is an entity which is formed and governed by a committee and has members, who may include supporters in addition to players. The benefit of club formation is that it gives teams access to additional volunteer or paid support staff, facilities and equipment.
There are several varieties of football, including association football, gridiron football, Australian rules football, Gaelic football, rugby league and rugby union. The number of players selected for each team, within these varieties and their associated codes, can vary substantially. Sometimes, the word "team" is limited to those who play on the field in a match and does not always include other players who may take part as replacements or emergency players. "Football squad" may be used to be inclusive of these support and reserve players.
The words team and club are sometimes used interchangeably by supporters, typically referring to the team within the club playing in the highest division or competition. A football club is a type of sports club which is an organized or incorporated body. Typically these will have a committee, secretary, president or chairperson, registrar and members. Football clubs typically have a set of rules, including rules under which they play and are themselves typically members of a league or association which are affiliated with a governing body within their sport. Clubs may field multiple teams from their registered players (which may participate in several different divisions or leagues). A club is responsible for ensuring the continued existence of its teams in their respective competitions. The oldest football clubs date back to the early 19th century. While records exist for most incorporated clubs, they do not exist for all football clubs. Standalone clubs are usually run like businesses and appear on official registers. However many football clubs were formed as part of larger organisations (schools, athletic clubs, societies) and therefore public records of their formation and operation may not be kept unless they compete with other teams. Football clubs may also be dormant for periods and be re-formed (for example going into recess for reasons such as war or lack of a league or competition to participate in) and even switch between football codes. Likewise, a football club may fold if it becomes insolvent or is incapable of fielding a team to play matches.
The number of players that take part in the sport simultaneously, thus forming the team are: | [
{
"paragraph_id": 0,
"text": "A football team is a group of players selected to play together in the various team sports known as football. Such teams could be selected to play in a match against an opposing team, to represent a football club, group, state or nation, an all-star team or even selected as a hypothetical team (such as a Dream Team or Team of the Century) and never play an actual match.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The difference between a football team and a football club is incorporation, a football club is an entity which is formed and governed by a committee and has members which may consist of supporters in addition to players. The benefit of club formation is that it gives teams access to additional volunteer or paid support staff, facilities and equipment.",
"title": ""
},
{
"paragraph_id": 2,
"text": "There are several varieties of football, including association football, gridiron football, Australian rules football, Gaelic football, rugby league and rugby union. The number of players selected for each team, within these varieties and their associated codes, can vary substantially. Sometimes, the word \"team\" is limited to those who play on the field in a match and does not always include other players who may take part as replacements or emergency players. \"Football squad\" may be used to be inclusive of these support and reserve players.",
"title": "Summary"
},
{
"paragraph_id": 3,
"text": "The words team and club are sometimes used interchangeably by supporters, typically referring to the team within the club playing in the highest division or competition. A football club is a type of sports club which is an organized or incorporated body. Typically these will have a committee, secretary, president or chairperson, registrar and members. Football clubs typically have a set of rules, including rules under which they play and are themselves typically members of a league or association which are affiliated with a governing body within their sport. Clubs may field multiple teams from their registered players (which may participate in several different divisions or leagues). A club is responsible for ensuring the continued existence of its teams in their respective competitions. The oldest football clubs date back to the early 19th century. While records exist for most incorporated clubs, they do not exist for all football clubs. Standalone clubs are usually run like businesses and appear on official registers. However many football clubs were formed as part of larger organisations (schools, athletic clubs, societies) and therefore public records of their formation and operation may not be kept unless they compete with other teams. Football clubs may also be dormant for periods and be re-formed (for example going into recess for reasons such as war or lack of a league or competition to participate in) and even switch between football codes. Likewise, a football club may fold if it becomes insolvent or is incapable of fielding a team to play matches.",
"title": "Summary"
},
{
"paragraph_id": 4,
"text": "The number of players that take part in the sport simultaneously, thus forming the team are:",
"title": "Variation of player numbers among football codes"
}
]
| A football team is a group of players selected to play together in the various team sports known as football. Such teams could be selected to play in a match against an opposing team, to represent a football club, group, state or nation, an all-star team or even selected as a hypothetical team and never play an actual match. The difference between a football team and a football club is incorporation: a football club is an entity which is formed and governed by a committee and has members, who may include supporters in addition to players. The benefit of club formation is that it gives teams access to additional volunteer or paid support staff, facilities and equipment. | 2001-05-20T18:22:39Z | 2023-08-19T20:02:41Z | [
"Template:Short description",
"Template:Reflist",
"Template:Cite web",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Football_team |
10,831 | F | F, or f, is the sixth letter in the Latin alphabet, used in the modern English alphabet, the alphabets of other western European languages and others worldwide. Its name in English is ef (pronounced /ˈɛf/), and the plural is efs.
The origin of 'F' is the Semitic letter waw that represented a sound like /v/ or /w/. Graphically it originally probably depicted either a hook or a club. It may have been based on a comparable Egyptian hieroglyph such as that which represented the word mace (transliterated as ḥ(dj)):
The Phoenician form of the letter was adopted into Greek as a vowel, upsilon (which resembled its descendant 'Y' but was also the ancestor of the Roman letters 'U', 'V', and 'W'); and, with another form, as a consonant, digamma, which indicated the pronunciation /w/, as in Phoenician. Latin 'F,' despite being pronounced differently, is ultimately descended from digamma and closely resembles it in form.
After sound changes eliminated /w/ from spoken Greek, digamma was used only as a numeral. However, the Greek alphabet also gave rise to other alphabets, and some of these retained letters descended from digamma. In the Etruscan alphabet, 'F' probably represented /w/, as in Greek, and the Etruscans formed the digraph 'FH' to represent /f/. (At the time these letters were borrowed, there was no Greek letter that represented /f/: the Greek letter phi 'Φ' then represented an aspirated voiceless bilabial plosive /p/, although in Modern Greek it has come to represent /f/.) When the Romans adopted the alphabet, they used 'V' (from Greek upsilon) not only for the vowel /u/, but also for the corresponding semivowel /w/, leaving 'F' available for /f/. And so out of the various vav variants in the Mediterranean world, the letter F entered the Roman alphabet attached to a sound which the Greeks did not have. The Roman alphabet forms the basis of the alphabet used today for English and many other languages.
The lowercase 'f' is not related to the visually similar long s, 'ſ' (or medial s). The use of the long s largely died out by the beginning of the 19th century, mostly to prevent confusion with 'f' when using a short mid-bar.
In the English writing system ⟨f⟩ is used to represent the sound /f/, the voiceless labiodental fricative. It is often doubled at the end of words. Exceptionally, it represents the voiced labiodental fricative /v/ in the common word "of". F is the eleventh least frequently used letter in the English language (after G, Y, P, B, V, K, J, X, Q, and Z), with a frequency of about 2.23% in words.
In the writing systems of other languages, ⟨f⟩ commonly represents /f/, [ɸ] or /v/.
The International Phonetic Alphabet uses ⟨f⟩ to represent the voiceless labiodental fricative.
An italic letter f is conventionally used to denote an arbitrary function. See also f with hook (ƒ).
A bold italic letter f is used in musical notation as a dynamic indicator for "loud or strong". It stands for the Italian word forte.
In countries such as the United States, the letter "F" is defined as a failure in terms of academic evaluation. Other countries that use this system include Saudi Arabia, Venezuela, and the Netherlands.
In the hexadecimal number system, the letter "F" or "f" is used to represent the hexadecimal digit fifteen (equivalent to 15 in decimal).
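The equivalence can be checked directly; the following minimal Python sketch (not part of the original article) uses the language's built-in base conversion to confirm that hexadecimal F is decimal 15:

    # Hexadecimal digit F equals decimal 15; Python's int() and hex() built-ins
    # convert between the two representations.
    assert int("F", 16) == 15
    assert hex(15) == "0xf"
    assert int("FF", 16) == 255   # two hex digits: 15*16 + 15
    print(int("F", 16), hex(15))  # -> 15 0xf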
The letter F has become an Internet meme, where it is used to pay respects. This use is derived from the 2014 video game Call of Duty: Advanced Warfare, where in a quick-time event protagonist Jack Mitchell must pay his respects to his friend Will Irons who fell in combat in a previous mission, represented by the player pressing F when playing the PC version. People on the Internet use the letter F usually in a genuine way to express respects, sadness or condolences towards other Internet personalities, Internet memes or other players on certain events, such as death, misfortune or the end of a phenomenon, company, game, series, etc.
These are the code points for the forms of the letter in various systems
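The full table of code points is not reproduced here; as a small illustrative sketch covering only the basic Latin forms "F" and "f" (an assumption, since the article's table also lists other systems and variant forms), the Unicode/ASCII values can be read off with Python's ord():

    # Code points of the basic Latin capital and small letter F.
    for ch in "Ff":
        print(ch, ord(ch), hex(ord(ch)))   # F 70 0x46 / f 102 0x66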
In the hexadecimal (base 16) numbering system, F is a number that corresponds to the number 15 in decimal (base 10) counting. | [
{
"paragraph_id": 0,
"text": "F, or f, is the sixth letter in the Latin alphabet, used in the modern English alphabet, the alphabets of other western European languages and others worldwide. Its name in English is ef (pronounced /ˈɛf/), and the plural is efs.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The origin of 'F' is the Semitic letter waw that represented a sound like /v/ or /w/. Graphically it originally probably depicted either a hook or a club. It may have been based on a comparable Egyptian hieroglyph such as that which represented the word mace (transliterated as ḥ(dj)):",
"title": "History"
},
{
"paragraph_id": 2,
"text": "The Phoenician form of the letter was adopted into Greek as a vowel, upsilon (which resembled its descendant 'Y' but was also the ancestor of the Roman letters 'U', 'V', and 'W'); and, with another form, as a consonant, digamma, which indicated the pronunciation /w/, as in Phoenician. Latin 'F,' despite being pronounced differently, is ultimately descended from digamma and closely resembles it in form.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "After sound changes eliminated /w/ from spoken Greek, digamma was used only as a numeral. However, the Greek alphabet also gave rise to other alphabets, and some of these retained letters descended from digamma. In the Etruscan alphabet, 'F' probably represented /w/, as in Greek, and the Etruscans formed the digraph 'FH' to represent /f/. (At the time these letters were borrowed, there was no Greek letter that represented /f/: the Greek letter phi 'Φ' then represented an aspirated voiceless bilabial plosive /p/, although in Modern Greek it has come to represent /f/.) When the Romans adopted the alphabet, they used 'V' (from Greek upsilon) not only for the vowel /u/, but also for the corresponding semivowel /w/, leaving 'F' available for /f/. And so out of the various vav variants in the Mediterranean world, the letter F entered the Roman alphabet attached to a sound which the Greeks did not have. The Roman alphabet forms the basis of the alphabet used today for English and many other languages.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The lowercase 'f' is not related to the visually similar long s, 'ſ' (or medial s). The use of the long s largely died out by the beginning of the 19th century, mostly to prevent confusion with 'f' when using a short mid-bar.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In the English writing system ⟨f⟩ is used to represent the sound /f/, the voiceless labiodental fricative. It is often doubled at the end of words. Exceptionally, it represents the voiced labiodental fricative /v/ in the common word \"of\". F is the eleventh least frequently used letter in the English language (after G, Y, P, B, V, K, J, X, Q, and Z), with a frequency of about 2.23% in words.",
"title": "Use in writing systems"
},
{
"paragraph_id": 6,
"text": "In the writing systems of other languages, ⟨f⟩ commonly represents /f/, [ɸ] or /v/.",
"title": "Use in writing systems"
},
{
"paragraph_id": 7,
"text": "The International Phonetic Alphabet uses ⟨f⟩ to represent the voiceless labiodental fricative.",
"title": "Use in writing systems"
},
{
"paragraph_id": 8,
"text": "An italic letter f is conventionally used to denote an arbitrary function. See also f with hook (ƒ).",
"title": "Use in writing systems"
},
{
"paragraph_id": 9,
"text": "A bold italic letter f is used in musical notation as a dynamic indicator for \"loud or strong\". It stands for the Italian word forte.",
"title": "Use in writing systems"
},
{
"paragraph_id": 10,
"text": "In countries such as the United States, the letter \"F\" is defined as a failure in terms of academic evaluation. Other countries that use this system include Saudi Arabia, Venezuela, and the Netherlands.",
"title": "Use in writing systems"
},
{
"paragraph_id": 11,
"text": "In the hexadecimal number system, the letter \"F\" or \"f\" is used to represent the hexadecimal digit fifteen (equivalent to 1510).",
"title": "Use in writing systems"
},
{
"paragraph_id": 12,
"text": "The letter F has become an Internet meme, where it is used to pay respects. This use is derived from the 2014 video game Call of Duty: Advanced Warfare, where in a quick-time event protagonist Jack Mitchell must pay his respects to his friend Will Irons who fell in combat in a previous mission, represented by the player pressing F when playing the PC version. People on the Internet use the letter F usually in a genuine way to express respects, sadness or condolences towards other Internet personalities, Internet memes or other players on certain events, such as death, misfortune or the end of a phenomenon, company, game, series, etc.",
"title": "Other uses"
},
{
"paragraph_id": 13,
"text": "These are the code points for the forms of the letter in various systems",
"title": "Code points "
},
{
"paragraph_id": 14,
"text": "In the hexadecimal (base 16) numbering system, F is a number that corresponds to the number 15 in decimal (base 10) counting.",
"title": "Use as a number"
}
]
| F, or f, is the sixth letter in the Latin alphabet, used in the modern English alphabet, the alphabets of other western European languages and others worldwide. Its name in English is ef, and the plural is efs. | 2001-05-20T21:35:51Z | 2023-11-26T01:04:02Z | [
"Template:Technical reasons",
"Template:Not a typo",
"Template:Cite book",
"Template:Cite web",
"Template:Cite work",
"Template:Latin letter info",
"Template:NoteTag",
"Template:Charmap",
"Template:Commons-inline",
"Template:Angbr",
"Template:Unichar",
"Template:NoteFoot",
"Template:Hatnote group",
"Template:Infobox grapheme",
"Template:Serif",
"Template:Letter other reps",
"Template:Cite news",
"Template:-",
"Template:Mvar",
"Template:Lang",
"Template:Short description",
"Template:IPAc-en",
"Template:Main",
"Template:Pp-vandalism",
"Template:IPA",
"Template:Angbr IPA",
"Template:Midsize",
"Template:Clear",
"Template:Reflist",
"Template:Wiktionary-inline",
"Template:Latin alphabet"
]
| https://en.wikipedia.org/wiki/F |
10,834 | Food preservation | Food preservation includes processes that make food more resistant to microorganism growth and slow the oxidation of fats. This slows down the decomposition and rancidification process. Food preservation may also include processes that inhibit visual deterioration, such as the enzymatic browning reaction in apples after they are cut during food preparation. By preserving food, food waste can be reduced, which is an important way to decrease production costs and increase the efficiency of food systems, improve food security and nutrition and contribute towards environmental sustainability. For instance, it can reduce the environmental impact of food production.
Many processes designed to preserve food involve more than one food preservation method. Preserving fruit by turning it into jam, for example, involves boiling (to reduce the fruit's moisture content and to kill bacteria, etc.), sugaring (to prevent their re-growth) and sealing within an airtight jar (to prevent recontamination).
Different food preservation methods have different impacts on the quality of the food and food systems. Some traditional methods of preserving food have been shown to have a lower energy input and carbon footprint compared to modern methods. Some methods of food preservation are known to create carcinogens. In 2015, the International Agency for Research on Cancer of the World Health Organization classified processed meat—i.e., meat that has undergone salting, curing, fermenting, and smoking—as "carcinogenic to humans".
Some techniques of food preservation pre-date the dawn of agriculture. Others were discovered more recently.
Boiling liquids can kill any existing microbes. Milk and water are often boiled to kill any harmful microbes that may be present in them.
Burial of food can preserve it due to a variety of factors: lack of light, lack of oxygen, cool temperatures, pH level, or desiccants in the soil. Burial may be combined with other methods such as salting or fermentation. Most foods can be preserved in soil that is very dry and salty (thus a desiccant) such as sand, or soil that is frozen.
Many root vegetables are very resistant to spoilage and require no other preservation than storage in cool dark conditions, for example by burial in the ground, such as in a storage clamp (not to be confused with a root cellar). Cabbage was traditionally buried during Autumn in northern US farms for preservation. Some methods keep it crispy while other methods produce sauerkraut. A similar process is used in the traditional production of kimchi.
Sometimes meat is buried under conditions that cause preservation. If buried on hot coals or ashes, the heat can kill pathogens, the dry ash can desiccate, and the earth can block oxygen and further contamination. If buried where the earth is very cold, the earth acts like a refrigerator, or, in areas of permafrost, a freezer.
In Orissa, India, it is practical to store rice by burying it underground. This method helps to store for three to six months during the dry season.
Butter and similar substances have been preserved as bog butter in Irish peat bogs for centuries. Century eggs are traditionally created by placing eggs in alkaline mud (or other alkaline substance), resulting in their "inorganic" fermentation through raised pH instead of spoiling. The fermentation preserves them and breaks down some of the complex, less flavorful proteins and fats into simpler, more flavorful ones.
Canning involves cooking food, sealing it in sterilized cans or jars, and boiling the containers to kill or weaken any remaining bacteria as a form of sterilization. It was invented by the French confectioner Nicolas Appert. By 1806, this process was used by the French Navy to preserve meat, fruit, vegetables, and even milk. Although Appert had discovered a new way of preservation, it was not understood until 1864 when Louis Pasteur found the relationship between microorganisms, food spoilage, and illness.
Foods have varying degrees of natural protection against spoilage and may require that the final step occurs in a pressure cooker. High-acid fruits like strawberries require no preservatives to can and only a short boiling cycle, whereas marginal vegetables such as carrots require longer boiling and the addition of other acidic elements. Low-acid foods, such as vegetables and meats, require pressure canning. Food preserved by canning or bottling is at immediate risk of spoilage once the can or bottle has been opened.
Lack of quality control in the canning process may allow ingress of water or micro-organisms. Most such failures are rapidly detected, as decomposition within the can causes gas production and the can will swell or burst. However, there have been examples of poor manufacture (underprocessing) and poor hygiene allowing contamination of canned food by the obligate anaerobe Clostridium botulinum, which produces an acute toxin within the food, leading to severe illness or death. This organism produces no gas or obvious taste and remains undetected by taste or smell. Its toxin is denatured by cooking, however. Cooked mushrooms, when handled poorly and then canned, can support the growth of Staphylococcus aureus, which produces a toxin that is not destroyed by canning or subsequent reheating.
Meat can be preserved by salting it, cooking it at or near 100 °C (212 °F) in some kind of fat (such as lard or tallow), and then storing it immersed in the fat. These preparations were popular in Europe before refrigerators became ubiquitous. They are still popular in France, where the term confit originates. The preparation will keep longer if stored in a cold cellar or buried in cold ground.
Cooling preserves food by slowing down the growth and reproduction of microorganisms and the action of enzymes that causes the food to rot. The introduction of commercial and domestic refrigerators drastically improved the diets of many in the Western world by allowing food such as fresh fruit, salads and dairy products to be stored safely for longer periods, particularly during warm weather.
Before the era of mechanical refrigeration, cooling for food storage occurred in the forms of root cellars and iceboxes. Rural people often did their own ice cutting, whereas town and city dwellers often relied on the ice trade. Today, root cellaring remains popular among people who value various goals, including local food, heirloom crops, traditional home cooking techniques, family farming, frugality, self-sufficiency, organic farming, and others.
The earliest form of curing was dehydration or drying, used as early as 12,000 BC. Smoking and salting techniques improve on the drying process and add antimicrobial agents that aid in preservation. Smoke deposits a number of pyrolysis products onto the food, including the phenols syringol, guaiacol and catechol. Salt accelerates the drying process using osmosis and also inhibits the growth of several common strains of bacteria. More recently nitrites have been used to cure meat, contributing a characteristic pink colour.
Some foods, such as many cheeses, wines, and beers, use specific micro-organisms that combat spoilage from other less-benign organisms. These micro-organisms keep pathogens in check by creating an environment toxic for themselves and other micro-organisms by producing acid or alcohol. Methods of fermentation include, but are not limited to, starter micro-organisms, salt, hops, controlled (usually cool) temperatures and controlled (usually low) levels of oxygen. These methods are used to create the specific controlled conditions that will support the desirable organisms that produce food fit for human consumption.
Fermentation is the microbial conversion of starch and sugars into alcohol. Not only can fermentation produce alcohol, but it can also be a valuable preservation technique. Fermentation can also make foods more nutritious and palatable. For example, drinking water in the Middle Ages was dangerous because it often contained pathogens that could spread disease. When the water is made into beer, the boiling during the brewing process kills any bacteria in the water that could make people sick. Additionally, the water now has the nutrients from the barley and other ingredients, and the microorganisms can also produce vitamins as they ferment.
Freezing is also one of the most commonly used processes, both commercially and domestically, for preserving a very wide range of foods, including prepared foods that would not have required freezing in their unprepared state. For example, potato waffles are stored in the freezer, but potatoes themselves require only a cool dark place to ensure many months' storage. Cold stores provide large-volume, long-term storage for strategic food stocks held in case of national emergency in many countries.
Heating to temperatures which are sufficient to kill microorganisms inside the food is a method used with perpetual stews.
Food may be preserved by cooking in a material that solidifies to form a gel. Such materials include gelatin, agar, maize flour, and arrowroot flour.
Some animal flesh forms a protein gel when cooked. Eels and elvers, and sipunculid worms, are a delicacy in Xiamen, China, as are jellied eels in the East End of London, where they are eaten with mashed potatoes. British cuisine has a rich tradition of potted meats. Meat off-cuts were, until the 1950s, preserved in aspic, a gel made from gelatin and clarified meat broth. Another form of preservation is setting the cooked food in a container and covering it with a layer of fat. Potted chicken liver can be prepared in this way, and so can potted shrimps, to be served on toast. Calf's foot jelly used to be prepared for invalids.
Jellying is one of the steps in producing traditional pâtés. Many jugged meats (see below) are also jellied.
Another type of jellying is fruit preserves, which are preparations of cooked fruits, vegetables and sugar, often stored in glass jam jars and Mason jars. Many varieties of fruit preserves are made globally, including sweet fruit preserves, such as those made from strawberry or apricot, and savory preserves, such as those made from tomatoes or squash. The ingredients used and how they are prepared determine the type of preserves; jams, jellies, and marmalades are all examples of different styles of fruit preserves that vary based upon the fruit used. In English, the word preserves, in plural form, is used to describe all types of jams and jellies.
Meat can be preserved by jugging. Jugging is the process of stewing the meat (commonly game or fish) in a covered earthenware jug or casserole. The animal to be jugged is usually cut into pieces, placed into a tightly sealed jug with brine or gravy, and stewed. Red wine and/or the animal's own blood is sometimes added to the cooking liquid. Jugging was a popular method of preserving meat up until the middle of the 20th century.
Sodium hydroxide (lye) makes food too alkaline for bacterial growth. Lye will saponify fats in the food, which will change its flavor and texture. Lutefisk uses lye in its preparation, as do some olive recipes. Modern recipes for century eggs also call for lye.
Pickling is a method of preserving food in an edible, antimicrobial liquid. Pickling can be broadly classified into two categories: chemical pickling and fermentation pickling.
In chemical pickling, the food is placed in an edible liquid that inhibits or kills bacteria and other microorganisms. Typical pickling agents include brine (high in salt), vinegar, alcohol, and vegetable oil. Many chemical pickling processes also involve heating or boiling so that the food being preserved becomes saturated with the pickling agent. Common chemically pickled foods include cucumbers, peppers, corned beef, herring, and eggs, as well as mixed vegetables such as piccalilli.
In fermentation pickling, bacteria in the liquid produce organic acids as preservation agents, typically by a process that produces lactic acid through the presence of lactobacillales. Fermented pickles include sauerkraut, nukazuke, kimchi, and surströmming.
The earliest cultures have used sugar as a preservative, and it was commonplace to store fruit in honey. Similar to pickled foods, sugar cane was brought to Europe through the trade routes. In northern climates without sufficient sun to dry foods, preserves are made by heating the fruit with sugar. "Sugar tends to draw water from the microbes (plasmolysis). This process leaves the microbial cells dehydrated, thus killing them. In this way, the food will remain safe from microbial spoilage." Sugar is used to preserve fruits, either in an antimicrobial syrup with fruit such as apples, pears, peaches, apricots, and plums, or in crystallized form where the preserved material is cooked in sugar to the point of crystallization and the resultant product is then stored dry. This method is used for the skins of citrus fruit (candied peel), angelica, and ginger. Sugaring can be used in the production of jam and jelly.
Techniques of food preservation were developed in research laboratories for commercial applications.
Aseptic processing works by placing sterilized food (typically by heat, see ultra-high temperature processing) into sterilized packaging material under sterile conditions. The end result is a sealed, sterile food product similar to canned food, but depending on the technique used, damage to food quality is typically reduced compared to canned food. A greater variety of packaging materials can be used as well.
Besides UHT, aseptic processing may be used in conjunction with any of the microbe-reduction technologies listed below. With pasteurization and "high-pressure pasteurization", the food may not be completely sterilized (instead achieving a specified log reduction), but the use of sterile packaging and environments is retained.
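For illustration (this numerical note is an addition, not from the source text), an n-log reduction means that the count of viable organisms is cut by a factor of $10^n$:

$$N = N_0 \cdot 10^{-n}$$

so a 5-log reduction, for example, leaves one surviving organism for every 100,000 initially present.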
Pasteurization is a process for preservation of liquid food. It was originally applied to combat the souring of young local wines. Today, the process is mainly applied to dairy products. In this method, milk is heated to about 70 °C (158 °F) for 15–30 seconds to kill the bacteria present in it and then cooled quickly to 10 °C (50 °F) to prevent the remaining bacteria from growing. The milk is then stored in sterilized bottles or pouches in cold places. This method was invented by Louis Pasteur, a French chemist, in 1862.
Vacuum-packing stores food in a vacuum environment, usually in an air-tight bag or bottle. The vacuum environment strips bacteria of oxygen needed for survival. Vacuum-packing is commonly used for storing nuts to reduce loss of flavor from oxidization. A major drawback to vacuum packaging, at the consumer level, is that vacuum sealing can deform contents and rob certain foods, such as cheese, of their flavor.
Freeze drying, also known as lyophilization or cryodesiccation, is a low temperature dehydration process that involves freezing the product and lowering pressure, removing the ice by sublimation. This is in contrast to dehydration by most conventional methods that evaporate water using heat.
Preservative food additives can be antimicrobial – which inhibit the growth of bacteria or fungi, including mold – or antioxidant, such as oxygen absorbers, which inhibit the oxidation of food constituents. Common antimicrobial preservatives include nisin, sorbates, calcium propionate, sodium nitrate/nitrite, sulfites (sulfur dioxide, sodium bisulfite, potassium hydrogen sulfite, etc.), EDTA, hinokitiol, and ε-polylysine. Antioxidants include tocopherols (Vitamin E), butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT). Other preservatives include ethanol.
Another approach is to impregnate packaging materials (plastic films or others) with antioxidants and antimicrobials.
Irradiation of food is the exposure of food to ionizing radiation. Multiple types of ionizing radiation can be used, including beta particles (high-energy electrons) and gamma rays (emitted from radioactive sources such as cobalt-60 or cesium-137). Irradiation can kill bacteria, molds, and insect pests, reduce the ripening and spoiling of fruits, and at higher doses induce sterility. The technology may be compared to pasteurization; it is sometimes called "cold pasteurization", as the product is not heated. Irradiation may allow lower-quality or contaminated foods to be rendered marketable.
National and international expert bodies have declared food irradiation as "wholesome"; organizations of the United Nations, such as the World Health Organization and Food and Agriculture Organization, endorse food irradiation. Consumers may have a negative view of irradiated food based on the misconception that such food is radioactive; in fact, irradiated food does not and cannot become radioactive. Activists have also opposed food irradiation for other reasons, for example, arguing that irradiation can be used to sterilize contaminated food without resolving the underlying cause of the contamination. International legislation on whether food may be irradiated or not varies worldwide from no regulation to a full ban.
Approximately 500,000 tons of food items are irradiated per year worldwide in over 40 countries. These are mainly spices and condiments, with an increasing segment of fresh fruit irradiated for fruit fly quarantine.
Pulsed electric field (PEF) electroporation is a method for processing cells by means of brief pulses of a strong electric field. PEF holds potential as a low-temperature alternative pasteurization process for sterilizing food products. In PEF processing, a substance is placed between two electrodes and the pulsed electric field is applied. The electric field enlarges the pores of the cell membranes, which kills the cells and releases their contents. PEF for food processing is a developing technology still being researched. There have been limited industrial applications of PEF processing for the pasteurization of fruit juices: several PEF-treated juices are available on the market in Europe, and a juice pasteurization application in the US has used PEF for several years. For cell disintegration, potato processors in particular show great interest in PEF technology as an efficient alternative to their preheaters. Potato applications are already operational in the US and Canada, and there are also commercial PEF potato applications in various countries in Europe, as well as in Australia, India, and China.
Modifying atmosphere is a way to preserve food by operating on the atmosphere around it. It is often used to package fresh produce and other perishable foods.
Nonthermal plasma processing subjects the surface of food to a "flame" of ionized gas molecules, such as helium or nitrogen. This causes micro-organisms to die off on the surface.
High pressure can be used to disable harmful microorganisms and spoilage enzymes while retaining the food's fresh appearance, flavor, texture and nutrients. By 2005, the process was being used for widely sold products ranging from orange juice to guacamole to deli meats. Depending on temperature and pressure settings, HP processing can achieve either a pasteurization-equivalent log reduction or go all the way to sterilization of all microbes.
Biopreservation is the use of natural or controlled microbiota or antimicrobials as a way of preserving food and extending its shelf life. Beneficial bacteria or the fermentation products produced by these bacteria are used in biopreservation to control spoilage and render pathogens inactive in food. It is a benign ecological approach which is gaining increasing attention.
Lactic acid bacteria (LAB) have antagonistic properties that make them particularly useful as biopreservatives. When LABs compete for nutrients, their metabolites often include active antimicrobials such as lactic acid, acetic acid, hydrogen peroxide, and peptide bacteriocins. Some LABs produce the antimicrobial nisin, which is a particularly effective preservative.
LAB bacteriocins are used in the present day as an integral part of hurdle technology. Using them in combination with other preservative techniques can effectively control spoilage bacteria and other pathogens, and can inhibit the activities of a wide spectrum of organisms, including inherently resistant Gram-negative bacteria.
Hurdle technology is a method of ensuring that pathogens in food products can be eliminated or controlled by combining more than one approach. These approaches can be thought of as "hurdles" the pathogen has to overcome if it is to remain active in the food. The right combination of hurdles can ensure all pathogens are eliminated or rendered harmless in the final product.
Hurdle technology has been defined by Leistner (2000) as an intelligent combination of hurdles that secures the microbial safety and stability as well as the organoleptic and nutritional quality and the economic viability of food products. The organoleptic quality of the food refers to its sensory properties, that is its look, taste, smell, and texture.
Examples of hurdles in a food system are high temperature during processing, low temperature during storage, increasing the acidity, lowering the water activity or redox potential, and the presence of preservatives or biopreservatives. According to the type of pathogens and how risky they are, the intensity of the hurdles can be adjusted individually to meet consumer preferences in an economical way, without sacrificing the safety of the product.
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 (license statement/permission). Text taken from The State of Food and Agriculture 2019. Moving forward on food loss and waste reduction, In brief, 24, FAO, FAO.
https://en.wikipedia.org/wiki/Food_preservation
10,835 | Frequency modulation | Frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. The technology is used in telecommunications, radio broadcasting, signal processing, and computing.
In analog frequency modulation, such as FM radio broadcasting of an audio signal representing voice or music, the instantaneous frequency deviation, i.e. the difference between the frequency of the carrier and its center frequency, has a functional relation to the modulating signal amplitude.
Digital data can be encoded and transmitted with a type of frequency modulation known as frequency-shift keying (FSK), in which the instantaneous frequency of the carrier is shifted among a set of frequencies. The frequencies may represent digits, such as '0' and '1'. FSK is widely used in computer modems, such as fax modems, telephone caller ID systems, garage door openers, and other low-frequency transmissions. Radioteletype also uses FSK.
Frequency modulation is widely used for FM radio broadcasting. It is also used in telemetry, radar, seismic prospecting, monitoring newborns for seizures via EEG, two-way radio systems, sound synthesis, magnetic tape-recording systems and some video-transmission systems. In radio transmission, an advantage of frequency modulation is that it has a larger signal-to-noise ratio and therefore rejects radio frequency interference better than an equal power amplitude modulation (AM) signal. For this reason, most music is broadcast over FM radio.
However, under severe enough multipath conditions FM performs much more poorly than AM, with distinct high-frequency noise artifacts that are audible at lower volumes and with less complex tones. With high enough volume and carrier deviation, audio distortion starts to occur that would not be present without multipath or with an AM signal.
Frequency modulation and phase modulation are the two complementary principal methods of angle modulation; phase modulation is often used as an intermediate step to achieve frequency modulation. These methods contrast with amplitude modulation, in which the amplitude of the carrier wave varies, while the frequency and phase remain constant.
If the information to be transmitted (i.e., the baseband signal) is $x_m(t)$ and the sinusoidal carrier is $x_c(t) = A_c \cos(2\pi f_c t)$, where $f_c$ is the carrier's base frequency and $A_c$ is the carrier's amplitude, the modulator combines the carrier with the baseband data signal to get the transmitted signal:

$$y(t) = A_c \cos\left(2\pi \int_0^t f(\tau)\,d\tau\right) = A_c \cos\left(2\pi f_c t + 2\pi f_\Delta \int_0^t x_m(\tau)\,d\tau\right)$$

where $f_\Delta = K_f A_m$, with $K_f$ being the sensitivity of the frequency modulator and $A_m$ being the amplitude of the modulating signal or baseband signal.

In this equation, $f(\tau)$ is the instantaneous frequency of the oscillator and $f_\Delta$ is the frequency deviation, which represents the maximum shift away from $f_c$ in one direction, assuming $x_m(t)$ is limited to the range ±1.
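As an illustrative numerical sketch (not part of the original article), the relation above can be implemented by integrating the modulating signal and using the running integral as the instantaneous phase; all parameter values and variable names below are arbitrary examples:

```python
import numpy as np

# Illustrative parameters (arbitrary example values, not taken from the article)
fs = 48_000        # sample rate, Hz
fc = 5_000         # carrier frequency, Hz
f_delta = 1_000    # peak frequency deviation, Hz
fm = 200           # modulating tone frequency, Hz
Ac = 1.0           # carrier amplitude

t = np.arange(0, 0.05, 1 / fs)
xm = np.cos(2 * np.pi * fm * t)                 # baseband signal, limited to +/-1

# Instantaneous frequency f(t) = fc + f_delta * xm(t); integrate it to get phase.
phase = 2 * np.pi * np.cumsum(fc + f_delta * xm) / fs
y = Ac * np.cos(phase)                          # frequency-modulated signal
```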
It is important to realize that this process of integrating the instantaneous frequency to create an instantaneous phase is quite different from what the term "frequency modulation" naively implies, namely directly adding the modulating signal to the carrier frequency:

$$y(t) = A_c \cos\bigl(2\pi\,[f_c + f_\Delta x_m(t)]\,t\bigr)$$

which would result in a modulated signal that has spurious local minima and maxima that do not correspond to those of the carrier.
While most of the energy of the signal is contained within $f_c \pm f_\Delta$, it can be shown by Fourier analysis that a wider range of frequencies is required to precisely represent an FM signal. The frequency spectrum of an actual FM signal has components extending infinitely, although their amplitude decreases and higher-order components are often neglected in practical design problems.
Mathematically, a baseband modulating signal may be approximated by a sinusoidal continuous wave signal with a frequency $f_m$; this is also known as single-tone modulation. Taking $x_m(t) = \cos(2\pi f_m t)$, the integral of such a signal is:

$$\int_0^t x_m(\tau)\,d\tau = \frac{\sin(2\pi f_m t)}{2\pi f_m}$$

In this case, the expression for $y(t)$ above simplifies to:

$$y(t) = A_c \cos\!\left(2\pi f_c t + \frac{f_\Delta}{f_m}\sin(2\pi f_m t)\right)$$

where the amplitude $A_m$ of the modulating sinusoid is represented in the peak deviation $f_\Delta = K_f A_m$ (see frequency deviation).
The harmonic distribution of a sine wave carrier modulated by such a sinusoidal signal can be represented with Bessel functions; this provides the basis for a mathematical understanding of frequency modulation in the frequency domain.
As in other modulation systems, the modulation index indicates by how much the modulated variable varies around its unmodulated level. It relates to variations in the carrier frequency:

$$h = \frac{\Delta f}{f_m}$$

where $f_m$ is the highest frequency component present in the modulating signal $x_m(t)$, and $\Delta f$ is the peak frequency deviation, i.e. the maximum deviation of the instantaneous frequency from the carrier frequency. For a sine wave modulation, the modulation index is seen to be the ratio of the peak frequency deviation of the carrier wave to the frequency of the modulating sine wave.
If $h \ll 1$, the modulation is called narrowband FM (NFM), and its bandwidth is approximately $2f_m$. Sometimes a modulation index $h < 0.3$ is taken as the threshold for NFM; otherwise the modulation is considered wideband FM (WFM or FM).
For digital modulation systems, for example binary frequency-shift keying (BFSK), where a binary signal modulates the carrier, the modulation index is given by:

$$h = \frac{\Delta f}{f_m} = \frac{\Delta f}{\frac{1}{2T_s}} = 2\,\Delta f\,T_s$$

where $T_s$ is the symbol period, and $f_m = \frac{1}{2T_s}$ is used as the highest frequency of the modulating binary waveform by convention, even though it would be more accurate to say it is the highest fundamental of the modulating binary waveform. In the case of digital modulation, the carrier $f_c$ is never transmitted. Rather, one of two frequencies is transmitted, either $f_c + \Delta f$ or $f_c - \Delta f$, depending on the binary state 0 or 1 of the modulation signal.
If $h \gg 1$, the modulation is called wideband FM and its bandwidth is approximately $2f_\Delta$. While wideband FM uses more bandwidth, it can improve the signal-to-noise ratio significantly; for example, doubling the value of $\Delta f$, while keeping $f_m$ constant, results in an eight-fold improvement in the signal-to-noise ratio. (Compare this with chirp spread spectrum, which uses extremely wide frequency deviations to achieve processing gains comparable to traditional, better-known spread-spectrum modes).
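The eight-fold figure quoted above reflects the roughly cubic growth of the FM advantage with modulation index. A commonly cited approximation for sinusoidal modulation above threshold (stated here only as a proportionality, since the constant depends on the reference chosen) is:

$$\mathrm{SNR}_{\text{out}} \propto h^2(h+1)\cdot \mathrm{CNR}, \qquad \frac{(2h)^2(2h+1)}{h^2(h+1)} \approx 2^3 = 8 \quad \text{for } h \gg 1$$

where CNR is the carrier-to-noise ratio measured in the occupied bandwidth.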
With a tone-modulated FM wave, if the modulation frequency is held constant and the modulation index is increased, the (non-negligible) bandwidth of the FM signal increases but the spacing between spectra remains the same; some spectral components decrease in strength as others increase. If the frequency deviation is held constant and the modulation frequency increased, the spacing between spectra increases.
Frequency modulation can be classified as narrowband if the change in the carrier frequency is about the same as the signal frequency, or as wideband if the change in the carrier frequency is much higher (modulation index > 1) than the signal frequency. For example, narrowband FM (NFM) is used for two-way radio systems such as Family Radio Service, in which the carrier is allowed to deviate only 2.5 kHz above and below the center frequency with speech signals of no more than 3.5 kHz bandwidth. Wideband FM is used for FM broadcasting, in which music and speech are transmitted with up to 75 kHz deviation from the center frequency and carry audio with up to a 20 kHz bandwidth and subcarriers up to 92 kHz.
For the case of a carrier modulated by a single sine wave, the resulting frequency spectrum can be calculated using Bessel functions of the first kind, as a function of the sideband number and the modulation index. The carrier and sideband amplitudes are illustrated for different modulation indices of FM signals. For particular values of the modulation index, the carrier amplitude becomes zero and all the signal power is in the sidebands.
Since the sidebands are on both sides of the carrier, their count is doubled, and then multiplied by the modulating frequency to find the bandwidth. For example, 3 kHz deviation modulated by a 2.2 kHz audio tone produces a modulation index of 1.36. Suppose that we limit ourselves to only those sidebands that have a relative amplitude of at least 0.01. Then, examining the chart shows this modulation index will produce three sidebands. These three sidebands, when doubled, give (6 × 2.2 kHz) or a 13.2 kHz required bandwidth.
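The sideband count in the example above can be checked numerically with Bessel functions of the first kind; the sketch below is illustrative, with the 0.01 relative-amplitude threshold taken from the example:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n

f_dev, f_mod = 3_000.0, 2_200.0          # deviation and modulating tone (Hz)
h = f_dev / f_mod                        # modulation index, about 1.36

# The relative amplitude of the n-th sideband pair is |J_n(h)|.
n = np.arange(1, 20)
significant = n[np.abs(jv(n, h)) >= 0.01]

bandwidth = 2 * significant.max() * f_mod
print(len(significant), bandwidth)       # 3 sidebands -> 13.2 kHz
```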
A rule of thumb, Carson's rule, states that nearly all (≈98 percent) of the power of a frequency-modulated signal lies within a bandwidth $B_T$ of:

$$B_T = 2(\Delta f + f_m) = 2 f_m (\beta + 1)$$

where $\Delta f$, as defined above, is the peak deviation of the instantaneous frequency $f(t)$ from the center carrier frequency $f_c$, $\beta$ is the modulation index, which is the ratio of frequency deviation to the highest frequency in the modulating signal, and $f_m$ is the highest frequency in the modulating signal. Carson's rule in this form applies only to sinusoidal modulating signals. For non-sinusoidal signals:

$$B_T = 2(\Delta f + W) = 2W(D + 1)$$

where $W$ is the highest frequency in the (non-sinusoidal) modulating signal and $D$ is the deviation ratio, which is the ratio of the peak frequency deviation to the highest frequency of the modulating non-sinusoidal signal.
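A minimal helper expressing Carson's rule, applied here to the tone example given earlier (the 3 kHz deviation and 2.2 kHz tone are repeated from that example):

```python
def carson_bandwidth(peak_deviation_hz: float, highest_mod_freq_hz: float) -> float:
    """Approximate FM bandwidth containing roughly 98% of the signal power."""
    return 2.0 * (peak_deviation_hz + highest_mod_freq_hz)

# 3 kHz deviation with a 2.2 kHz tone: Carson's rule gives 10.4 kHz,
# somewhat narrower than the 13.2 kHz found by counting Bessel sidebands above.
print(carson_bandwidth(3_000, 2_200))
```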
FM provides improved signal-to-noise ratio (SNR), as compared for example with AM. Compared with an optimum AM scheme, FM typically has poorer SNR below a certain signal level called the noise threshold, but above a higher level – the full improvement or full quieting threshold – the SNR is much improved over AM. The improvement depends on modulation level and deviation. For typical voice communications channels, improvements are typically 5–15 dB. FM broadcasting using wider deviation can achieve even greater improvements. Additional techniques, such as pre-emphasis of higher audio frequencies with corresponding de-emphasis in the receiver, are generally used to improve overall SNR in FM circuits. Since FM signals have constant amplitude, FM receivers normally have limiters that remove AM noise, further improving SNR.
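As an illustration of the de-emphasis idea (a sketch, not a broadcast-grade design), a first-order network with a time constant of 50 or 75 µs, as commonly used in FM broadcasting, can be approximated digitally; the sample rate and input signal below are placeholders:

```python
import numpy as np
from scipy.signal import bilinear, lfilter

fs = 48_000    # sample rate, Hz (assumed value)
tau = 75e-6    # de-emphasis time constant; 75 us is used in North America

# Analog prototype H(s) = 1 / (1 + s*tau), converted to a digital IIR filter.
b, a = bilinear([1.0], [tau, 1.0], fs=fs)

demodulated_audio = np.random.randn(fs)          # placeholder for demodulated audio
deemphasized = lfilter(b, a, demodulated_audio)  # attenuates the boosted highs
```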
FM signals can be generated using either direct or indirect frequency modulation. In direct FM modulation, the message signal directly controls the frequency of a voltage-controlled oscillator. In indirect FM modulation, the message signal is integrated and then used to phase-modulate a crystal-controlled oscillator, and the output is passed through frequency multipliers to raise the frequency deviation to the desired level.
Many FM detector circuits exist. A common method for recovering the information signal is through a Foster–Seeley discriminator or ratio detector. A phase-locked loop can be used as an FM demodulator. Slope detection demodulates an FM signal by using a tuned circuit which has its resonant frequency slightly offset from the carrier. As the frequency rises and falls the tuned circuit provides a changing amplitude of response, converting FM to AM. AM receivers may detect some FM transmissions by this means, although it does not provide an efficient means of detection for FM broadcasts. In Software-Defined Radio implementations the demodulation may be carried out by using the Hilbert transform (implemented as a filter) to recover the instantaneous phase, and thereafter differentiating this phase (using another filter) to recover the instantaneous frequency. Alternatively, a complex mixer followed by a bandpass filter may be used to translate the signal to baseband, and then proceeding as before.
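Below is a minimal sketch of the Hilbert-transform approach described above, assuming the FM signal is already available as real-valued samples; the function and variable names are illustrative only:

```python
import numpy as np
from scipy.signal import hilbert

def fm_demodulate(received: np.ndarray, fs: float) -> np.ndarray:
    """Recover the instantaneous frequency (in Hz) of a real-valued FM signal."""
    analytic = hilbert(received)                        # analytic (complex) signal
    inst_phase = np.unwrap(np.angle(analytic))          # instantaneous phase
    inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # instantaneous frequency
    return inst_freq  # carrier frequency plus the scaled message; subtract fc to get baseband
```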
When an echolocating bat approaches a target, its outgoing sounds return as echoes, which are Doppler-shifted upward in frequency. In certain species of bats, which produce constant frequency (CF) echolocation calls, the bats compensate for the Doppler shift by lowering their call frequency as they approach a target. This keeps the returning echo in the same frequency range as the normal echolocation call. This dynamic frequency modulation is called the Doppler Shift Compensation (DSC), and was discovered by Hans Schnitzler in 1968.
FM is also used at intermediate frequencies by analog VCR systems (including VHS) to record the luminance (black and white) portions of the video signal. Commonly, the chrominance component is recorded as a conventional AM signal, using the higher-frequency FM signal as bias. FM is the only feasible method of recording the luminance ("black-and-white") component of video to (and retrieving video from) magnetic tape without distortion; video signals have a large range of frequency components – from a few hertz to several megahertz, too wide for equalizers to work with due to electronic noise below −60 dB. FM also keeps the tape at saturation level, acting as a form of noise reduction; a limiter can mask variations in playback output, and the FM capture effect removes print-through and pre-echo. A continuous pilot-tone, if added to the signal – as was done on V2000 and many Hi-band formats – can keep mechanical jitter under control and assist timebase correction.
These FM systems are unusual, in that they have a ratio of carrier to maximum modulation frequency of less than two; contrast this with FM audio broadcasting, where the ratio is around 10,000. Consider, for example, a 6-MHz carrier modulated at a 3.5-MHz rate; by Bessel analysis, the first sidebands are on 9.5 and 2.5 MHz and the second sidebands are on 13 MHz and −1 MHz. The result is a reversed-phase sideband on +1 MHz; on demodulation, this results in unwanted output at 6 – 1 = 5 MHz. The system must be designed so that this unwanted output is reduced to an acceptable level.
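The arithmetic in that example can be spelled out as a toy calculation:

```python
# Sketch: sideband positions for a 6 MHz carrier modulated at a 3.5 MHz rate.
fc_mhz, fm_mhz = 6.0, 3.5
for n in (1, 2):
    upper = fc_mhz + n * fm_mhz
    lower = fc_mhz - n * fm_mhz
    print(f"order {n}: upper {upper} MHz, lower {lower} MHz"
          + (f" (folds to {abs(lower)} MHz)" if lower < 0 else ""))
# The second-order lower sideband at -1 MHz appears as a reversed-phase sideband
# at +1 MHz; on demodulation it produces unwanted output at 6 - 1 = 5 MHz.
```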
FM is also used at audio frequencies to synthesize sound. This technique, known as FM synthesis, was popularized by early digital synthesizers and became a standard feature in several generations of personal computer sound cards.
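A toy two-operator FM-synthesis voice in that style is sketched below; all frequencies, ratios, and envelope constants are arbitrary illustrative choices.

```python
# Sketch: simple two-operator FM synthesis of a short audio tone.
import numpy as np

fs = 44_100
t = np.arange(0, 1.0, 1 / fs)

carrier_hz = 220.0                    # perceived pitch
mod_ratio = 2.0                       # modulator frequency as a multiple of the carrier
mod_index = 5.0 * np.exp(-3 * t)      # decaying index: bright attack, mellow tail

modulator = np.sin(2 * np.pi * carrier_hz * mod_ratio * t)
tone = np.sin(2 * np.pi * carrier_hz * t + mod_index * modulator)
audio = (tone * np.exp(-2 * t) * 32767).astype(np.int16)  # simple amplitude envelope
print(audio[:10])
```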
Edwin Howard Armstrong (1890–1954) was an American electrical engineer who invented wideband frequency modulation (FM) radio. He patented the regenerative circuit in 1914, the superheterodyne receiver in 1918 and the super-regenerative circuit in 1922. Armstrong presented his paper, "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation", (which first described FM radio) before the New York section of the Institute of Radio Engineers on November 6, 1935. The paper was published in 1936.
As the name implies, wideband FM (WFM) requires a wider signal bandwidth than amplitude modulation by an equivalent modulating signal; this also makes the signal more robust against noise and interference. Frequency modulation is also more robust against signal-amplitude-fading phenomena. As a result, FM was chosen as the modulation standard for high frequency, high fidelity radio transmission, hence the term "FM radio" (although for many years the BBC called it "VHF radio" because commercial FM broadcasting uses part of the VHF band – the FM broadcast band). FM receivers employ a special detector for FM signals and exhibit a phenomenon known as the capture effect, in which the tuner "captures" the stronger of two stations on the same frequency while rejecting the other (compare this with a similar situation on an AM receiver, where both stations can be heard simultaneously). However, frequency drift or a lack of selectivity may cause one station to be overtaken by another on an adjacent channel. Frequency drift was a problem in early (or inexpensive) receivers; inadequate selectivity may affect any tuner.
An FM signal can also be used to carry a stereo signal; this is done with multiplexing and demultiplexing before and after the FM process. The FM modulation and demodulation process is identical in stereo and monaural processes. A high-efficiency radio-frequency switching amplifier can be used to transmit FM signals (and other constant-amplitude signals). For a given signal strength (measured at the receiver antenna), switching amplifiers use less battery power and typically cost less than a linear amplifier. This gives FM another advantage over other modulation methods requiring linear amplifiers, such as AM and QAM.
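As an illustration of the multiplexing step that precedes FM modulation in stereo broadcasting, the sketch below builds the standard composite layout (mono L+R at baseband, a 19 kHz pilot tone, and the L-R difference on a 38 kHz suppressed subcarrier); the mixing levels are illustrative rather than normative.

```python
# Sketch: building a stereo multiplex (MPX) baseband before FM modulation.
import numpy as np

fs = 192_000                          # high enough to carry the 38 kHz subcarrier
t = np.arange(0, 0.05, 1 / fs)
left = np.sin(2 * np.pi * 440 * t)    # toy left and right audio channels
right = np.sin(2 * np.pi * 554 * t)

pilot = 0.1 * np.sin(2 * np.pi * 19_000 * t)
subcarrier = np.sin(2 * np.pi * 38_000 * t)   # phase-locked to twice the pilot
mpx = 0.45 * (left + right) + 0.45 * (left - right) * subcarrier + pilot
print(mpx[:5])  # this composite signal is what frequency-modulates the carrier
```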
FM is commonly used at VHF radio frequencies for high-fidelity broadcasts of music and speech. Analog TV sound is also broadcast using FM. Narrowband FM is used for voice communications in commercial and amateur radio settings. In broadcast services, where audio fidelity is important, wideband FM is generally used. In two-way radio, narrowband FM (NBFM) is used to conserve bandwidth for land mobile, marine mobile and other radio services.
There are reports that on October 5, 1924, Professor Mikhail A. Bonch-Bruevich, during a scientific and technical conversation in the Nizhny Novgorod Radio Laboratory, reported on his new method of telephony, based on a change in the period of oscillations. A demonstration of frequency modulation was carried out on a laboratory model. | [
{
"paragraph_id": 0,
"text": "Frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. The technology is used in telecommunications, radio broadcasting, signal processing, and computing.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In analog frequency modulation, such as radio broadcasting, of an audio signal representing voice or music, the instantaneous frequency deviation, i.e. the difference between the frequency of the carrier and its center frequency, has a functional relation to the modulating signal amplitude.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Digital data can be encoded and transmitted with a type of frequency modulation known as frequency-shift keying (FSK), in which the instantaneous frequency of the carrier is shifted among a set of frequencies. The frequencies may represent digits, such as '0' and '1'. FSK is widely used in computer modems, such as fax modems, telephone caller ID systems, garage door openers, and other low-frequency transmissions. Radioteletype also uses FSK.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Frequency modulation is widely used for FM radio broadcasting. It is also used in telemetry, radar, seismic prospecting, and monitoring newborns for seizures via EEG, two-way radio systems, sound synthesis, magnetic tape-recording systems and some video-transmission systems. In radio transmission, an advantage of frequency modulation is that it has a larger signal-to-noise ratio and therefore rejects radio frequency interference better than an equal power amplitude modulation (AM) signal. For this reason, most music is broadcast over FM radio.",
"title": ""
},
{
"paragraph_id": 4,
"text": "However, under severe enough multipath conditions it performs much more poorly than AM, with distinct high frequency noise artifacts that are audible with lower volumes and less complex tones. With high enough volume and carrier deviation audio distortion starts to occur that otherwise wouldn't be present without multipath or with an AM signal.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Frequency modulation and phase modulation are the two complementary principal methods of angle modulation; phase modulation is often used as an intermediate step to achieve frequency modulation. These methods contrast with amplitude modulation, in which the amplitude of the carrier wave varies, while the frequency and phase remain constant.",
"title": ""
},
{
"paragraph_id": 6,
"text": "If the information to be transmitted (i.e., the baseband signal) is x m ( t ) {\\displaystyle x_{m}(t)} and the sinusoidal carrier is x c ( t ) = A c cos ( 2 π f c t ) {\\displaystyle x_{c}(t)=A_{c}\\cos(2\\pi f_{c}t)\\,} , where fc is the carrier's base frequency, and Ac is the carrier's amplitude, the modulator combines the carrier with the baseband data signal to get the transmitted signal:",
"title": "Theory"
},
{
"paragraph_id": 7,
"text": "where f Δ = K f A m {\\displaystyle f_{\\Delta }=K_{f}A_{m}} , K f {\\displaystyle K_{f}} being the sensitivity of the frequency modulator and A m {\\displaystyle A_{m}} being the amplitude of the modulating signal or baseband signal.",
"title": "Theory"
},
{
"paragraph_id": 8,
"text": "In this equation, f ( τ ) {\\displaystyle f(\\tau )\\,} is the instantaneous frequency of the oscillator and f Δ {\\displaystyle f_{\\Delta }\\,} is the frequency deviation, which represents the maximum shift away from fc in one direction, assuming xm(t) is limited to the range ±1.",
"title": "Theory"
},
{
"paragraph_id": 9,
"text": "It is important to realize that this process of integrating the instantaneous frequency to create an instantaneous phase is quite different from what the term \"frequency modulation\" naively implies, namely directly adding the modulating signal to the carrier frequency",
"title": "Theory"
},
{
"paragraph_id": 10,
"text": "which would result in a modulated signal that has spurious local minima and maxima that do not correspond to those of the carrier.",
"title": "Theory"
},
{
"paragraph_id": 11,
"text": "While most of the energy of the signal is contained within fc ± fΔ, it can be shown by Fourier analysis that a wider range of frequencies is required to precisely represent an FM signal. The frequency spectrum of an actual FM signal has components extending infinitely, although their amplitude decreases and higher-order components are often neglected in practical design problems.",
"title": "Theory"
},
{
"paragraph_id": 12,
"text": "Mathematically, a baseband modulating signal may be approximated by a sinusoidal continuous wave signal with a frequency fm. This method is also named as single-tone modulation. The integral of such a signal is:",
"title": "Theory"
},
{
"paragraph_id": 13,
"text": "In this case, the expression for y(t) above simplifies to:",
"title": "Theory"
},
{
"paragraph_id": 14,
"text": "where the amplitude A m {\\displaystyle A_{m}\\,} of the modulating sinusoid is represented in the peak deviation f Δ = K f A m {\\displaystyle f_{\\Delta }=K_{f}A_{m}} (see frequency deviation).",
"title": "Theory"
},
{
"paragraph_id": 15,
"text": "The harmonic distribution of a sine wave carrier modulated by such a sinusoidal signal can be represented with Bessel functions; this provides the basis for a mathematical understanding of frequency modulation in the frequency domain.",
"title": "Theory"
},
{
"paragraph_id": 16,
"text": "As in other modulation systems, the modulation index indicates by how much the modulated variable varies around its unmodulated level. It relates to variations in the carrier frequency:",
"title": "Theory"
},
{
"paragraph_id": 17,
"text": "where f m {\\displaystyle f_{m}\\,} is the highest frequency component present in the modulating signal xm(t), and Δ f {\\displaystyle \\Delta {}f\\,} is the peak frequency-deviation – i.e. the maximum deviation of the instantaneous frequency from the carrier frequency. For a sine wave modulation, the modulation index is seen to be the ratio of the peak frequency deviation of the carrier wave to the frequency of the modulating sine wave.",
"title": "Theory"
},
{
"paragraph_id": 18,
"text": "If h ≪ 1 {\\displaystyle h\\ll 1} , the modulation is called narrowband FM (NFM), and its bandwidth is approximately 2 f m {\\displaystyle 2f_{m}\\,} . Sometimes modulation index h < 0.3 {\\displaystyle h<0.3} is considered as NFM, otherwise wideband FM (WFM or FM).",
"title": "Theory"
},
{
"paragraph_id": 19,
"text": "For digital modulation systems, for example binary frequency shift keying (BFSK), where a binary signal modulates the carrier, the modulation index is given by:",
"title": "Theory"
},
{
"paragraph_id": 20,
"text": "where T s {\\displaystyle T_{s}\\,} is the symbol period, and f m = 1 2 T s {\\displaystyle f_{m}={\\frac {1}{2T_{s}}}\\,} is used as the highest frequency of the modulating binary waveform by convention, even though it would be more accurate to say it is the highest fundamental of the modulating binary waveform. In the case of digital modulation, the carrier f c {\\displaystyle f_{c}\\,} is never transmitted. Rather, one of two frequencies is transmitted, either f c + Δ f {\\displaystyle f_{c}+\\Delta f} or f c − Δ f {\\displaystyle f_{c}-\\Delta f} , depending on the binary state 0 or 1 of the modulation signal.",
"title": "Theory"
},
{
"paragraph_id": 21,
"text": "If h ≫ 1 {\\displaystyle h\\gg 1} , the modulation is called wideband FM and its bandwidth is approximately 2 f Δ {\\displaystyle 2f_{\\Delta }\\,} . While wideband FM uses more bandwidth, it can improve the signal-to-noise ratio significantly; for example, doubling the value of Δ f {\\displaystyle \\Delta {}f\\,} , while keeping f m {\\displaystyle f_{m}} constant, results in an eight-fold improvement in the signal-to-noise ratio. (Compare this with chirp spread spectrum, which uses extremely wide frequency deviations to achieve processing gains comparable to traditional, better-known spread-spectrum modes).",
"title": "Theory"
},
{
"paragraph_id": 22,
"text": "With a tone-modulated FM wave, if the modulation frequency is held constant and the modulation index is increased, the (non-negligible) bandwidth of the FM signal increases but the spacing between spectra remains the same; some spectral components decrease in strength as others increase. If the frequency deviation is held constant and the modulation frequency increased, the spacing between spectra increases.",
"title": "Theory"
},
{
"paragraph_id": 23,
"text": "Frequency modulation can be classified as narrowband if the change in the carrier frequency is about the same as the signal frequency, or as wideband if the change in the carrier frequency is much higher (modulation index > 1) than the signal frequency. For example, narrowband FM (NFM) is used for two-way radio systems such as Family Radio Service, in which the carrier is allowed to deviate only 2.5 kHz above and below the center frequency with speech signals of no more than 3.5 kHz bandwidth. Wideband FM is used for FM broadcasting, in which music and speech are transmitted with up to 75 kHz deviation from the center frequency and carry audio with up to a 20 kHz bandwidth and subcarriers up to 92 kHz.",
"title": "Theory"
},
{
"paragraph_id": 24,
"text": "For the case of a carrier modulated by a single sine wave, the resulting frequency spectrum can be calculated using Bessel functions of the first kind, as a function of the sideband number and the modulation index. The carrier and sideband amplitudes are illustrated for different modulation indices of FM signals. For particular values of the modulation index, the carrier amplitude becomes zero and all the signal power is in the sidebands.",
"title": "Theory"
},
{
"paragraph_id": 25,
"text": "Since the sidebands are on both sides of the carrier, their count is doubled, and then multiplied by the modulating frequency to find the bandwidth. For example, 3 kHz deviation modulated by a 2.2 kHz audio tone produces a modulation index of 1.36. Suppose that we limit ourselves to only those sidebands that have a relative amplitude of at least 0.01. Then, examining the chart shows this modulation index will produce three sidebands. These three sidebands, when doubled, gives us (6 × 2.2 kHz) or a 13.2 kHz required bandwidth.",
"title": "Theory"
},
{
"paragraph_id": 26,
"text": "A rule of thumb, Carson's rule states that nearly all (≈98 percent) of the power of a frequency-modulated signal lies within a bandwidth B T {\\displaystyle B_{T}\\,} of:",
"title": "Theory"
},
{
"paragraph_id": 27,
"text": "where Δ f {\\displaystyle \\Delta f\\,} , as defined above, is the peak deviation of the instantaneous frequency f ( t ) {\\displaystyle f(t)\\,} from the center carrier frequency f c {\\displaystyle f_{c}} , β {\\displaystyle \\beta } is the Modulation index which is the ratio of frequency deviation to highest frequency in the modulating signal and f m {\\displaystyle f_{m}\\,} is the highest frequency in the modulating signal. Condition for application of Carson's rule is only sinusoidal signals. For non-sinusoidal signals:",
"title": "Theory"
},
{
"paragraph_id": 28,
"text": "where W is the highest frequency in the modulating signal but non-sinusoidal in nature and D is the Deviation ratio which the ratio of frequency deviation to highest frequency of modulating non-sinusoidal signal.",
"title": "Theory"
},
{
"paragraph_id": 29,
"text": "FM provides improved signal-to-noise ratio (SNR), as compared for example with AM. Compared with an optimum AM scheme, FM typically has poorer SNR below a certain signal level called the noise threshold, but above a higher level – the full improvement or full quieting threshold – the SNR is much improved over AM. The improvement depends on modulation level and deviation. For typical voice communications channels, improvements are typically 5–15 dB. FM broadcasting using wider deviation can achieve even greater improvements. Additional techniques, such as pre-emphasis of higher audio frequencies with corresponding de-emphasis in the receiver, are generally used to improve overall SNR in FM circuits. Since FM signals have constant amplitude, FM receivers normally have limiters that remove AM noise, further improving SNR.",
"title": "Noise reduction"
},
{
"paragraph_id": 30,
"text": "FM signals can be generated using either direct or indirect frequency modulation:",
"title": "Implementation"
},
{
"paragraph_id": 31,
"text": "Many FM detector circuits exist. A common method for recovering the information signal is through a Foster–Seeley discriminator or ratio detector. A phase-locked loop can be used as an FM demodulator. Slope detection demodulates an FM signal by using a tuned circuit which has its resonant frequency slightly offset from the carrier. As the frequency rises and falls the tuned circuit provides a changing amplitude of response, converting FM to AM. AM receivers may detect some FM transmissions by this means, although it does not provide an efficient means of detection for FM broadcasts. In Software-Defined Radio implementations the demodulation may be carried out by using the Hilbert transform (implemented as a filter) to recover the instantaneous phase, and thereafter differentiating this phase (using another filter) to recover the instantaneous frequency. Alternatively, a complex mixer followed by a bandpass filter may be used to translate the signal to baseband, and then proceeding as before.",
"title": "Implementation"
},
{
"paragraph_id": 32,
"text": "When an echolocating bat approaches a target, its outgoing sounds return as echoes, which are Doppler-shifted upward in frequency. In certain species of bats, which produce constant frequency (CF) echolocation calls, the bats compensate for the Doppler shift by lowering their call frequency as they approach a target. This keeps the returning echo in the same frequency range of the normal echolocation call. This dynamic frequency modulation is called the Doppler Shift Compensation (DSC), and was discovered by Hans Schnitzler in 1968",
"title": "Applications"
},
{
"paragraph_id": 33,
"text": "FM is also used at intermediate frequencies by analog VCR systems (including VHS) to record the luminance (black and white) portions of the video signal. Commonly, the chrominance component is recorded as a conventional AM signal, using the higher-frequency FM signal as bias. FM is the only feasible method of recording the luminance (\"black-and-white\") component of video to (and retrieving video from) magnetic tape without distortion; video signals have a large range of frequency components – from a few hertz to several megahertz, too wide for equalizers to work with due to electronic noise below −60 dB. FM also keeps the tape at saturation level, acting as a form of noise reduction; a limiter can mask variations in playback output, and the FM capture effect removes print-through and pre-echo. A continuous pilot-tone, if added to the signal – as was done on V2000 and many Hi-band formats – can keep mechanical jitter under control and assist timebase correction.",
"title": "Applications"
},
{
"paragraph_id": 34,
"text": "These FM systems are unusual, in that they have a ratio of carrier to maximum modulation frequency of less than two; contrast this with FM audio broadcasting, where the ratio is around 10,000. Consider, for example, a 6-MHz carrier modulated at a 3.5-MHz rate; by Bessel analysis, the first sidebands are on 9.5 and 2.5 MHz and the second sidebands are on 13 MHz and −1 MHz. The result is a reversed-phase sideband on +1 MHz; on demodulation, this results in unwanted output at 6 – 1 = 5 MHz. The system must be designed so that this unwanted output is reduced to an acceptable level.",
"title": "Applications"
},
{
"paragraph_id": 35,
"text": "FM is also used at audio frequencies to synthesize sound. This technique, known as FM synthesis, was popularized by early digital synthesizers and became a standard feature in several generations of personal computer sound cards.",
"title": "Applications"
},
{
"paragraph_id": 36,
"text": "Edwin Howard Armstrong (1890–1954) was an American electrical engineer who invented wideband frequency modulation (FM) radio. He patented the regenerative circuit in 1914, the superheterodyne receiver in 1918 and the super-regenerative circuit in 1922. Armstrong presented his paper, \"A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation\", (which first described FM radio) before the New York section of the Institute of Radio Engineers on November 6, 1935. The paper was published in 1936.",
"title": "Applications"
},
{
"paragraph_id": 37,
"text": "As the name implies, wideband FM (WFM) requires a wider signal bandwidth than amplitude modulation by an equivalent modulating signal; this also makes the signal more robust against noise and interference. Frequency modulation is also more robust against signal-amplitude-fading phenomena. As a result, FM was chosen as the modulation standard for high frequency, high fidelity radio transmission, hence the term \"FM radio\" (although for many years the BBC called it \"VHF radio\" because commercial FM broadcasting uses part of the VHF band – the FM broadcast band). FM receivers employ a special detector for FM signals and exhibit a phenomenon known as the capture effect, in which the tuner \"captures\" the stronger of two stations on the same frequency while rejecting the other (compare this with a similar situation on an AM receiver, where both stations can be heard simultaneously). However, frequency drift or a lack of selectivity may cause one station to be overtaken by another on an adjacent channel. Frequency drift was a problem in early (or inexpensive) receivers; inadequate selectivity may affect any tuner.",
"title": "Applications"
},
{
"paragraph_id": 38,
"text": "An FM signal can also be used to carry a stereo signal; this is done with multiplexing and demultiplexing before and after the FM process. The FM modulation and demodulation process is identical in stereo and monaural processes. A high-efficiency radio-frequency switching amplifier can be used to transmit FM signals (and other constant-amplitude signals). For a given signal strength (measured at the receiver antenna), switching amplifiers use less battery power and typically cost less than a linear amplifier. This gives FM another advantage over other modulation methods requiring linear amplifiers, such as AM and QAM.",
"title": "Applications"
},
{
"paragraph_id": 39,
"text": "FM is commonly used at VHF radio frequencies for high-fidelity broadcasts of music and speech. Analog TV sound is also broadcast using FM. Narrowband FM is used for voice communications in commercial and amateur radio settings. In broadcast services, where audio fidelity is important, wideband FM is generally used. In two-way radio, narrowband FM (NBFM) is used to conserve bandwidth for land mobile, marine mobile and other radio services.",
"title": "Applications"
},
{
"paragraph_id": 40,
"text": "There are reports that on October 5, 1924, Professor Mikhail A. Bonch-Bruevich, during a scientific and technical conversation in the Nizhny Novgorod Radio Laboratory, reported about his new method of telephony, based on a change in the period of oscillations. Demonstration of frequency modulation was carried out on the laboratory model.",
"title": "Applications"
}
]
| Frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. The technology is used in telecommunications, radio broadcasting, signal processing, and computing. In analog frequency modulation, such as radio broadcasting, of an audio signal representing voice or music, the instantaneous frequency deviation, i.e. the difference between the frequency of the carrier and its center frequency, has a functional relation to the modulating signal amplitude. Digital data can be encoded and transmitted with a type of frequency modulation known as frequency-shift keying (FSK), in which the instantaneous frequency of the carrier is shifted among a set of frequencies. The frequencies may represent digits, such as '0' and '1'. FSK is widely used in computer modems, such as fax modems, telephone caller ID systems, garage door openers, and other low-frequency transmissions. Radioteletype also uses FSK. Frequency modulation is widely used for FM radio broadcasting. It is also used in telemetry, radar, seismic prospecting, and monitoring newborns for seizures via EEG, two-way radio systems, sound synthesis, magnetic tape-recording systems and some video-transmission systems. In radio transmission, an advantage of frequency modulation is that it has a larger signal-to-noise ratio and therefore rejects radio frequency interference better than an equal power amplitude modulation (AM) signal. For this reason, most music is broadcast over FM radio. However, under severe enough multipath conditions it performs much more poorly than AM, with distinct high frequency noise artifacts that are audible with lower volumes and less complex tones. With high enough volume and carrier deviation audio distortion starts to occur that otherwise wouldn't be present without multipath or with an AM signal. Frequency modulation and phase modulation are the two complementary principal methods of angle modulation; phase modulation is often used as an intermediate step to achieve frequency modulation. These methods contrast with amplitude modulation, in which the amplitude of the carrier wave varies, while the frequency and phase remain constant. | 2001-10-21T18:06:43Z | 2023-12-20T07:02:48Z | [
"Template:Analogue TV transmitter topics",
"Template:Audio broadcasting",
"Template:Citation needed",
"Template:More citations needed",
"Template:Nbsp",
"Template:Main",
"Template:See also",
"Template:Reflist",
"Template:Cite web",
"Template:Cite book",
"Template:ISBN",
"Template:Authority control",
"Template:Short description",
"Template:For",
"Template:Snd",
"Template:Commons category",
"Template:Telecommunications",
"Template:Modulation techniques",
"Template:Anchor",
"Template:Patent",
"Template:Cite journal"
]
| https://en.wikipedia.org/wiki/Frequency_modulation |
10,837 | Faith and rationality | Faith and rationality exist in varying degrees of conflict or compatibility. Rationality is based on reason or facts. Faith is belief in inspiration, revelation, or authority. The word faith sometimes refers to a belief that is held in spite of or against reason or empirical evidence, or it can refer to belief based upon a degree of evidential warrant.
Rationalists point out that many people hold irrational beliefs, for many reasons. There may be evolutionary causes for irrational beliefs — irrational beliefs may increase our ability to survive and reproduce.
One more reason for irrational beliefs can perhaps be explained by operant conditioning. For example, in one study by B. F. Skinner in 1948, pigeons were awarded grain at regular time intervals regardless of their behaviour. The result was that each of the pigeons developed their own idiosyncratic response which had become associated with the consequence of receiving grain.
Believers in the value of faith — for example those who believe salvation is possible through faith alone — frequently suggest that everyone holds beliefs arrived at by faith, not reason.
One form of belief held "by faith" may be seen existing in a faith as based on warrant. In this view some degree of evidence provides warrant for faith; it consists in other words in "explain[ing] great things by small."
Thomas Aquinas was the first to write a full treatment of the relationship, differences, and similarities between faith, which he calls "an intellectual assent", and reason.
Dei Filius was a dogmatic constitution of the First Vatican Council on the Roman Catholic faith. It was adopted unanimously on 24 April 1870. It states that "not only can faith and reason never be opposed to one another, but they are of mutual aid one to the other".
Recent popes have spoken about faith and rationality: Fides et ratio, an encyclical letter promulgated by Pope John Paul II on 14 September 1998, deals with the relationship between faith and reason. Pope Benedict XVI's Regensburg lecture delivered on 12 September 2006 was on the subject of "faith, reason and the university".
Alvin Plantinga upholds that faith may be the result of evidence testifying to the reliability of the source of truth claims, but although it may involve this, he sees faith as being the result of hearing the truth of the gospel with the internal persuasion by the Holy Spirit moving and enabling him to believe. "Christian belief is produced in the believer by the internal instigation of the Holy Spirit, endorsing the teachings of Scripture, which is itself divinely inspired by the Holy Spirit. The result of the work of the Holy Spirit is faith."
American biblical scholar Archibald Thomas Robertson stated that the Greek word pistis used for faith in the New Testament (over two hundred forty times), and rendered "assurance" in Acts 17:31 (KJV), is "an old verb to furnish, used regularly by Demosthenes for bringing forward evidence." Likewise Tom Price (Oxford Centre for Christian Apologetics) affirms that when the New Testament talks about faith positively it only uses words derived from the Greek root [pistis] which means "to be persuaded."
In contrast to the notion of faith as blind trust, held in the absence of evidence or even in the teeth of evidence, Alister McGrath quotes Oxford Anglican theologian W. H. Griffith-Thomas (1861-1924), who states faith is "not blind, but intelligent" and "commences with the conviction of the mind based on adequate evidence", which McGrath sees as "a good and reliable definition, synthesizing the core elements of the characteristic Christian understanding of faith."
The 14th-century Jewish philosopher Levi ben Gerson tried to reconcile faith and reason. He wrote: "the Law cannot prevent us from considering to be true that which our reason urges us to believe." | [
{
"paragraph_id": 0,
"text": "Faith and rationality exist in varying degrees of conflict or compatibility. Rationality is based on reason or facts. Faith is belief in inspiration, revelation, or authority. The word faith sometimes refers to a belief that is held in spite of or against reason or empirical evidence, or it can refer to belief based upon a degree of evidential warrant.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Rationalists point out that many people hold irrational beliefs, for many reasons. There may be evolutionary causes for irrational beliefs — irrational beliefs may increase our ability to survive and reproduce.",
"title": "Relationship between faith and reason"
},
{
"paragraph_id": 2,
"text": "One more reason for irrational beliefs can perhaps be explained by operant conditioning. For example, in one study by B. F. Skinner in 1948, pigeons were awarded grain at regular time intervals regardless of their behaviour. The result was that each of the pigeons developed their own idiosyncratic response which had become associated with the consequence of receiving grain.",
"title": "Relationship between faith and reason"
},
{
"paragraph_id": 3,
"text": "Believers in the value of faith — for example those who believe salvation is possible through faith alone — frequently suggest that everyone holds beliefs arrived at by faith, not reason.",
"title": "Relationship between faith and reason"
},
{
"paragraph_id": 4,
"text": "One form of belief held \"by faith\" may be seen existing in a faith as based on warrant. In this view some degree of evidence provides warrant for faith; it consists in other words in \"explain[ing] great things by small.\"",
"title": "Relationship between faith and reason"
},
{
"paragraph_id": 5,
"text": "Thomas Aquinas was the first to write a full treatment of the relationship, differences, and similarities between faith, which he calls \"an intellectual assent\", and reason.",
"title": "Christianity"
},
{
"paragraph_id": 6,
"text": "Dei Filius was a dogmatic constitution of the First Vatican Council on the Roman Catholic faith. It was adopted unanimously on 24 April 1870. It states that \"not only can faith and reason never be opposed to one another, but they are of mutual aid one to the other\".",
"title": "Christianity"
},
{
"paragraph_id": 7,
"text": "Recent popes have spoken about faith and rationality: Fides et ratio, an encyclical letter promulgated by Pope John Paul II on 14 September 1998, deals with the relationship between faith and reason. Pope Benedict XVI's Regensburg lecture delivered on 12 September 2006 was on the subject of \"faith, reason and the university\".",
"title": "Christianity"
},
{
"paragraph_id": 8,
"text": "Alvin Plantinga upholds that faith may be the result of evidence testifying to the reliability of the source of truth claims, but although it may involve this, he sees faith as being the result of hearing the truth of the gospel with the internal persuasion by the Holy Spirit moving and enabling him to believe. \"Christian belief is produced in the believer by the internal instigation of the Holy Spirit, endorsing the teachings of Scripture, which is itself divinely inspired by the Holy Spirit. The result of the work of the Holy Spirit is faith.\"",
"title": "Christianity"
},
{
"paragraph_id": 9,
"text": "American biblical scholar Archibald Thomas Robertson stated that the Greek word pistis used for faith in the New Testament (over two hundred forty times), and rendered \"assurance\" in Acts 17:31 (KJV), is \"an old verb to furnish, used regularly by Demosthenes for bringing forward evidence.\" Likewise Tom Price (Oxford Centre for Christian Apologetics) affirms that when the New Testament talks about faith positively it only uses words derived from the Greek root [pistis] which means \"to be persuaded.\"",
"title": "Christianity"
},
{
"paragraph_id": 10,
"text": "In contrast to faith meaning blind trust, in the absence of evidence, even in the teeth of evidence, Alister McGrath quotes Oxford Anglican theologian W. H. Griffith-Thomas, (1861-1924), who states faith is \"not blind, but intelligent\" and \"commences with the conviction of the mind based on adequate evidence\", which McGrath sees as \"a good and reliable definition, synthesizing the core elements of the characteristic Christian understanding of faith.\"",
"title": "Christianity"
},
{
"paragraph_id": 11,
"text": "The 14th-century Jewish philosopher Levi ben Gerson tried to reconcile faith and reason. He wrote: \"the Law cannot prevent us from considering to be true that which our reason urges us to believe.\"",
"title": "Jewish views"
}
]
| Faith and rationality exist in varying degrees of conflict or compatibility. Rationality is based on reason or facts. Faith is belief in inspiration, revelation, or authority. The word faith sometimes refers to a belief that is held in spite of or against reason or empirical evidence, or it can refer to belief based upon a degree of evidential warrant. | 2001-06-02T18:37:54Z | 2023-12-23T16:19:48Z | [
"Template:Cite journal",
"Template:Philosophy topics",
"Template:Authority control",
"Template:Short description",
"Template:Empty section",
"Template:See also",
"Template:Cite book",
"Template:Related",
"Template:Further",
"Template:Cite web",
"Template:Columns-list",
"Template:Citation",
"Template:Philosophy of religion",
"Template:Philosophy of science",
"Template:Anchor",
"Template:Reflist",
"Template:Epistemology"
]
| https://en.wikipedia.org/wiki/Faith_and_rationality |
10,839 | List of film institutes | Some notable institutions celebrating film, including both national film institutes and independent and non-profit organizations. For the purposes of this list, institutions that do not have their own article on Wikipedia are not considered notable. | [
{
"paragraph_id": 0,
"text": "Some notable institutions celebrating film, including both national film institutes and independent and non-profit organizations. For the purposes of this list, institutions that do not have their own article on Wikipedia are not considered notable.",
"title": ""
}
]
| Some notable institutions celebrating film, including both national film institutes and independent and non-profit organizations. For the purposes of this list, institutions that do not have their own article on Wikipedia are not considered notable. American Film Institute
Asia Pacific Film Institute
Asian Academy of Film & Television
Australian Film Institute
British Film Institute
Canadian Film Institute
Danish Film Institute
Doha Film Institute
Film and Television Institute of India
Finnish Film Foundation
German Film Institute
Irish Film Institute
K. R. Narayanan National Institute of Visual Science and Arts
Mowelfund Film Institute
Norwegian Film Institute
Satyajit Ray Film and Television Institute
Sundance Institute
Swedish Film Institute
University of the Philippines Film Institute | 2002-02-25T15:51:15Z | 2023-08-03T12:30:37Z | [
"Template:Short description",
"Template:Portal"
]
| https://en.wikipedia.org/wiki/List_of_film_institutes |
10,841 | Forth | Forth or FORTH may refer to: | [
{
"paragraph_id": 0,
"text": "Forth or FORTH may refer to:",
"title": ""
}
]
| Forth or FORTH may refer to: | 2022-10-14T16:47:13Z | [
"Template:HMS",
"Template:Ship",
"Template:Srt",
"Template:Disambiguation",
"Template:Wiktionary",
"Template:TOC right"
]
| https://en.wikipedia.org/wiki/Forth |
|
10,842 | F wave | In neuroscience, an F wave is one of several motor responses which may follow the direct motor response (M) evoked by electrical stimulation of peripheral motor or mixed (sensory and motor) nerves. F-waves are the second of two late voltage changes observed after stimulation is applied to the skin surface above the distal region of a nerve, the other being the H-reflex (Hoffmann's reflex), a muscle reaction evoked by electrical stimulation of innervating sensory fibers. Because F-waves traverse the entire length of the peripheral nerve between the spinal cord and the muscle, they allow assessment of motor nerve conduction between distal stimulation sites in the arm and leg and the related motoneurons (MN's) in the cervical and lumbosacral cord. F-waves thus assess the efferent (motor) pathway of the alpha motor neuron along its entire length, including its proximal segment. As such, various properties of F-wave motor nerve conduction are analyzed in nerve conduction studies (NCS), and are often used to assess polyneuropathies resulting from states of neuronal demyelination and loss of peripheral axonal integrity.
With respect to its nomenclature, the F-wave is so named as it was initially studied in the smaller muscles of the foot. The observation of F-waves in the same motor units (MU) as those present in the direct motor response (M), along with the presence of F-waves in deafferented animal and human models, indicates that F-waves require direct activation of motor axons to be elicited, and do not involve conduction along afferent sensory nerves. Thus, the F-wave is considered a wave, as opposed to a reflex.
F-waves are evoked by strong electrical stimuli (supramaximal) applied to the skin surface above the distal portion of a nerve. This impulse travels both in orthodromic fashion (towards the muscle fibers) and antidromic fashion (towards the cell body in the spinal cord) along the alpha motor neuron. As the orthodromic impulse reaches innervated muscle fibers, a strong direct motor response (M) is evoked in these muscle fibers, resulting in a primary compound muscle action potential (CMAP). As the antidromic impulse reaches the cell bodies within the anterior horn of the motor neuron pool by retrograde transmission, a select portion of these alpha motor neurons (roughly 5-10% of available motor neurons) 'backfire' or rebound. This antidromic 'backfiring' elicits an orthodromic impulse that follows back down the alpha motor neuron, towards innervated muscle fibers. Ordinarily, axonal segments of motor neurons previously depolarized by preceding antidromic impulses enter a hyperpolarized state, disallowing the travel of impulses along them. However, these same axonal segments remain excitable or relatively depolarized for a sufficient period of time, allowing for rapid antidromic backfiring, and thus the continuation of the orthodromic impulse towards innervated muscle fibers. This successive orthodromic stimulus then evokes a smaller population of muscle fibers, resulting in a smaller CMAP known as an F-wave.
Several physiological factors may influence the presence of F-waves after peripheral nerve stimulation. The probability that any given stimulus elicits an F-wave is small, and the shape and size of the F-waves that do appear vary, because a high degree of variability exists in motor unit (MU) activation from one stimulation to the next. Thus, the generation of CMAPs which elicit F-waves is subject to the variability in activation of motor units in a given pool over successive stimuli. Moreover, stimulation of peripheral nerve fibers accounts for both orthodromic impulses (along sensory fibers, towards the dorsal horn) and antidromic activity (along alpha motor neurons towards the ventral horn). Antidromic activity along collateral branches of alpha motor neurons may result in the activation of inhibitory Renshaw cells or direct inhibitory collaterals between motoneurons. Inhibition by these means may lower the excitability of adjacent motor neurons and decrease the potential for antidromic backfiring and resultant F-waves, although it has been argued that Renshaw cells preferentially inhibit smaller alpha motor neurons and therefore have limited influence on modulation of antidromic backfiring.
Because a different population of anterior horn cells is stimulated with each stimulation, F waves are characterized as ubiquitous, low amplitude, late motor responses, which can vary in amplitude, latency and configuration across a series of stimuli.
F waves can be analyzed by several properties including:
Several measurements can be done on the F responses, including:
The minimal F wave latency is typically 25-32 ms in the upper extremities and 45-56 ms in the lower extremities.
F wave persistence is the number of F waves obtained per the number of stimulations, which is normally 80-100% (or above 50%). | [
{
"paragraph_id": 0,
"text": "In neuroscience, an F wave is one of several motor responses which may follow the direct motor response (M) evoked by electrical stimulation of peripheral motor or mixed (sensory and motor) nerves. F-waves are the second of two late voltage changes observed after stimulation is applied to the skin surface above the distal region of a nerve, in addition to the H-reflex (Hoffman's Reflex) which is a muscle reaction in response to electrical stimulation of innervating sensory fibers. Traversal of F-waves along the entire length of peripheral nerves between the spinal cord and muscle, allows for assessment of motor nerve conduction between distal stimulation sites in the arm and leg, and related motoneurons (MN's) in the cervical and lumbosacral cord. F-waves are able to assess both afferent and efferent loops of the alpha motor neuron in its entirety. As such, various properties of F-wave motor nerve conduction are analyzed in nerve conduction studies (NCS), and often used to assess polyneuropathies, resulting from states of neuronal demyelination and loss of peripheral axonal integrity.",
"title": ""
},
{
"paragraph_id": 1,
"text": "With respect to its nomenclature, the F-wave is so named as it was initially studied in the smaller muscles of the foot. The observation of F-waves in the same motor units (MU) as those present in the direct motor response (M), along with the presence of F-waves in deafferented animal and human models, indicates that F-waves require direct activation of motor axons to be elicited, and do not involve conduction along afferent sensory nerves. Thus, the F-wave is considered a wave, as opposed to a reflex.",
"title": ""
},
{
"paragraph_id": 2,
"text": "F-waves are evoked by strong electrical stimuli (supramaximal) applied to the skin surface above the distal portion of a nerve. This impulse travels both in orthodromic fashion (towards the muscle fibers) and antidromic fashion (towards the cell body in the spinal cord) along the alpha motor neuron. As the orthodromic impulse reaches innervated muscle fibers, a strong direct motor response (M) is evoked in these muscle fibers, resulting in a primary compound muscle action potential (CMAP). As the antidromic impulse reaches the cell bodies within the anterior horn of the motor neuron pool by retrograde transmission, a select portion of these alpha motor neurons, (roughly 5-10% of available motor neurons), 'backfire' or rebound. This antidromic 'backfiring' elicits an orthodromic impulse that follows back down the alpha motor neuron, towards innervated muscle fibers. Conventionally, axonal segments of motor neurons previously depolarized by preceding antidromic impulses enter a hyperpolarized state, disallowing the travel of impulses along them. However, these same axonal segments remains excitable or relatively depolarized for a sufficient period of time, allowing for rapid antidromic backfiring, and thus the continuation of the orthodromic impulse towards innervated muscle fibers. This successive orthodromic stimulus then evokes a smaller population of muscle fibers, resulting in a smaller CMAP known as an F-wave.",
"title": "Physiology"
},
{
"paragraph_id": 3,
"text": "Several physiological factors may possibly influence the presence of F-waves after peripheral nerve stimulation. The shape and size of F-waves, along with the probability of their presence is small, as a high degree of variability exists in motor unit (MU) activation for any given stimulation. Thus, the generation of CMAP's which elicit F-waves is subject to the variability in activation of motor units in a given pool over successive stimuli. Moreover, stimulation of peripheral nerve fibers account for both orthodromic impulses (along sensory fibers, towards the dorsal horn), as well as antidromic activity (along alpha motor neurons towards the ventral horn). Antidromic activity along collateral branches of alpha motor neurons may result in the activation of inhibitory Renshaw cells or direct inhibitory collaterals between motorneurons. Inhibition by these means may lower excitability of adjacent motor neurons and decrease the potential for antidromic backfiring and resultant F-waves; although it has been argued Renshaw cells preferentially inhibit smaller alpha motor neurons limited influence on modulation of antidromic backfiring.",
"title": "Physiology"
},
{
"paragraph_id": 4,
"text": "Because a different population of anterior horn cells is stimulated with each stimulation, F waves are characterized as ubiquitous, low amplitude, late motor responses, which can vary in amplitude, latency and configuration across a series of stimuli.",
"title": "Physiology"
},
{
"paragraph_id": 5,
"text": "F waves can be analyzed by several properties including:",
"title": "Properties"
},
{
"paragraph_id": 6,
"text": "Several measurements can be done on the F responses, including:",
"title": "Measurements"
},
{
"paragraph_id": 7,
"text": "The minimal F wave latency is typically 25-32 ms in the upper extremities and 45-56 ms in the lower extremities.",
"title": "Measurements"
},
{
"paragraph_id": 8,
"text": "F wave persistence is the number of F waves obtained per the number of stimulations, which is normally 80-100% (or above 50%).",
"title": "Measurements"
}
]
| In neuroscience, an F wave is one of several motor responses which may follow the direct motor response (M) evoked by electrical stimulation of peripheral motor or mixed nerves. F-waves are the second of two late voltage changes observed after stimulation is applied to the skin surface above the distal region of a nerve, in addition to the H-reflex which is a muscle reaction in response to electrical stimulation of innervating sensory fibers. Traversal of F-waves along the entire length of peripheral nerves between the spinal cord and muscle, allows for assessment of motor nerve conduction between distal stimulation sites in the arm and leg, and related motoneurons (MN's) in the cervical and lumbosacral cord. F-waves are able to assess both afferent and efferent loops of the alpha motor neuron in its entirety. As such, various properties of F-wave motor nerve conduction are analyzed in nerve conduction studies (NCS), and often used to assess polyneuropathies, resulting from states of neuronal demyelination and loss of peripheral axonal integrity. With respect to its nomenclature, the F-wave is so named as it was initially studied in the smaller muscles of the foot. The observation of F-waves in the same motor units (MU) as those present in the direct motor response (M), along with the presence of F-waves in deafferented animal and human models, indicates that F-waves require direct activation of motor axons to be elicited, and do not involve conduction along afferent sensory nerves. Thus, the F-wave is considered a wave, as opposed to a reflex. | 2002-02-25T15:51:15Z | 2023-12-04T02:50:08Z | [
"Template:About",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal"
]
| https://en.wikipedia.org/wiki/F_wave |
10,843 | Fruit | In botany, a fruit is the seed-bearing structure in flowering plants that is formed from the ovary after flowering (see Fruit anatomy).
Fruits are the means by which flowering plants (also known as angiosperms) disseminate their seeds. Edible fruits in particular have long propagated using the movements of humans and other animals in a symbiotic relationship that is the means for seed dispersal for the one group and nutrition for the other; in fact, humans and many other animals have become dependent on fruits as a source of food. Consequently, fruits account for a substantial fraction of the world's agricultural output, and some (such as the apple and the pomegranate) have acquired extensive cultural and symbolic meanings.
In common language usage, fruit normally means the seed-associated fleshy structures (or produce) of plants that typically are sweet or sour and edible in the raw state, such as apples, bananas, grapes, lemons, oranges, and strawberries. In botanical usage, the term fruit also includes many structures that are not commonly called 'fruits' in everyday language, such as nuts, bean pods, corn kernels, tomatoes, and wheat grains.
Many common language terms used for fruit and seeds differ from botanical classifications. For example, in botany, a fruit is a ripened ovary or carpel that contains seeds, e.g., an orange, pomegranate, tomato or a pumpkin. A nut is a type of fruit (and not a seed), and a seed is a ripened ovule.
In culinary language, a fruit is the sweet- or not sweet- (even sour-) tasting produce of a specific plant (e.g., a peach, pear or lemon); nuts are hard, oily, non-sweet plant produce in shells (hazelnut, acorn). Vegetables, so called, typically are savory or non-sweet produce (zucchini, lettuce, broccoli, and tomato); but some may be sweet-tasting (sweet potato).
Examples of botanically classified fruit that are typically called vegetables include: cucumber, pumpkin, and squash (all are cucurbits); beans, peanuts, and peas (all legumes); corn, eggplant, bell pepper (or sweet pepper), and tomato. The spices chili pepper and allspice are fruits, botanically speaking. In contrast, rhubarb is often called a fruit when used in making pies, but the edible produce of rhubarb is actually the leaf stalk or petiole of the plant. Edible gymnosperm seeds are often given fruit names, e.g., ginkgo nuts and pine nuts.
Botanically, a cereal grain, such as corn, rice, or wheat is a kind of fruit (termed a caryopsis). However, the fruit wall is thin and fused to the seed coat, so almost all the edible grain-fruit is actually a seed.
The outer layer, often edible, of most fruits is called the pericarp. Typically formed from the ovary, it surrounds the seeds; in some species, however, other structural tissues contribute to or form the edible portion. The pericarp may be described in three layers from outer to inner, i.e., the epicarp, mesocarp and endocarp.
Fruit that bears a prominent pointed terminal projection is said to be beaked.
A fruit results from the fertilizing and maturing of one or more flowers. The gynoecium, which contains the stigma-style-ovary system, is centered in the flower-head, and it forms all or part of the fruit. Inside the ovary(ies) are one or more ovules. Here begins a complex sequence called double fertilization: a female gametophyte produces an egg cell for the purpose of fertilization. (A female gametophyte is called a megagametophyte, and also called the embryo sac.) After double fertilization, the ovules will become seeds.
Ovules are fertilized in a process that starts with pollination, which is the movement of pollen from the stamens to the stigma-style-ovary system within the flower-head. After pollination, a pollen tube grows from the (deposited) pollen through the stigma down the style into the ovary to the ovule. Two sperm are transferred from the pollen to a megagametophyte. Within the megagametophyte, one sperm unites with the egg, forming a zygote, while the second sperm enters the central cell forming the endosperm mother cell, which completes the double fertilization process. Later, the zygote will give rise to the embryo of the seed, and the endosperm mother cell will give rise to endosperm, a nutritive tissue used by the embryo.
As the ovules develop into seeds, the ovary begins to ripen and the ovary wall, the pericarp, may become fleshy (as in berries or drupes), or it may form a hard outer covering (as in nuts). In some multi-seeded fruits, the extent to which a fleshy structure develops is proportional to the number of fertilized ovules. The pericarp typically is differentiated into two or three distinct layers; these are called the exocarp (outer layer, also called epicarp), mesocarp (middle layer), and endocarp (inner layer).
In some fruits, the sepals, petals, stamens and/or the style of the flower fall away as the fleshy fruit ripens. However, for simple fruits derived from an inferior ovary – i.e., one that lies below the attachment of other floral parts – there are parts (including petals, sepals, and stamens) that fuse with the ovary and ripen with it. For such a case, when floral parts other than the ovary form a significant part of the fruit that develops, it is called an accessory fruit. Examples of accessory fruits include apple, rose hip, strawberry, and pineapple.
Because several parts of the flower besides the ovary may contribute to the structure of a fruit, it is important to study flower structure to understand how a particular fruit forms. There are three general modes of fruit development:
Consistent with the three modes of fruit development, plant scientists have classified fruits into three main groups: simple fruits, aggregate fruits, and multiple (or composite) fruits. The groupings reflect how the ovary and other flower organs are arranged and how the fruits develop, but they are not evolutionarily relevant as diverse plant taxa may be in the same group.
While the section of a fungus that produces spores is called a fruiting body, fungi are members of the fungi kingdom and not of the plant kingdom.
Simple fruits are the result of the ripening-to-fruit of a simple or compound ovary in a single flower with a single pistil. In contrast, a single flower with numerous pistils typically produces an aggregate fruit; and the merging of several flowers, or a 'multiple' of flowers, results in a 'multiple' fruit. A simple fruit is further classified as either dry or fleshy.
To distribute their seeds, dry fruits may split open and discharge their seeds to the winds, which is called dehiscence. Alternatively, the distribution process may rely upon the decay and degradation of the fruit to expose the seeds, or upon the eating of the fruit and excretion of the seeds by frugivores; both of these routes are called indehiscence. Fleshy fruits do not split open, so they too are indehiscent, and they may likewise rely on frugivores for distribution of their seeds. Typically, the entire outer layer of the ovary wall ripens into a potentially edible pericarp.
Types of dry simple fruits (with examples) include:
Fruits in which part or all of the pericarp (fruit wall) is fleshy at maturity are termed fleshy simple fruits.
Types of fleshy simple fruits (with examples) include:
Berries are a type of simple fleshy fruit that issue from a single ovary. (The ovary itself may be compound, with several carpels.) The botanical term true berry includes grapes, currants, cucumbers, eggplants (aubergines), tomatoes, chili peppers, and bananas, but excludes certain fruits that are called "-berry" by culinary custom or by common usage of the term – such as strawberries and raspberries. Berries may be formed from one or more carpels (i.e., from the simple or compound ovary) from the same, single flower. Seeds typically are embedded in the fleshy interior of the ovary.
Examples include:
The strawberry, regardless of its appearance, is classified as a dry, not a fleshy fruit. Botanically, it is not a berry; it is an aggregate-accessory fruit, the latter term meaning the fleshy part is derived not from the plant's ovaries but from the receptacle that holds the ovaries. Numerous dry achenes are attached to the outside of the fruit-flesh; they appear to be seeds but each is actually an ovary of a flower, with a seed inside.
Schizocarps are dry fruits, though some appear to be fleshy. They originate from syncarpous ovaries but do not actually dehisce; rather, they split into segments with one or more seeds. They include a number of different forms from a wide range of families, including carrot, parsnip, parsley, and cumin.
An aggregate fruit is also called an aggregation, or etaerio; it develops from a single flower that presents numerous simple pistils. Each pistil contains one carpel and develops into a small fruitlet; together, the fruitlets make up the aggregate. The ultimate (fruiting) development of the aggregation of pistils is called an aggregate fruit, etaerio fruit, or simply an etaerio.
Different types of aggregate fruits can produce different etaerios, such as achenes, drupelets, follicles, and berries.
Some other broadly recognized species and their etaerios (or aggregations) are:
The pistils of the raspberry are called drupelets because each pistil is like a small drupe attached to the receptacle. In some bramble fruits, such as blackberry, the receptacle, an accessory part, elongates and then develops as part of the fruit, making the blackberry an aggregate-accessory fruit. The strawberry is also an aggregate-accessory fruit, of which the seeds are contained in the achenes. Notably in all these examples, the fruit develops from a single flower, with numerous pistils.
A multiple fruit is formed from a cluster of flowers (a 'multiple' of flowers), also called an inflorescence. Each (smallish) flower produces a single fruitlet, and as they all develop, the fruitlets merge into one mass of fruit. Examples include pineapple, fig, mulberry, Osage orange, and breadfruit. An inflorescence (a cluster) of white flowers, called a head, is produced first. After fertilization, each flower in the cluster develops into a drupe; as the drupes expand, they develop as a connate organ, merging into a multiple fleshy fruit called a syncarp.
Progressive stages of multiple flowering and fruit development can be observed on a single branch of the Indian mulberry, or noni. During the sequence of development, a progression of second, third, and more inflorescences are initiated in turn at the head of the branch or stem.
Fruits may incorporate tissues derived from other floral parts besides the ovary, including the receptacle, hypanthium, petals, or sepals. Accessory fruits occur in all three classes of fruit development – simple, aggregate, and multiple. Accessory fruits are frequently designated by the hyphenated term showing both characters. For example, a pineapple is a multiple-accessory fruit, a blackberry is an aggregate-accessory fruit, and an apple is a simple-accessory fruit.
Seedlessness is an important feature of some fruits of commerce. Commercial cultivars of bananas and pineapples are examples of seedless fruits. Some cultivars of citrus fruits (especially grapefruit, mandarin oranges, navel oranges), satsumas, table grapes, and of watermelons are valued for their seedlessness. In some species, seedlessness is the result of parthenocarpy, where fruits set without fertilization. Parthenocarpic fruit-set may (or may not) require pollination, but most seedless citrus fruits require a stimulus from pollination to produce fruit. Seedless bananas and grapes are triploids, and seedlessness results from the abortion of the embryonic plant that is produced by fertilization, a phenomenon known as stenospermocarpy, which requires normal pollination and fertilization.
Variations in fruit structures largely depend on the modes of dispersal applied to their seeds. Dispersal is achieved by wind or water, by explosive dehiscence, and by interactions with animals.
Some fruits present their outer skins or shells coated with spikes or hooked burrs; these evolved either to deter would-be foragers from feeding on them or to serve to attach themselves to the hair, feathers, legs, or clothing of animals, thereby using them as dispersal agents. These plants are termed zoochorous; common examples include cocklebur, unicorn plant, and beggarticks (or Spanish needle).
Through coevolution, the fleshy produce of fruits typically appeals to hungry animals, such that the seeds contained within are taken in, carried away, and later deposited (i.e., defecated) at a distance from the parent plant. Likewise, the nutritious, oily kernels of nuts typically motivate birds and squirrels to hoard them, burying them in soil to retrieve later during winter scarcity; thereby, uneaten seeds are sown effectively under natural conditions to germinate and grow a new plant some distance away from the parent.
Other fruits have evolved flattened and elongated wings or helicopter-like blades, e.g., elm, maple, and tuliptree. This mechanism increases dispersal distance away from the parent via wind. Other wind-dispersed fruit have tiny "parachutes", e.g., dandelion, milkweed, salsify.
Coconut fruits can float thousands of miles in the ocean, thereby spreading their seeds. Other fruits that can disperse via water are nipa palm and screw pine.
Some fruits have evolved propulsive mechanisms that fling seeds substantial distances – perhaps up to 100 m (330 ft) in the case of the sandbox tree – via explosive dehiscence or other such mechanisms (see impatiens and squirting cucumber).
A cornucopia of fruits – fleshy (simple) fruits from apples to berries to watermelon; dry (simple) fruits including beans and rice and coconuts; aggregate fruits including strawberries, raspberries, blackberries, pawpaw; and multiple fruits such as pineapple, fig, mulberries – are commercially valuable as human food. They are eaten both fresh and as jams, marmalade and other fruit preserves. They are used extensively in manufactured and processed foods (cakes, cookies, baked goods, flavorings, ice cream, yogurt, canned vegetables, frozen vegetables and meals) and beverages such as fruit juices and alcoholic beverages (brandy, fruit beer, wine). Spices like vanilla, black pepper, paprika, and allspice are derived from berries. Olive fruit is pressed for olive oil and similar processing is applied to other oil-bearing fruits and vegetables. Some fruits are available all year round, while others (such as blackberries and apricots in the UK) are subject to seasonal availability.
Fruits are also used for socializing and gift-giving in the form of fruit baskets and fruit bouquets.
Typically, many botanical fruits that are "vegetables" in culinary parlance (including tomato, green beans, leaf greens, bell pepper, cucumber, eggplant, okra, pumpkin, squash, and zucchini) are bought and sold daily in fresh produce markets and greengroceries and carried back to kitchens, at home or in restaurants, for the preparation of meals.
All fruits benefit from proper post-harvest care, and in many fruits, the plant hormone ethylene causes ripening. Therefore, maintaining most fruits in an efficient cold chain is optimal for post-harvest storage, with the aim of extending and ensuring shelf life.
Various culinary fruits provide significant amounts of fiber and water, and many are generally high in vitamin C. An overview of numerous studies showed that fruits (e.g., whole apples or whole oranges) are satisfying (filling) by simply eating and chewing them.
The dietary fiber consumed in eating fruit promotes satiety, and may help to control body weight and aid reduction of blood cholesterol, a risk factor for cardiovascular diseases. Fruit consumption is under preliminary research for the potential to improve nutrition and affect chronic diseases. Regular consumption of fruit is generally associated with reduced risks of several diseases and functional declines associated with aging.
For food safety, the CDC recommends proper fruit handling and preparation to reduce the risk of food contamination and foodborne illness. Fresh fruits and vegetables should be carefully selected; at the store, they should not be damaged or bruised; and precut pieces should be refrigerated or surrounded by ice.
All fruits and vegetables should be rinsed before eating. This recommendation also applies to produce with rinds or skins that are not eaten. It should be done just before preparing or eating to avoid premature spoilage.
Fruits and vegetables should be kept separate from raw foods like meat, poultry, and seafood, as well as from utensils that have come in contact with raw foods. Fruits and vegetables that are not going to be cooked should be thrown away if they have touched raw meat, poultry, seafood, or eggs.
All cut, peeled, or cooked fruits and vegetables should be refrigerated within two hours. After a certain time, harmful bacteria may grow on them and increase the risk of foodborne illness.
Fruit allergies make up about 10 percent of all food-related allergies.
Because fruits have been such a major part of the human diet, various cultures have developed many different uses for fruits they do not depend on for food. For example: | [
]
| In botany, a fruit is the seed-bearing structure in flowering plants that is formed from the ovary after flowering. Fruits are the means by which flowering plants disseminate their seeds. Edible fruits in particular have long propagated using the movements of humans and other animals in a symbiotic relationship that is the means for seed dispersal for the one group and nutrition for the other; in fact, humans and many other animals have become dependent on fruits as a source of food. Consequently, fruits account for a substantial fraction of the world's agricultural output, and some have acquired extensive cultural and symbolic meanings. In common language usage, fruit normally means the seed-associated fleshy structures of plants that typically are sweet or sour and edible in the raw state, such as apples, bananas, grapes, lemons, oranges, and strawberries. In botanical usage, the term fruit also includes many structures that are not commonly called 'fruits' in everyday language, such as nuts, bean pods, corn kernels, tomatoes, and wheat grains. | 2001-08-21T03:32:08Z | 2023-12-16T01:25:34Z | [
"Template:Short description",
"Template:Webarchive",
"Template:Dead link",
"Template:Fruits",
"Template:Veganism and vegetarianism",
"Template:Plant-based diets",
"Template:Main",
"Template:Cite web",
"Template:Cite journal",
"Template:Cite EB1911",
"Template:Authority control",
"Template:Also",
"Template:Em",
"Template:Cvt",
"Template:Portal",
"Template:Cite book",
"Template:ISBN",
"Template:Sister bar",
"Template:Other uses",
"Template:Pp-semiprotected",
"Template:Pp-move-indef",
"Template:Citation needed",
"Template:Reflist",
"Template:Botany",
"Template:Agriculture country lists"
]
| https://en.wikipedia.org/wiki/Fruit |
10,844 | French materialism | French materialism is the name given to a handful of French 18th-century philosophers during the Age of Enlightenment, many of them clustered around the salon of Baron d'Holbach. Although there are important differences between them, all of them were materialists who believed that the world was made up of a single substance, matter, the motions and properties of which could be used to explain all phenomena.
Prominent French materialists of the 18th century include: | [
]
| French materialism is the name given to a handful of French 18th-century philosophers during the Age of Enlightenment, many of them clustered around the salon of Baron d'Holbach. Although there are important differences between them, all of them were materialists who believed that the world was made up of a single substance, matter, the motions and properties of which could be used to explain all phenomena. Prominent French materialists of the 18th century include: Julien Offray de La Mettrie
Denis Diderot
Baron d'Holbach
Claude Adrien Helvétius
Pierre Jean Georges Cabanis
Jacques-André Naigeon | 2023-07-07T03:22:56Z | []
| https://en.wikipedia.org/wiki/French_materialism |
|
10,845 | February | February is the second month of the year in the Julian and Gregorian calendars. The month has 28 days in common years or 29 in leap years, with the 29th day being called the leap day. It is the first of five months not to have 31 days (the other four being April, June, September, and November) and the only one to have fewer than 30 days. February is the third and last month of meteorological winter in the Northern Hemisphere. In the Southern Hemisphere, February is the third and last month of meteorological summer (being the seasonal equivalent of what is August in the Northern Hemisphere).
"February" is pronounced in several different ways. The beginning of the word is commonly pronounced either as /ˈfɛbju-/ FEB-yoo- or /ˈfɛbru-/ FEB-roo-; many people drop the first "r", replacing it with /j/, as if it were spelled "Febuary". This comes about by analogy with "January" (/ˈdʒæn.ju-/ ), as well as by a dissimilation effect whereby having two "r"s close to each other causes one to change. The ending of the word is pronounced /-ɛri/ -err-ee in the US and /-əri/ -ər-ee in the UK.
The Roman month Februarius was named after the Latin term februum, which means "purification", via the purification ritual Februa held on February 15 (full moon) in the old lunar Roman calendar. January and February were the last two months to be added to the Roman calendar, since the Romans originally considered winter a monthless period. They were added by Numa Pompilius about 713 BC. February remained the last month of the calendar year until the time of the decemvirs (c. 450 BC), when it became the second month. At certain times February was truncated to 23 or 24 days, and a 27-day intercalary month, Intercalaris, was occasionally inserted immediately after February to realign the year with the seasons.
February observances in Ancient Rome included Amburbium (precise date unknown), Sementivae (February 2), Februa (February 13–15), Lupercalia (February 13–15), Parentalia (February 13–22), Quirinalia (February 17), Feralia (February 21), Caristia (February 22), Terminalia (February 23), Regifugium (February 24), and Agonium Martiale (February 27). These days do not correspond to the modern Gregorian calendar.
Under the reforms that instituted the Julian calendar, Intercalaris was abolished, leap years occurred regularly every fourth year, and in leap years February gained a 29th day. Thereafter, it remained the second month of the calendar year, meaning the order that months are displayed (January, February, March, ..., December) within a year-at-a-glance calendar. Even during the Middle Ages, when the numbered Anno Domini year began on March 25 or December 25, the second month was February whenever all twelve months were displayed in order. The Gregorian calendar reforms made slight changes to the system for determining which years were leap years, but also contained a 29-day February.
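As an illustration of the leap-year rules just described, the two calendars' tests can be written as a short Python sketch (the function names below are illustrative, not drawn from the article): the Julian calendar adds the 29th day of February regularly every fourth year, while the Gregorian reform additionally skips century years that are not divisible by 400.

def is_leap_julian(year: int) -> bool:
    # Julian rule: a leap day every fourth year, without exception.
    return year % 4 == 0

def is_leap_gregorian(year: int) -> bool:
    # Gregorian rule: century years are leap years only if divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def days_in_february(year: int, gregorian: bool = True) -> int:
    leap = is_leap_gregorian(year) if gregorian else is_leap_julian(year)
    return 29 if leap else 28

# 1900 shows the difference between the two systems: 29 days under the Julian rule,
# 28 under the Gregorian rule; 2000 is a leap year under both.
assert days_in_february(1900, gregorian=False) == 29
assert days_in_february(1900, gregorian=True) == 28
assert days_in_february(2000) == 29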
Historical names for February include the Old English terms Solmonath (mud month) and Kale-monath (named for cabbage) as well as Charlemagne's designation Hornung. In Finnish, the month is called helmikuu, meaning "month of the pearl"; when snow melts on tree branches, it forms droplets, and as these freeze again, they are like pearls of ice. In Polish and Ukrainian, respectively, the month is called luty or лютий (lyutiy), meaning the month of ice or hard frost. In Macedonian the month is sechko (сечко), meaning month of cutting (wood). In Czech, it is called únor, meaning month of submerging (of river ice).
In Slovene, February is traditionally called svečan, related to icicles or Candlemas. This name originates from sičan, written as svičan in the New Carniolan Almanac from 1775 and changed to its final form by Franc Metelko in his New Almanac from 1824. The name was also spelled sečan, meaning "the month of cutting down of trees".
In 1848, a proposal was put forward in Kmetijske in rokodelske novice by the Slovene Society of Ljubljana to call this month talnik (related to ice melting), but it did not stick. The idea was proposed by a priest, Blaž Potočnik. Another name of February in Slovene was vesnar, after the mythological character Vesna.
Having only 28 days in common years, February is the only month of the year that can pass without a single full moon. Using Coordinated Universal Time as the basis for determining the date and time of a full moon, this last happened in 2018 and will next happen in 2037. The same is true regarding a new moon: again using Coordinated Universal Time as the basis, this last happened in 2014 and will next happen in 2033.
February is also the only month of the calendar that, at intervals alternating between one of six years and two of eleven years, has exactly four full 7-day weeks. In countries that start their week on a Monday, it occurs as part of a common year starting on Friday, in which February 1st is a Monday and the 28th is a Sunday; the most recent occurrence was 2021, and the next one will be 2027. In countries that start their week on a Sunday, it occurs in a common year starting on Thursday; the most recent occurrence was 2015 and the next occurrence will be 2026. The pattern is broken by a skipped leap year, but no leap year has been skipped since 1900 and no others will be skipped until 2100.
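The four-full-weeks pattern described above can be checked directly: in a common year, February spans exactly four Monday-to-Sunday weeks precisely when February 1 falls on a Monday (or on a Sunday, for countries whose week starts on Sunday). A minimal Python sketch using only the standard library (the function name is illustrative) is:

import calendar
from datetime import date

def february_has_four_full_weeks(year: int, week_starts_monday: bool = True) -> bool:
    # A 29-day February can never consist of exactly four 7-day weeks.
    if calendar.isleap(year):
        return False
    # date.weekday(): Monday == 0 ... Sunday == 6
    first = date(year, 2, 1).weekday()
    return first == (0 if week_starts_monday else 6)

print([y for y in range(2010, 2040) if february_has_four_full_weeks(y)])
# Expected: [2010, 2021, 2027, 2038], matching the alternating 6- and 11-year spacing noted above.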
February meteor showers include the Alpha Centaurids (appearing in early February), the March Virginids (lasting from February 14 to April 25, peaking around March 20), the Delta Cancrids (appearing December 14 to February 14, peaking on January 17), the Omicron Centaurids (late January through February, peaking in mid-February), Theta Centaurids (January 23 – March 12, only visible in the southern hemisphere), Eta Virginids (February 24 and March 27, peaking around March 18), and Pi Virginids (February 13 and April 8, peaking between March 3 and March 9).
Its birth flowers are the violet (Viola), the common primrose (Primula vulgaris), and the iris.
Its birthstone is the amethyst. It symbolizes piety, humility, spiritual wisdom, and sincerity. The zodiac signs are Aquarius (until February 18) and Pisces (February 19 onward).
This list does not necessarily imply either official status or general observance.
(All Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.)
First Saturday
First Sunday
First Week of February (first Monday, ending on Sunday)
First Monday
First Friday
Second Saturday
Second Sunday
Second Monday
Second Tuesday
Week of February 22
Third Monday
Third Thursday
Third Friday
Last Friday
Last Saturday
Last day of February | [
]
| February is the second month of the year in the Julian and Gregorian calendars. The month has 28 days in common years or 29 in leap years, with the 29th day being called the leap day. It is the first of five months not to have 31 days and the only one to have fewer than 30 days.
February is the third and last month of meteorological winter in the Northern Hemisphere. In the Southern Hemisphere, February is the third and last month of meteorological summer. | 2001-06-11T19:08:57Z | 2023-11-15T09:29:06Z | [
"Template:About",
"Template:Pp-move",
"Template:Respell",
"Template:Cite web",
"Template:Lang",
"Template:Transliteration",
"Template:Reflist",
"Template:Citation",
"Template:Cite book",
"Template:Wikiquote",
"Template:Authority control",
"Template:IPAc-en",
"Template:Cite LPD",
"Template:Cite journal",
"Template:Wiktionary",
"Template:Cite EB1911",
"Template:Short description",
"Template:Redirect",
"Template:Calendar",
"Template:Months"
]
| https://en.wikipedia.org/wiki/February |
10,846 | February 1 | February 1 is the 32nd day of the year in the Gregorian calendar; 333 days remain until the end of the year (334 in leap years). | [
]
| February 1 is the 32nd day of the year in the Gregorian calendar; 333 days remain until the end of the year. | 2001-06-30T13:55:09Z | 2023-12-18T05:23:42Z | [
"Template:Pp-pc1",
"Template:Cite web",
"Template:Commons",
"Template:Months",
"Template:Cite journal",
"Template:Cite instagram",
"Template:Calendar",
"Template:This date in recent years",
"Template:Day",
"Template:Use mdy dates",
"Template:Cite conference",
"Template:Cite news",
"Template:NYT On this day",
"Template:Cite EB1911",
"Template:Pp-move-indef",
"Template:Reflist",
"Template:Cite book",
"Template:Cite magazine",
"Template:Cite ODNB"
]
| https://en.wikipedia.org/wiki/February_1 |
10,847 | First Lady of the United States | First Lady of the United States (FLOTUS) is the title held by the hostess of the White House, usually the wife of the president of the United States, concurrent with the president's term in office. Although the first lady's role has never been codified or officially defined, she figures prominently in the political and social life of the United States. Since the early 20th century, the first lady has been assisted by official staff, known as the Office of the First Lady and headquartered in the East Wing of the White House.
Jill Biden has served as the first lady of the United States since 2021, as the wife of the 46th president, Joe Biden.
While the title was not in general use until much later, Martha Washington, the wife of George Washington, the first U.S. president (1789–1797), is considered to be the inaugural first lady of the United States. During her lifetime, she was often referred to as "Lady Washington".
Since the 1900s, the role of first lady has changed considerably. It has come to include involvement in political campaigns, management of the White House, championship of social causes, and representation of the president at official and ceremonial occasions.
Additionally, over the years individual first ladies have held influence in a range of sectors, from fashion to public opinion on policy, as well as advocacy for female empowerment. Historically, when a president has been unmarried or a widower, he has usually asked a relative to act as White House hostess.
The use of the title First Lady to describe the spouse or hostess of an executive began in the United States. In the early days of the republic, there was not a generally accepted title for the wife of the president. Many early first ladies expressed their own preference for how they were addressed, including the use of such titles as "Lady", "Mrs. President" and "Mrs. Presidentress"; Martha Washington was often referred to as "Lady Washington". One of the earliest uses of the term "First Lady" was applied to her in an 1838 newspaper article that appeared in the St. Johnsbury Caledonian, the author, "Mrs. Sigourney", discussing how Martha Washington had not changed, even after her husband George became president. She wrote that "The first lady of the nation still preserved the habits of early life. Indulging in no indolence, she left the pillow at dawn, and after breakfast, retired to her chamber for an hour for the study of the scriptures and devotion."
According to a legend, Dolley Madison was referred to as first lady in 1849 at her funeral in a eulogy delivered by President Zachary Taylor; however, no written record of this eulogy exists, nor did any of the newspapers of her day refer to her by that title. Sometime after 1849, the title began being used in Washington, D.C., social circles. The first person to have the title applied to her while she was actually holding the office was Harriet Lane, the niece of James Buchanan; Leslie's Illustrated Newspaper used the phrase to describe her in an 1860 article about her duties as White House hostess. Another of the earliest known written examples comes from November 3, 1863, diary entry of William Howard Russell, in which he referred to gossip about "the First Lady in the Land", referring to Mary Todd Lincoln. The title first gained nationwide recognition in 1877, when newspaper journalist Mary C. Ames referred to Lucy Webb Hayes as "the First Lady of the Land" while reporting on the inauguration of Rutherford B. Hayes. The frequent reporting on Lucy Hayes' activities helped spread use of the title outside Washington. A popular 1911 comedic play about Dolley Madison by playwright Charles Nirdlinger, titled The First Lady in the Land, popularized the title further. By the 1930s, it was in wide use. Use of the title later spread from the United States to other nations.
When Edith Wilson took control of her husband's schedule in 1919 after he had a debilitating stroke, one Republican senator labeled her "the Presidentress who had fulfilled the dream of the suffragettes by changing her title from First Lady to Acting First Man".
According to the Nexis database, the abbreviation FLOTUS (pronounced /ˈfləʊtɪs/) was first used in 1983 by Donnie Radcliffe, writing in The Washington Post.
Several women (at least thirteen) who were not presidents' wives have served as first lady, as when the president was a bachelor or widower, or when the wife of the president was unable to fulfill the duties of the first lady herself. In these cases, the position has been filled by a female relative of the president, such as Jefferson's daughter Martha Jefferson Randolph, Jackson's daughter-in-law Sarah Yorke Jackson and his wife's niece Emily Donelson, Taylor's daughter Mary Elizabeth Bliss, Benjamin Harrison's daughter Mary Harrison McKee, Buchanan's niece Harriet Lane, and Cleveland's sister Rose Cleveland.
Each of the 45 presidents of the United States has been male, and all have had either their wives or a female hostess assume the role of first lady. Thus, a male equivalent for the title of first lady has never been needed. However, in 2016, when Hillary Clinton became the first woman to win a major party's presidential nomination, questions were raised as to what her husband Bill would be titled if she were to win the presidency. During the campaign, the title of First Gentleman of the United States was most frequently suggested for Bill Clinton, although as a former president himself, he could also be called "Mr. President". In addition, state governors' male spouses are typically called the First Gentleman of their respective state (for example, Michael Haley was the first gentleman of South Carolina while his wife, Nikki, served as governor). Ultimately, Hillary Clinton lost the election, rendering the question moot.
In 2021, Kamala Harris became the first woman to hold a nationally elected office when she took office as vice president, making her husband Doug Emhoff the first male spouse of a nationally elected officeholder. Emhoff assumed the title of Second Gentleman of the United States ("gentleman" replacing "lady" in the title), making it likely that any future male spouse of a president will be given the title of first gentleman.
The position of the first lady is not an elected one and carries only ceremonial duties. Nonetheless, first ladies have held a highly visible position in American society. The role of the first lady has evolved over the centuries. She is, first and foremost, the hostess of the White House. She organizes and attends official ceremonies and functions of state either along with, or in place of, the president. Lisa Burns identifies four successive main themes of the first ladyship: as public woman (1900–1929); as political celebrity (1932–1961); as political activist (1964–1977); and as political interloper (1980–2001).
Martha Washington created the role and hosted many affairs of state at the national capital (New York and Philadelphia). This socializing became known as the Republican Court and provided elite women with opportunities to play backstage political roles. Both Martha Washington and Abigail Adams were treated as if they were "ladies" of the British royal court.
Dolley Madison popularized the first ladyship by engaging in efforts to assist orphans and women, by dressing in elegant fashions and attracting newspaper coverage, and by risking her life to save iconic treasures during the War of 1812. Madison set the standard for the ladyship and her actions were the model for nearly every first lady until Eleanor Roosevelt in the 1930s. Roosevelt traveled widely and spoke to many groups, often voicing personal opinions to the left of the president's. She authored a weekly newspaper column and hosted a radio show. Jacqueline Kennedy led an effort to redecorate and restore the White House.
Many first ladies became significant fashion trendsetters. Some have exercised a degree of political influence by virtue of being an important adviser to the president.
Over the course of the 20th century, it became increasingly common for first ladies to select specific causes to promote, usually ones that are not politically divisive. It is common for the first lady to hire a staff to support these activities. Lady Bird Johnson pioneered environmental protection and beautification. Pat Nixon encouraged volunteerism and traveled extensively abroad; Betty Ford supported women's rights; Rosalynn Carter aided those with mental disabilities; Nancy Reagan founded the Just Say No drug awareness campaign; Barbara Bush promoted literacy; Hillary Clinton sought to reform the healthcare system in the U.S.; Laura Bush supported women's rights groups, and encouraged childhood literacy. Michelle Obama became identified with supporting military families and tackling childhood obesity; and Melania Trump used her position to help children, including prevention of cyberbullying and support for those whose lives are affected by drugs.
Since 1964, the incumbent first lady and all living former first ladies have been honorary members of the board of trustees of the National Cultural Center, the John F. Kennedy Center for the Performing Arts.
Near the end of her husband's presidency, Hillary Clinton became the first first lady to seek political office when she ran for the United States Senate. During the campaign, her daughter Chelsea took over much of the first lady's role. Victorious, Clinton served as the junior senator from New York from 2001 to 2009, when she resigned to become President Obama's secretary of state. Later, she was the Democratic Party nominee for president in the 2016 election, but lost to Donald Trump.
The Office of the First Lady of the United States supports the first lady in carrying out her duties as hostess of the White House, and is also in charge of all social and ceremonial events of the White House. The first lady has her own staff that includes a chief of staff, press secretary, White House Social Secretary, and Chief Floral Designer. The Office of the First Lady is an entity of the White House Office, a branch of the Executive Office of the President. When First Lady Hillary Clinton decided to run for the United States Senate from New York, she set aside her duties as first lady and moved to Chappaqua, New York, to establish state residency. She resumed her duties as first lady after winning her senatorial campaign, and retained the duties of both first lady and U.S. senator for the seventeen-day overlap before Bill Clinton's term came to an end.
Established in 1912, the First Ladies Collection has been one of the most popular attractions at the Smithsonian Institution. The original exhibition opened in 1914 and was one of the first at the Smithsonian to prominently feature women. Originally focused largely on fashion, the exhibition now delves deeper into the contributions of first ladies to the presidency and American society. In 2008, "First Ladies at the Smithsonian" opened at the National Museum of American History as part of its reopening year celebration. That exhibition served as a bridge to the museum's expanded exhibition on first ladies' history that opened on November 19, 2011. "The First Ladies" explores the unofficial but important position of first lady and the ways that different women have shaped the role to make their own contributions to the presidential administrations and the nation. The exhibition features 26 dresses and more than 160 other objects, ranging from those of Martha Washington to Michelle Obama, and includes White House china, personal possessions and other objects from the Smithsonian's unique collection of first ladies' materials.
Some first ladies have garnered attention for their dress and style. Jacqueline Kennedy Onassis, for instance, became a global fashion icon: her style was copied by commercial manufacturers and imitated by many young women, and she was named to the International Best Dressed List Hall of Fame in 1965. Mamie Eisenhower was named one of the twelve best-dressed women in the country by the New York Dress Institute every year that she was First Lady. The "Mamie Look" involved a full-skirted dress, charm bracelets, pearls, little hats, and bobbed, banged hair. Michelle Obama also received significant attention for her fashion choices: style writer Robin Givhan praised her in The Daily Beast, arguing that the First Lady's style had helped to enhance the public image of the office.
Since the 1920s, many first ladies have become public speakers, adopting specific causes. It also became common for the first lady to hire a staff to support her agenda. Recent causes of the first lady are:
Frank Herbert

Franklin Patrick Herbert Jr. (October 8, 1920 – February 11, 1986) was an American science fiction author best known for the 1965 novel Dune and its five sequels. Though he became famous for his novels, he also wrote short stories and worked as a newspaper journalist, photographer, book reviewer, ecological consultant, and lecturer.
The Dune saga, set in the distant future and taking place over millennia, explores complex themes such as the long-term survival of the human species, human evolution, planetary science and ecology, and the intersection of religion, politics, economics and power in a future where humanity has long since developed interstellar travel and settled many thousands of worlds. Dune is the best-selling science fiction novel of all time, and the entire series is considered to be among the classics of the genre.
Frank Patrick Herbert Jr. was born on October 8, 1920, in Tacoma, Washington, to Frank Patrick Herbert Sr. and Eileen (née McCarthy) Herbert. He had a rural upbringing, spending much of his youth on the Olympic and Kitsap Peninsulas. He was fascinated by books and could read much of the newspaper before the age of five, had an excellent memory, and learned things quickly. He had an early interest in photography, and bought a Kodak box camera at age ten, a new folding camera in his early teens, and a color film camera in the mid-1930s. Because of an impoverished home environment, largely due to the Great Depression, he left home in 1938 to live with an aunt and uncle in Salem, Oregon.
He enrolled at Salem High School (now North Salem High School), where he graduated the next year. In 1939 he lied about his age to get his first newspaper job at the Glendale Star. Herbert then returned to Salem in 1940, where he worked for the Oregon Statesman newspaper (now Statesman Journal) in a variety of positions, including photographer.
Herbert married Flora Lillian Parkinson in San Pedro, California, in 1941. They had one daughter, Penelope (b. February 16, 1942), and divorced in 1943. During 1942, after the U.S. entry into World War II, he served in the Navy's Seabees for six months as a photographer, but suffered a head injury and was given a medical discharge. Herbert subsequently moved to Portland, Oregon, where he reported for The Oregon Journal.
After the war, Herbert attended the University of Washington, where he met Beverly Ann Stuart at a creative writing class in 1946. They were the only students who had sold any work for publication; Herbert had sold two pulp adventure stories to magazines, the first to Esquire in 1945, and Stuart had sold a story to Modern Romance magazine. They married in Seattle in 1946, and had two sons, Brian (b. 1947) and Bruce (1951–1993). In 1949 Herbert and his wife moved to California to work on the Santa Rosa Press-Democrat. Here they befriended the psychologists Ralph and Irene Slattery. The Slatterys introduced Herbert to the work of several thinkers who would influence his writing, including Freud, Jung, Jaspers and Heidegger; they also familiarized Herbert with Zen Buddhism.
Herbert never graduated from college. According to his son Brian, he wanted to study only what interested him and so did not complete the required curriculum. He returned to journalism and worked at the Seattle Star and the Oregon Statesman. He was a writer and editor for the San Francisco Examiner's California Living magazine for a decade.
In a 1973 interview, Herbert stated that he had been reading science fiction "about ten years" before he began writing in the genre, and he listed his favorite authors as H. G. Wells, Robert A. Heinlein, Poul Anderson and Jack Vance.
Herbert's first science fiction story, "Looking for Something", was published in the April 1952 issue of Startling Stories, then a monthly edited by Samuel Mines. Three more of his stories appeared in 1954 issues of Astounding Science Fiction and Amazing Stories. His career as a novelist began in 1955 with the serial publication of Under Pressure in Astounding from November 1955; afterward it was issued as a book by Doubleday titled The Dragon in the Sea. The story explored sanity and madness in the environment of a 21st-century submarine and predicted worldwide conflicts over oil consumption and production. It was a critical success but not a major commercial one. During this time Herbert also worked as a speechwriter for Republican senator Guy Cordon.
Herbert began researching Dune in 1959. He was able to devote himself wholeheartedly to his writing career because his wife returned to work full-time as an advertising writer for department stores, becoming the breadwinner during the 1960s. The novel Dune, published in 1965, spearheaded the Dune franchise. He later told Willis E. McNelly that the novel originated when he was assigned to write a magazine article about sand dunes in the Oregon Dunes near Florence, Oregon. He got overinvolved and ended up with far more raw material than needed for an article. The article was never written, but it planted the seed that led to Dune. Another significant source of inspiration for Dune was Herbert's experiences with psilocybin and his hobby of cultivating mushrooms, according to mycologist Paul Stamets's account.
Dune took six years of research and writing to complete and was much longer than other commercial science fiction of the time. Analog (the renamed Astounding, still edited by John W. Campbell) published it in two parts comprising eight installments, "Dune World" from December 1963 and "Prophet of Dune" in 1965. It was then rejected by nearly twenty book publishers. One editor prophetically wrote, "I might be making the mistake of the decade, but..."
Sterling E. Lanier, an editor at Chilton Book Company (known mainly for its auto-repair manuals), had read the Dune serials and offered a $7,500 advance plus future royalties for the rights to publish them as a hardcover book. Herbert rewrote much of his text. Dune was soon a critical success. It won the Nebula Award for Best Novel in 1965 and shared the Hugo Award in 1966 with ...And Call Me Conrad by Roger Zelazny.
Dune was not an immediate bestseller. By 1968 Herbert had made $20,000 from it, far more than most science fiction novels of the time were generating, but not enough to let him take up full-time writing. However, the publication of Dune did open doors for him. He was the Seattle Post-Intelligencer's education writer from 1969 to 1972 and lecturer in general studies and interdisciplinary studies at the University of Washington (1970–1972). He worked in Vietnam and Pakistan as a social and ecological consultant in 1972. In 1973 he was director-photographer of the television show The Tillers.
I don't worry about inspiration or anything like that.... later, coming back and reading what I have produced, I am unable to detect the difference between what came easily and when I had to sit down and say, "Well, now it's writing time and now I'll write."
By the end of 1972, Herbert had retired from newspaper writing and become a full-time fiction writer. During the 1970s and 1980s, he enjoyed considerable commercial success as an author. He divided his time between homes in Hawaii and Washington's Olympic Peninsula; his home in Port Townsend on the peninsula was intended to be an "ecological demonstration project". During this time he wrote numerous books and promoted ecological and philosophical ideas. He continued his Dune saga with Dune Messiah (1969), Children of Dune (1976), God Emperor of Dune (1981), Heretics of Dune (1984) and Chapterhouse: Dune (1985). Herbert planned to write a seventh novel to conclude the series, but his death in 1986 left storylines unresolved.
Other works by Herbert include The Godmakers (1972), The Dosadi Experiment (1977), The White Plague (1982) and the books he wrote in partnership with Bill Ransom: The Jesus Incident (1979), The Lazarus Effect (1983) and The Ascension Factor (1988), which were sequels to Herbert's 1966 novel Destination: Void. He also helped launch the career of Terry Brooks with a very positive review of Brooks' first novel, The Sword of Shannara, in 1977.
Herbert's change in fortune was shadowed by tragedy. In 1974, his wife Beverly underwent an operation for lung cancer. She lived ten more years, but her health was adversely affected by the surgery. During this period, Herbert was the featured speaker at the Octocon II science fiction convention held at the El Rancho Tropicana in Santa Rosa, California, in October 1978. In 1979, he met anthropologist Jim Funaro with whom he conceived the Contact Conference. Beverly Herbert died on February 7, 1984. Herbert completed and published Heretics of Dune that year. In his afterword to 1985's Chapterhouse: Dune, Herbert included a dedication to Beverly.
1984 was a tumultuous year in Herbert's life. In the same year his wife died, his career took off with the release of David Lynch's film version of Dune. Despite high expectations, a big-budget production design, and an A-list cast, the movie drew mostly poor reviews in the United States, though it was a critical and commercial success in Europe and Japan.
In 1985, after Beverly's death, Herbert married his former Putnam representative Theresa Shackleford. The same year he published Chapterhouse: Dune, which tied up many of the saga's story threads. This would be Herbert's final single work (the collection Eye was published that year, and Man of Two Worlds was published in 1986). He died of a massive pulmonary embolism while recovering from surgery for pancreatic cancer on February 11, 1986, in Madison, Wisconsin, aged 65.
Herbert was a strong critic of the Soviet Union. He was a distant relative of the Republican senator Joseph McCarthy, whom he referred to as "Cousin Joe". However, he was appalled to learn of McCarthy's blacklisting of suspected communists from working in certain careers and believed that he was endangering essential freedoms of citizens of the United States. Herbert believed that governments lie to protect themselves and that, following the Watergate scandal, President Richard Nixon had unwittingly taught an important lesson in not trusting government. Herbert also opposed American involvement in the war in Vietnam.
In Chapterhouse: Dune, he wrote:
All governments suffer a recurring problem: Power attracts pathological personalities. It is not that power corrupts but that it is magnetic to the corruptible. Such people have a tendency to become drunk on violence, a condition to which they are quickly addicted.
Frank Herbert used his science fiction novels to explore complex ideas involving philosophy, religion, psychology, politics and ecology. The underlying thrust of his work was a fascination with the question of human survival and evolution. Herbert has attracted a sometimes fanatical fan base, many of whom have tried to read everything he wrote, fiction or non-fiction, and see Herbert as something of an authority on the subject matters of his books. Indeed, such was the devotion of some of his readers that Herbert was at times asked if he was founding a cult, something he was very much against.
There are a number of key themes in Herbert's work:
Frank Herbert refrained from offering his readers formulaic answers to many of the questions he explored.
Dune and the Dune saga constitute one of the world's best-selling science fiction series and novels; Dune in particular has received widespread critical acclaim, winning the Nebula Award in 1965 and sharing the Hugo Award in 1966, and is frequently considered one of the best science fiction novels ever, if not the best. Locus subscribers voted it the all-time best SF novel in 1975, again in 1987, and the best "before 1990" in 1998.
Dune is considered a landmark novel for a number of reasons:
Herbert never again equaled the critical acclaim he received for Dune. Neither his sequels to Dune nor any of his other books won a Hugo or Nebula Award, although almost all of them were New York Times Best Sellers.
Malcolm Edwards wrote, in The Encyclopedia of Science Fiction:
Much of Herbert's work makes difficult reading. His ideas were genuinely developed concepts, not merely decorative notions, but they were sometimes embodied in excessively complicated plots and articulated in prose which did not always match the level of thinking [...] His best novels, however, were the work of a speculative intellect with few rivals in modern science fiction.
The Science Fiction Hall of Fame inducted Herbert in 2006.
California State University, Fullerton's Pollack Library has several of Herbert's draft manuscripts of Dune and other works, with the author's notes, in their Frank Herbert Archives.
Metro Parks Tacoma built Dune Peninsula and the Frank Herbert Trail at Point Defiance Park in July 2019 to honor the hometown writer.
Beginning in 2012, Herbert's estate and WordFire Press have released four previously unpublished novels in e-book and paperback formats: High-Opp (2012), Angels' Fall (2013), A Game of Authors (2013), and A Thorn in the Bush (2014).
In recent years, Frank Herbert's son Brian Herbert and author Kevin J. Anderson have added to the Dune franchise, using notes left behind by Frank Herbert and discovered over a decade after his death. Brian Herbert and Anderson have written three prequel trilogies (Prelude to Dune, Legends of Dune and Great Schools of Dune) exploring the history of the Dune universe before the events of the original novel, two novels that take place between novels of the original Dune sequels (with plans for more), as well as two post-Chapterhouse Dune novels that complete the original series (Hunters of Dune and Sandworms of Dune) based on Frank Herbert's own Dune 7 outline.
Fictional language

Fictional languages are the subset of constructed languages (conlangs) that have been created as part of a fictional setting (e.g. for use in a book, movie, television show, or video game). Typically they are the creation of one individual, while natural languages evolve out of a particular culture or people group, and other conlangs may have group involvement. Fictional languages are also distinct from natural languages in that they have no native speakers. By contrast, the constructed language of Esperanto now has native speakers.
Fictional languages are intended to be the languages of a fictional world and are often designed with the intent of giving more depth, and an appearance of plausibility, to the fictional worlds with which they are associated. The goal of the author may be to have their characters communicate in a fashion which is both alien and dislocated. Within their fictional world, these languages do function as natural languages, helping to identify certain races or people groups and set them apart from others.
While some less-formed fictional languages are created as distorted versions or dialects of a pre-existing natural language, many are independently designed conlangs with their own lexicon (some more robust than others) and rules of grammar. Some of the latter are fully formed enough to be learned as a speakable language, and many subcultures exist of those who are 'fluent' in one or more of these fictional languages. Often after the creator of a fictional language has accomplished their task, the fandom of that fictional universe will pick up where the creator left off and continue to flesh out the language, making it more like a natural language and therefore more usable.
Fictional languages are separated from artistic languages by both purpose and relative completion: a fictional language often has the least amount of grammar and vocabulary possible, and rarely extends beyond the absolutely necessary. At the same time, some others have developed languages in detail for their own sake, such as J. R. R. Tolkien's Quenya and Sindarin (two Elvish languages), Star Trek's Klingon language and Avatar's Na'vi language which exist as functioning, usable languages.
By analogy with the word "conlang", the term conworld is used to describe these fictional worlds, inhabited by fictional constructed cultures. The conworld influences vocabulary (what words the language will have for flora and fauna, articles of clothing, objects of technology, religious concepts, names of places and tribes, etc.), as well as influencing other factors such as pronouns, or how their cultures view the break-off points between colors or the gender and age of family members. Sound is also a directing factor, as creators seek to show their audience through phonology the type of race or people group to whom the language belongs.
Commercial fictional languages are those languages created for use in various commercial media, such as:
While some languages are created purely from the desire of the creator, language creation can be a profession. In 1974, Victoria Fromkin was the first person hired to create a language (Land of the Lost's Paku). Since then, notable professional language creators have included Marc Okrand (Klingon), David Peterson (Dothraki and others in Game of Thrones), and Paul Frommer (Na'vi).
A notable subgenre of fictional languages is alien languages: those used, or that might be used, by putative extraterrestrial life forms. Alien languages are the subject of both science fiction and scientific research. Perhaps the most fully developed fictional alien language is the Klingon language of the Star Trek universe, a complete constructed language.
The problem of alien language has confronted generations of science fiction writers; some have created fictional languages for their characters to use, while others have circumvented the problem through translation devices or other fantastic technology. For example, the Star Trek universe makes use of a "universal translator", which explains why such different races, often meeting for the first time, are able to communicate with each other. Another more humorous example would be the Babel fish from The Hitchhiker's Guide to the Galaxy, an aurally-inserted fish that instantaneously translates alien speech to the speaker's native language.
While in many cases an alien language is but an element of a fictional reality, in a number of science fiction works the core of the plot involves linguistic and psychological problems of communication between various alien species.
A further subgenre of alien languages are those that are visual, rather than auditory. Notable examples of this type are Sherman's Circular Gallifreyan from BBC's Doctor Who series (although this language was entirely created and spread by fans and all appearances of Gallifreyan in the show are merely meaningless symbols) and the Heptapod language from the 2016 film Arrival.
Internet-based fictional languages and their "conworlds" are hosted on the internet, becoming known to the world through the visitors to the sites that host them. Verdurian, the language of Mark Rosenfelder's Verduria on the planet of Almea, is a flagship Internet-based fictional language. Rosenfelder's website includes resources for other aspiring language creators.
Many other fictional languages and their associated conworlds are created privately by their inventor, known only to the inventor and perhaps a few friends. | [
{
"paragraph_id": 0,
"text": "Fictional languages are the subset of constructed languages (conlangs) that have been created as part of a fictional setting (e.g. for use in a book, movie, television show, or video game). Typically they are the creation of one individual, while natural languages evolve out of a particular culture or people group, and other conlangs may have group involvement. Fictional languages are also distinct from natural languages in that they have no native speakers. By contrast, the constructed language of Esperanto now has native speakers.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Fictional languages are intended to be the languages of a fictional world and are often designed with the intent of giving more depth, and an appearance of plausibility, to the fictional worlds with which they are associated. The goal of the author may be to have their characters communicate in a fashion which is both alien and dislocated. Within their fictional world, these languages do function as natural languages, helping to identify certain races or people groups and set them apart from others.",
"title": ""
},
{
"paragraph_id": 2,
"text": "While some less-formed fictional languages are created as distorted versions or dialects of a pre-existing natural language, many are independently designed conlangs with their own lexicon (some more robust than others) and rules of grammar. Some of the latter are fully formed enough to be learned as a speakable language, and many subcultures exist of those who are 'fluent' in one or more of these fictional languages. Often after the creator of a fictional language has accomplished their task, the fandom of that fictional universe will pick up where the creator left off and continue to flesh out the language, making it more like a natural language and therefore more usable.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Fictional languages are separated from artistic languages by both purpose and relative completion: a fictional language often has the least amount of grammar and vocabulary possible, and rarely extends beyond the absolutely necessary. At the same time, some others have developed languages in detail for their own sake, such as J. R. R. Tolkien's Quenya and Sindarin (two Elvish languages), Star Trek's Klingon language and Avatar's Na'vi language which exist as functioning, usable languages.",
"title": "Purpose"
},
{
"paragraph_id": 4,
"text": "By analogy with the word \"conlang\", the term conworld is used to describe these fictional worlds, inhabited by fictional constructed cultures. The conworld influences vocabulary (what words the language will have for flora and fauna, articles of clothing, objects of technology, religious concepts, names of places and tribes, etc.), as well as influencing other factors such as pronouns, or how their cultures view the break-off points between colors or the gender and age of family members. Sound is also a directing factor, as creators seek to show their audience through phonology the type of race or people group to whom the language belongs.",
"title": "Purpose"
},
{
"paragraph_id": 5,
"text": "Commercial fictional languages are those languages created for use in various commercial media, such as:",
"title": "Commercial fictional languages"
},
{
"paragraph_id": 6,
"text": "While some languages are created purely from the desire of the creator, language creation can be a profession. In 1974, Victoria Fromkin was the first person hired to create a language (Land of the Lost's Paku). Since then, notable professional language creators have included Marc Okrand (Klingon), David Peterson (Dothraki and others in Game of Thrones), and Paul Frommer (Na'vi).",
"title": "Commercial fictional languages"
},
{
"paragraph_id": 7,
"text": "A notable subgenre of fictional languages are alien languages, the ones that are used or might be used by putative extraterrestrial life forms. Alien languages are subject of both science fiction and scientific research. Perhaps the most fully developed fictional alien language is the Klingon language of the Star Trek universe – a fully developed constructed language.",
"title": "Alien languages"
},
{
"paragraph_id": 8,
"text": "The problem of alien language has confronted generations of science fiction writers; some have created fictional languages for their characters to use, while others have circumvented the problem through translation devices or other fantastic technology. For example, the Star Trek universe makes use of a \"universal translator\", which explains why such different races, often meeting for the first time, are able to communicate with each other. Another more humorous example would be the Babel fish from The Hitchhiker's Guide to the Galaxy, an aurally-inserted fish that instantaneously translates alien speech to the speaker's native language.",
"title": "Alien languages"
},
{
"paragraph_id": 9,
"text": "While in many cases an alien language is but an element of a fictional reality, in a number of science fiction works the core of the plot involves linguistic and psychological problems of communication between various alien species.",
"title": "Alien languages"
},
{
"paragraph_id": 10,
"text": "A further subgenre of alien languages are those that are visual, rather than auditory. Notable examples of this type are Sherman's Circular Gallifreyan from BBC's Doctor Who series (although this language was entirely created and spread by fans and all appearances of Gallifreyan in the show are merely meaningless symbols) and the Heptapod language from the 2016 film Arrival.",
"title": "Alien languages"
},
{
"paragraph_id": 11,
"text": "Internet-based fictional languages are hosted along with their \"conworlds\" on the internet, and based at these sites, becoming known to the world through the visitors to these sites. Verdurian, the language of Mark Rosenfelder's Verduria on the planet of Almea, is a flagship Internet-based fictional language. Rosenfelder's website includes resources for other aspiring language creators.",
"title": "Internet-based fictional languages"
},
{
"paragraph_id": 12,
"text": "Many other fictional languages and their associated conworlds are created privately by their inventor, known only to the inventor and perhaps a few friends.",
"title": "Internet-based fictional languages"
}
]
| Fictional languages are the subset of constructed languages (conlangs) that have been created as part of a fictional setting. Typically they are the creation of one individual, while natural languages evolve out of a particular culture or people group, and other conlangs may have group involvement. Fictional languages are also distinct from natural languages in that they have no native speakers. By contrast, the constructed language of Esperanto now has native speakers. Fictional languages are intended to be the languages of a fictional world and are often designed with the intent of giving more depth, and an appearance of plausibility, to the fictional worlds with which they are associated. The goal of the author may be to have their characters communicate in a fashion which is both alien and dislocated. Within their fictional world, these languages do function as natural languages, helping to identify certain races or people groups and set them apart from others. While some less-formed fictional languages are created as distorted versions or dialects of a pre-existing natural language, many are independently designed conlangs with their own lexicon and rules of grammar. Some of the latter are fully formed enough to be learned as a speakable language, and many subcultures exist of those who are 'fluent' in one or more of these fictional languages. Often after the creator of a fictional language has accomplished their task, the fandom of that fictional universe will pick up where the creator left off and continue to flesh out the language, making it more like a natural language and therefore more usable. | 2001-06-16T06:40:00Z | 2023-12-15T00:20:33Z | [
"Template:Main",
"Template:Commons category",
"Template:'",
"Template:Distinguish",
"Template:Multiple issues",
"Template:Sfn",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite book",
"Template:Short description",
"Template:Authority control",
"Template:Constructed languages"
]
| https://en.wikipedia.org/wiki/Fictional_language |
10,854 | Formula One | Formula One (more commonly known as Formula 1 or F1) is the highest class of international racing for open-wheel single-seater formula racing cars sanctioned by the Fédération Internationale de l'Automobile (FIA). The FIA Formula One World Championship has been one of the premier forms of racing around the world since its inaugural season in 1950. The word formula in the name refers to the set of rules to which all participants' cars must conform. A Formula One season consists of a series of races, known as Grands Prix. Grands Prix take place in multiple countries and continents around the world on either purpose-built circuits or closed public roads.
A points system is used at Grands Prix to determine two annual World Championships: one for the drivers, and one for the constructors (the teams). Each driver must hold a valid Super Licence, the highest class of racing licence issued by the FIA, and the races must be held on grade one tracks, the highest grade-rating issued by the FIA for tracks.
Formula One cars are the fastest regulated road-course racing cars in the world, owing to very high cornering speeds achieved through generating large amounts of aerodynamic downforce. Much of this downforce is generated by front and rear wings, which have the side effect of causing severe turbulence behind each car. The turbulence reduces the downforce generated by the cars following directly behind, making it hard to overtake. Major changes made to the cars for the 2022 season have resulted in greater use of ground effect aerodynamics and modified wings to reduce the turbulence behind the cars, with the goal of making overtaking easier. The cars are dependent on electronics, aerodynamics, suspension and tyres. Traction control, launch control, and automatic shifting, plus other electronic driving aids, were first banned in 1994. They were briefly reintroduced in 2001; launch control and automatic shifting were banned again from 2004, and traction control from 2008.
With the average annual cost of running a team – designing, building, and maintaining cars, pay, transport – at approximately £220,000,000 (or $265,000,000), the sport's financial and political battles are widely reported. On 23 January 2017, Liberty Media completed its acquisition of the Formula One Group from private-equity firm CVC Capital Partners for £6.4bn ($8bn).
Formula One originated from the European Motor Racing Championships of the 1920s and 1930s. The formula consists of a set of rules that all participants' cars must follow. Formula One was a new formula agreed upon during 1946, to officially become effective from 1 January 1947. The first Grand Prix run in accordance with the new regulations was the 1946 Turin Grand Prix, held in anticipation of the formula's official start.
Before World War II, a number of Grand Prix racing organisations had made suggestions for a new championship to replace the European Championship, but due to the suspension of racing during the conflict, the new International Formula for cars was not formalised until 1946, to become effective from 1 January 1947. The new World Championship was instituted to commence in 1950.
The first world championship race took place at Silverstone Circuit in the United Kingdom on 13 May 1950. Giuseppe Farina, competing for Alfa Romeo, won the first Drivers' World Championship, narrowly defeating his teammate Juan Manuel Fangio. Fangio went on to win the championship in 1951, 1954, 1955, 1956, and 1957. This set the record for the most World Championships won by a single driver, a record that stood for 46 years until Michael Schumacher won his sixth championship in 2003.
A Constructors' Championship was added in the 1958 season. Stirling Moss, despite being regarded as one of the greatest Formula One drivers in the 1950s and 1960s, never won the Formula One championship. Between 1955 and 1961, Moss finished second in the championship four times and third the other three times. Fangio won 24 of the 52 races he entered – still the record for the highest Formula One wins percentage by an individual driver. National championships existed in South Africa and the UK in the 1960s and 1970s. Non-championship Formula One events were held by promoters for many years. Due to the increasing cost of competition, the last of these was held in 1983.
This era featured teams managed by road-car manufacturers such as Alfa Romeo, Ferrari, Mercedes-Benz and Maserati. The first seasons featured pre-war cars like Alfa Romeo's 158, which were front-engined, with narrow tyres and 1.5-litre supercharged or 4.5-litre naturally aspirated engines. The 1952 and 1953 seasons were run to Formula Two regulations, for smaller, less powerful cars, due to concerns over the lack of Formula One cars available. When a new Formula One formula for engines limited to 2.5 litres was reinstated to the world championship for 1954, Mercedes-Benz introduced their W196. The W196 featured innovations never before seen on Formula One cars, such as desmodromic valves, fuel injection and enclosed streamlined bodywork. Mercedes drivers won the championship for the next two years, before the team withdrew from all motorsport competitions due to the 1955 Le Mans disaster.
The first major technological development in the sport was Cooper's introduction of mid-engined cars. Jack Brabham, the world champion in 1959, 1960, and 1966, soon proved the mid-engine's superiority over all other engine positions. By 1961 all teams had switched to mid-engined cars. The Ferguson P99, a four-wheel drive design, was the last front-engined Formula One car to enter a world championship race. It was entered in the 1961 British Grand Prix, the only front-engined car to compete that year.
During 1962, Lotus introduced a car with an aluminium-sheet monocoque chassis instead of the traditional space-frame design. This proved to be the greatest technological breakthrough since the introduction of mid-engined cars.
In 1968, sponsorship was introduced to the sport. Team Gunston became the first team to run cigarette sponsorship when they entered their privately run Brabham cars in the orange, brown and gold colours of Gunston cigarettes at the 1968 South African Grand Prix on 1 January 1968. Five months later, Lotus became the first works team to follow this example when they entered their cars painted in the red, gold and white colours of Imperial Tobacco's Gold Leaf livery at the 1968 Spanish Grand Prix.
Aerodynamic downforce slowly gained importance in car design with the appearance of aerofoils during the 1968 season. During the late 1970s, Lotus introduced ground-effect aerodynamics, previously used on Jim Hall's Chaparral 2J in 1970, that provided enormous downforce and greatly increased cornering speeds. The aerodynamic forces pressing the cars to the track were up to five times the car's weight. As a result, extremely stiff springs were needed to maintain a constant ride height, leaving the suspension virtually solid. This meant that the drivers were depending entirely on the tyres for any small amount of cushioning of the car and driver from irregularities of the road surface.
Beginning in the 1970s, Bernie Ecclestone rearranged the management of Formula One's commercial rights; he is widely credited with transforming the sport into the multibillion-dollar business it now is. When Ecclestone bought the Brabham team during 1971, he gained a seat on the Formula One Constructors' Association and during 1978, he became its president. Previously, the circuit owners controlled the income of the teams and negotiated with each individually; however, Ecclestone persuaded the teams to "hunt as a pack" through FOCA. He offered Formula One to circuit owners as a package, which they could take or leave. In return for the package, almost all that was required was to surrender trackside advertising.
The formation of the Fédération Internationale du Sport Automobile (FISA) during 1979 set off the FISA–FOCA war, during which FISA and its president Jean-Marie Balestre argued repeatedly with FOCA over television revenues and technical regulations. The Guardian said that Ecclestone and Max Mosley "used [FOCA] to wage a guerrilla war with a very long-term aim in view". FOCA threatened to establish a rival series, boycotted a Grand Prix and FISA withdrew its sanction from races. The result was the 1981 Concorde Agreement, which guaranteed technical stability, as teams were to be given reasonable notice of new regulations. Although FISA asserted its right to the TV revenues, it handed the administration of those rights to FOCA.
FISA imposed a ban on ground-effect aerodynamics during 1983. By then, however, turbocharged engines, which Renault had pioneered in 1977, were producing over 520 kW (700 bhp) and were essential to be competitive. By 1986, a BMW turbocharged engine achieved a flash reading of 5.5 bar (80 psi) pressure, estimated to be over 970 kW (1,300 bhp) in qualifying for the Italian Grand Prix. The next year, power in race trim reached around 820 kW (1,100 bhp), with boost pressure limited to only 4.0 bar. These cars were the most powerful open-wheel circuit racing cars ever. To reduce engine power output and thus speeds, the FIA limited fuel tank capacity in 1984, and boost pressures in 1988, before banning turbocharged engines completely in 1989.
The development of electronic driver aids began during the 1980s. Lotus began to develop a system of active suspension, which first appeared during 1983 on the Lotus 92. By 1987, this system had been perfected and was driven to victory by Ayrton Senna in the Monaco Grand Prix that year. In the early 1990s, other teams followed suit and semi-automatic gearboxes and traction control were a natural progression. The FIA, due to complaints that technology was determining the outcome of races more than driver skill, banned many such aids for the 1994 season. This resulted in cars that were previously dependent on electronic aids becoming very "twitchy" and difficult to drive. Observers felt the ban on driver aids was in name only, as they "proved difficult to police effectively".
The teams signed a second Concorde Agreement during 1992 and a third in 1997.
On the track, the McLaren and Williams teams dominated the 1980s and 1990s. Brabham were also competitive during the early part of the 1980s, winning two Drivers' Championships with Nelson Piquet. Powered by Porsche, Honda, and Mercedes-Benz, McLaren won sixteen championships (seven constructors' and nine drivers') in that period, while Williams used engines from Ford, Honda, and Renault to also win sixteen titles (nine constructors' and seven drivers'). The rivalry between racers Ayrton Senna and Alain Prost became F1's central focus during 1988 and continued until Prost retired at the end of 1993. Senna died at the 1994 San Marino Grand Prix after crashing into a wall on the exit of the notorious curve Tamburello. The FIA has worked to improve the sport's safety standards since that weekend, during which Roland Ratzenberger also died in an accident during Saturday qualifying. No driver died of injuries sustained on the track at the wheel of a Formula One car for 20 years until the 2014 Japanese Grand Prix, where Jules Bianchi collided with a recovery vehicle after aquaplaning off the circuit, dying nine months later from his injuries. Since 1994, three track marshals have died, one at the 2000 Italian Grand Prix, the second at the 2001 Australian Grand Prix and the third at the 2013 Canadian Grand Prix.
Since the deaths of Senna and Ratzenberger, the FIA has used safety as a reason to impose rule changes that otherwise, under the Concorde Agreement, would have had to be agreed upon by all the teams – most notably the changes introduced for 1998. This so-called 'narrow track' era resulted in cars with smaller rear tyres, a narrower track overall, and the introduction of grooved tyres to reduce mechanical grip. The objective was to reduce cornering speeds and to produce racing similar to rainy conditions by enforcing a smaller contact patch between tyre and track. This, according to the FIA, was to reduce cornering speeds in the interest of safety.
Results were mixed, as the lack of mechanical grip resulted in the more ingenious designers clawing back the deficit with aerodynamic grip. This resulted in pushing more force onto the tyres through wings and aerodynamic devices, which in turn resulted in less overtaking as these devices tended to make the wake behind the car turbulent or 'dirty'. This prevented other cars from following closely due to their dependence on 'clean' air to make the car stick to the track. The grooved tyres also had the unfortunate side effect of initially being of a harder compound to be able to hold the grooved tread blocks, which resulted in spectacular accidents in times of aerodynamic grip failure, as the harder compound could not grip the track as well.
Drivers from McLaren, Williams, Renault (formerly Benetton), and Ferrari, dubbed the "Big Four", won every World Championship from 1984 to 2008. The teams won every Constructors' Championship from 1979 to 2008, as well as placing themselves as the top four teams in the Constructors' Championship in every season between 1989 and 1997, and winning every race but one (the 1996 Monaco Grand Prix) between 1988 and 1997. Due to the technological advances of the 1990s, the cost of competing in Formula One increased dramatically, thus increasing financial burdens. This, combined with the dominance of four teams (largely funded by big car manufacturers such as Mercedes-Benz), caused the poorer independent teams to struggle not only to remain competitive but to stay in business. This effectively forced several teams to withdraw.
Michael Schumacher and Ferrari won five consecutive Drivers' Championships (2000–2004) and six consecutive Constructors' Championships (1999–2004). Schumacher set many new records, including those for Grand Prix wins (91, since beaten by Lewis Hamilton), wins in a season (thirteen, since beaten by Max Verstappen), and most Drivers' Championships (seven, tied with Lewis Hamilton as of 2021). Schumacher's championship streak ended on 25 September 2005, when Renault driver Fernando Alonso became Formula One's youngest champion at that time (until Lewis Hamilton in 2008 and followed by Sebastian Vettel in 2010). During 2006, Renault and Alonso won both titles again. Schumacher retired at the end of 2006 after sixteen years in Formula One, but came out of retirement for the 2010 season, racing for the newly formed Mercedes works team, following the rebrand of Brawn GP.
During this period, the championship rules were changed frequently by the FIA with the intention of improving the on-track action and cutting costs. Team orders, legal since the championship started during 1950, were banned during 2002, after several incidents, in which teams openly manipulated race results, generating negative publicity, most famously by Ferrari at the 2002 Austrian Grand Prix. Other changes included the qualifying format, the points scoring system, the technical regulations, and rules specifying how long engines and tyres must last. A "tyre war" between suppliers Michelin and Bridgestone saw lap times fall, although, at the 2005 United States Grand Prix at Indianapolis, seven out of ten teams did not race when their Michelin tyres were deemed unsafe for use, leading to Bridgestone becoming the sole tyre supplier to Formula One for the 2007 season by default. Bridgestone then went on to sign a contract on 20 December 2007 that officially made them the exclusive tyre supplier for the next three seasons.
During 2006, Max Mosley outlined a "green" future for Formula One, in which efficient use of energy would become an important factor.
Starting in 2000, with Ford's purchase of Stewart Grand Prix to form the Jaguar Racing team, new manufacturer-owned teams entered Formula One for the first time since the departure of Alfa Romeo and Renault at the end of 1985. By 2006, the manufacturer teams – Renault, BMW, Toyota, Honda, and Ferrari – dominated the championship, taking five of the first six places in the Constructors' Championship. The sole exception was McLaren, which at the time was part-owned by Mercedes-Benz. Through the Grand Prix Manufacturers Association (GPMA), the manufacturers negotiated a larger share of Formula One's commercial profit and a greater say in the running of the sport.
In 2008 and 2009, Honda, BMW, and Toyota all withdrew from Formula One racing within the space of a year, blaming the economic recession. This resulted in the end of manufacturer dominance within the sport. The Honda F1 team went through a management buyout to become Brawn GP with Ross Brawn and Nick Fry running and owning the majority of the organisation. Brawn GP laid off hundreds of employees, but eventually won the year's world championships. BMW F1 was bought out by the original founder of the team, Peter Sauber. The Lotus F1 Team were another formerly manufacturer-owned team that reverted to "privateer" ownership, together with the buy-out of the Renault team by Genii Capital investors. A link with their previous owners still survived, however, with their car continuing to be powered by a Renault engine until 2014.
McLaren also announced that it was to reacquire the shares in its team from Mercedes-Benz (McLaren's partnership with Mercedes was reported to have started to sour with the McLaren Mercedes SLR road car project and tough F1 championships which included McLaren being found guilty of spying on Ferrari). Hence, during the 2010 season, Mercedes-Benz re-entered the sport as a manufacturer after its purchase of Brawn GP and split with McLaren after 15 seasons with the team.
During the 2009 season of Formula One, the sport was gripped by the FIA–FOTA dispute. The FIA President Max Mosley proposed numerous cost-cutting measures for the following season, including an optional budget cap for the teams; teams electing to take the budget cap would be granted greater technical freedom, adjustable front and rear wings and an engine not subject to a rev limiter. The Formula One Teams Association (FOTA) believed that allowing some teams to have such technical freedom would have created a 'two-tier' championship, and thus requested urgent talks with the FIA. However, talks broke down and FOTA teams announced, with the exception of Williams and Force India, that 'they had no choice' but to form a breakaway championship series.
On 24 June, an agreement was reached between Formula One's governing body and the teams to prevent a breakaway series. It was agreed teams must cut spending to the level of the early 1990s within two years; exact figures were not specified, and Max Mosley agreed he would not stand for re-election to the FIA presidency in October. Following further disagreements, after Max Mosley suggested he would stand for re-election, FOTA made it clear that breakaway plans were still being pursued. On 8 July, FOTA issued a press release stating they had been informed they were not entered for the 2010 season, and an FIA press release said the FOTA representatives had walked out of the meeting. On 1 August, it was announced FIA and FOTA had signed a new Concorde Agreement, bringing an end to the crisis and securing the sport's future until 2012.
To compensate for the loss of manufacturer teams, four new teams were accepted entry into the 2010 season ahead of a much anticipated 'cost-cap'. Entrants included a reborn Team Lotus – which was led by a Malaysian consortium including Tony Fernandes, the boss of Air Asia; Hispania Racing – the first Spanish Formula One team; as well as Virgin Racing – Richard Branson's entry into the series following a successful partnership with Brawn the year before. They were also joined by the US F1 Team, which planned to run out of the United States as the only non-European-based team in the sport. Financial issues befell the squad before they even made the grid. Despite the entry of these new teams, the proposed cost-cap was repealed and these teams – who did not have the budgets of the midfield and top-order teams – ran around at the back of the field until they inevitably collapsed; HRT in 2012, Caterham (formerly Lotus) in 2014 and Manor (formerly Virgin then Marussia), having survived falling into administration in 2014, went under at the end of 2016.
A major rule shake-up in 2014 saw the 2.4-litre naturally aspirated V8 engines replaced by 1.6-litre turbocharged hybrid power units. This prompted Honda to return to the sport in 2015 as the championship's fourth power unit manufacturer. Mercedes emerged as the dominant force after the rule shake-up, with Lewis Hamilton winning the championship closely followed by his main rival and teammate, Nico Rosberg, with the team winning 16 out of the 19 races that season. The team continued this form in the following two seasons, again winning 16 races in 2015 before taking a record 19 wins in 2016, with Hamilton claiming the title in the former year and Rosberg winning it in the latter by five points. The 2016 season also saw a new team, Haas, join the grid, while Max Verstappen became the youngest-ever race winner at the age of 18 in Spain.
After revised aerodynamic regulations were introduced, the 2017 and 2018 seasons featured a title battle between Mercedes and Ferrari. However, Mercedes ultimately won the titles with multiple races to spare and continued to experience dominance in the next two years, eventually winning seven consecutive Drivers' Championships from 2014 to 2020 and eight consecutive Constructors' titles from 2014 to 2021. During this eight-year period between 2014 and 2021, 111 of the 160 races were won by a Mercedes driver, with Hamilton winning 81 of these races and taking six Drivers' Championships during this period to equal Schumacher's record of seven titles. In 2021, the Honda-powered Red Bull team began to seriously challenge Mercedes, with their driver Max Verstappen beating Hamilton to the Drivers' Championship after a season-long battle that saw the pair exchange the championship lead multiple times.
This era has seen an increase in car manufacturer presence in the sport. After Honda's return as an engine manufacturer in 2015, Renault came back as a team in 2016 after buying back the Lotus F1 team. In 2018, Aston Martin and Alfa Romeo became Red Bull and Sauber's title sponsors, respectively. Sauber was rebranded as Alfa Romeo Racing for the 2019 season, while Racing Point part-owner Lawrence Stroll bought a stake in Aston Martin to rebrand the Racing Point team as Aston Martin for 2021. In August 2020, a new Concorde Agreement was signed by all ten F1 teams committing them to the sport until 2025, including a $145M budget cap for car development to support equal competition and sustainable development in the future.
The COVID-19 pandemic forced the sport to adapt to budgetary and logistical limitations. A significant overhaul of the technical regulations intended to be introduced in the 2021 season was pushed back to 2022, with constructors instead using their 2020 chassis for two seasons and a token system introduced to limit which parts could be modified. The start of the 2020 season was delayed by several months, and both the 2020 and 2021 seasons were subject to several postponements, cancellations and rescheduling of races due to the shifting restrictions on international travel. Many races took place behind closed doors and with only essential personnel present to maintain social distancing.
In 2022, a major rule and car design change was announced by the F1 governing body, intended to promote closer racing through the use of ground effects, new aerodynamics, larger wheels with low-profile tires, and redesigned nose and wing regulations. The 2022 Constructors' and Drivers' Championships were won by Red Bull and Verstappen, respectively.
A Formula One Grand Prix event spans a weekend. It typically begins with two free practice sessions on Friday, and one free practice on Saturday. Additional drivers (commonly known as third drivers) are allowed to run on Fridays, but only two cars may be used per team, requiring a race driver to give up their seat. A qualifying session is held after the last free practice session. This session determines the starting order for the race on Sunday.
Each driver may use no more than thirteen sets of dry-weather tyres, four sets of intermediate tyres, and three sets of wet-weather tyres during a race weekend.
For much of the sport's history, qualifying sessions differed little from practice sessions; drivers would have one or more sessions in which to set their fastest time, with the grid order determined by each driver's best single lap, with the fastest getting first place on the grid, referred to as pole position. From 1996 to 2002, the format was a one-hour shootout. This approach was abandoned at the end of 2002 because teams avoided running in the early part of the session in order to take advantage of better track conditions later on.
Grids were generally limited to 26 cars – if the race had more entries, qualification would also decide which drivers would start the race. During the early 1990s, the number of entries was so high that the worst-performing teams had to enter a pre-qualifying session, with the fastest cars allowed through to the main qualifying session. The qualifying format began to change in the early 2000s, with the FIA experimenting with limiting the number of laps, determining the aggregate time over two sessions, and allowing each driver only one qualifying lap.
The current qualifying system was adopted in the 2006 season. Known as "knock-out" qualifying, it is split into three periods, known as Q1, Q2, and Q3. In each period, drivers run qualifying laps to attempt to advance to the next period, with the slowest drivers being "knocked out" of qualification (but not necessarily the race) at the end of the period and their grid positions set within the rearmost five based on their best lap times. Drivers are allowed as many laps as they wish within each period. After each period, all times are reset, and only a driver's fastest lap in that period (barring infractions) counts. Any timed lap started before the end of that period may be completed and will count toward that driver's placement. The number of cars eliminated in each period is dependent on the total number of cars entered into the championship.
Currently, with 20 cars, Q1 runs for 18 minutes, and eliminates the slowest five drivers. During this period, any driver whose best lap takes longer than 107% of the fastest time in Q1 will not be allowed to start the race without permission from the stewards. Otherwise, all drivers proceed to the race albeit in the worst starting positions. This rule does not affect drivers in Q2 or Q3. In Q2, the 15 remaining drivers have 15 minutes to set one of the ten fastest times and proceed to the next period. Finally, Q3 lasts 12 minutes and sees the remaining ten drivers decide the first ten grid positions. At the beginning of the 2016 Formula 1 season, the FIA introduced a new qualifying format, whereby drivers were knocked out every 90 seconds after a certain amount of time had passed in each session. The aim was to mix up grid positions for the race, but due to unpopularity, the FIA reverted to the above qualifying format for the Chinese GP, after running the format for only two races.
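To make the elimination structure of knock-out qualifying concrete, the following is a minimal Python sketch of the format described above. It assumes a simplified 20-car model in which every driver sets a representative best lap in each period; the function and variable names (knockout_qualifying, best_laps) are hypothetical, and only the elimination counts and the 107% rule are taken from the description above.

def knockout_qualifying(best_laps, eliminate=5, q3_size=10):
    # best_laps maps each period ("Q1", "Q2", "Q3") to {driver: best lap time in seconds}.
    # Returns the provisional grid order plus any drivers caught by the 107% rule in Q1.
    q1 = sorted(best_laps["Q1"], key=best_laps["Q1"].get)      # fastest first
    cutoff = 1.07 * best_laps["Q1"][q1[0]]                      # 107% reference time
    outside_107 = [d for d in q1 if best_laps["Q1"][d] > cutoff]
    grid_16_to_20 = q1[-eliminate:]                             # slowest five knocked out in Q1
    q2 = sorted(q1[:-eliminate], key=best_laps["Q2"].get)       # 15 survivors re-ranked on Q2 times
    grid_11_to_15 = q2[-eliminate:]                             # next five knocked out in Q2
    q3 = sorted(q2[:q3_size], key=best_laps["Q3"].get)          # final ten decide pole and positions 2-10
    return q3 + grid_11_to_15 + grid_16_to_20, outside_107

In a real session drivers may set no representative time or have laps deleted, and grid penalties are applied afterwards, so this sketch captures only the elimination logic.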
Each car is allocated one set of the softest tyres for use in Q3. The cars that qualify for Q3 must return them after Q3; the cars that do not qualify for Q3 can use them during the race. As of 2022, all drivers are given a free choice of tyre to use at the start of the Grand Prix, whereas in previous years only the drivers that did not participate in Q3 had free tyre choice for the start of the race. Any penalties that affect grid position are applied at the end of qualifying. Grid penalties can be applied for driving infractions in the previous or current Grand Prix, or for changing a gearbox or engine component. If a car fails scrutineering, the driver will be excluded from qualifying but will be allowed to start the race from the back of the grid at the race steward's discretion.
2021 saw the trialling of a 'sprint qualifying' race on the Saturday of three race weekends, with the intention of testing the new approach to qualifying. The traditional qualifying would determine the starting order for the sprint, and the result of the sprint would then determine the start order for the Grand Prix. The system returned for the 2022 season, now titled the 'sprint'. From 2023, sprint races no longer impacted the start order for the main race, which would be determined by traditional qualifying. Sprints would have their own qualifying session, titled the 'sprint shootout'; such a system made its debut at the 2023 Azerbaijan Grand Prix and is set to be used throughout all sprint sessions in place of the traditional second free practice session. Sprint qualifying sessions are much shorter than traditional qualifying, and in each session teams must fit new tyres - mediums for SQ1 and SQ2, and softs for SQ3 - otherwise they cannot participate in the session.
The race begins with a warm-up lap, after which the cars assemble on the starting grid in the order they qualified. This lap is often referred to as the formation lap, as the cars lap in formation with no overtaking (although a driver who makes a mistake may regain lost ground). The warm-up lap allows drivers to check the condition of the track and their car, gives the tyres a chance to warm up to increase traction and grip, and also gives the pit crews time to clear themselves and their equipment from the grid for the race start.
Once all the cars have formed on the grid, after the medical car positions itself behind the pack, a light system above the track indicates the start of the race: five red lights are illuminated at intervals of one second; they are all then extinguished simultaneously after an unspecified time (typically less than 3 seconds) to signal the start of the race. The start procedure may be abandoned if a driver stalls on the grid or on the track in an unsafe position, signalled by raising their arm. If this happens, the procedure restarts: a new formation lap begins with the offending car removed from the grid. The race may also be restarted in the event of a serious accident or dangerous conditions, with the original start voided. The race may be started from behind the Safety Car if race control feels a racing start would be excessively dangerous, such as extremely heavy rainfall. As of the 2019 season, there will always be a standing restart. If due to heavy rainfall a start behind the safety car is necessary, then after the track has dried sufficiently, drivers will form up for a standing start. There is no formation lap when races start behind the Safety Car.
Under normal circumstances, the winner of the race is the first driver to cross the finish line having completed a set number of laps. Race officials may end the race early (putting out a red flag) due to unsafe conditions such as extreme rainfall, and it must finish within two hours, although races are only likely to last this long in the case of extreme weather or if the safety car is deployed during the race. When a situation justifies pausing the race without terminating it, the red flag is deployed; since 2005, a ten-minute warning is given before the race is resumed behind the safety car, which leads the field for a lap before it returns to the pit lane (before then the race resumed in race order from the penultimate lap before the red flag was shown).
In the 1950s, race distances varied from 300 km (190 mi) to 600 km (370 mi). The maximum race length was reduced to 400 km (250 mi) in 1966 and 325 km (202 mi) in 1971. The race length was standardised to the current 305 km (190 mi) in 1989. However, street races like Monaco have shorter distances, to keep under the two-hour limit.
Drivers may overtake one another for position over the course of the race. If a leader comes across a backmarker (slower car) who has completed fewer laps, the back marker is shown a blue flag telling them that they are obliged to allow the leader to overtake them. The slower car is said to be "lapped" and, once the leader finishes the race, is classified as finishing the race "one lap down". A driver can be lapped numerous times, by any car in front of them. A driver who fails to complete more than 90% of the race distance is shown as "not classified" in the results.
Throughout the race, drivers may make pit stops to change tyres and repair damage (from 1994 to 2009 inclusive, they could also refuel). Different teams and drivers employ different pit stop strategies in order to maximise their car's potential. Three dry tyre compounds, with different durability and adhesion characteristics, are available to drivers. Over the course of a race, drivers must use two of the three available compounds. The different compounds have different levels of performance and choosing when to use which compound is a key tactical decision to make. Different tyres have different colours on their sidewalls; this allows spectators to understand the strategies.
Under wet conditions, drivers may switch to one of two specialised wet weather tyres with additional grooves (one "intermediate", for mild wet conditions, such as after recent rain, one "full wet", for racing in or immediately after rain). A driver must make at least one stop to use two tyre compounds; up to three stops are typically made, although further stops may be necessary to fix damage or if weather conditions change. If rain tyres are used, drivers are no longer obliged to use both types of dry tyres.
The role of the race director generally involves managing the logistics of each F1 Grand Prix, inspecting cars in parc fermé before a race, enforcing FIA rules, and controlling the lights which start each race. As the head of the race officials, the race director also plays a large role in sorting disputes among teams and drivers. Penalties, such as drive-through penalties (and stop-and-go penalties), demotions on a pre-race start grid, race disqualifications, and fines can all be handed out should parties break regulations. As of 2023, the race director is Niels Wittich, with Herbie Blash as a permanent advisor.
In the event of an incident that risks the safety of competitors or trackside race marshals, race officials may choose to deploy the safety car. This in effect suspends the race, with drivers following the safety car around the track at its speed in race order, with overtaking not permitted. Cars that have been lapped may, during the safety car period and depending on circumstances permitted by the race director, be allowed to un-lap themselves in order to ensure a smoother restart and to avoid blue flags being immediately thrown upon the resumption of the race with many of the cars in very close proximity to each other. The safety car circulates until the danger is cleared; after it comes in, the race restarts with a "rolling start". Pit stops are permitted under the safety car. Since 2000, the main safety car driver has been German ex-racing driver Bernd Mayländer. On the lap in which the safety car returns to the pits, the leading car takes over the role of the safety car until the timing line. After crossing this line, drivers are allowed to start racing for track position once more. Mercedes-Benz supplies Mercedes-AMG models to Formula One to use as the safety cars. From 2021 onwards, Aston Martin supplies the Vantage to Formula One to use as the safety car, sharing the duty with Mercedes-Benz.
Flags specifications and usage are prescribed by Appendix H of the FIA's International Sporting Code.
The format of the race has changed little through Formula One's history. The main changes have revolved around what is allowed at pit stops. In the early days of Grand Prix racing, a driver would be allowed to continue a race in their teammate's car should theirs develop a problem – in the modern era, cars are so carefully fitted to drivers that this has become impossible. In recent years, the emphasis has been on changing refuelling and tyre change regulations.
Since the 2010 season, refuelling – which was reintroduced in 1994 – has not been allowed, to encourage less tactical racing following safety concerns. The rule requiring both compounds of tyre to be used during the race was introduced in 2007, again to encourage racing on the track. The safety car is another relatively recent innovation that reduced the need to deploy the red flag, allowing races to be completed on time for a growing international live television audience.
A driver must finish within the top ten to receive a point for setting the fastest lap of the race. If the driver who set the fastest lap finishes outside of the top ten, then the point for fastest lap is not awarded for that race.
Various systems for awarding championship points have been used since 1950. The current system, in place since 2010, awards the top ten cars points in the Drivers' and Constructors' Championships, with the winner receiving 25 points. All points won at each race are added up, and the driver and constructor with the most points at the end of the season are crowned World Champions. Regardless of whether a driver stays with the same team throughout the season, or switches teams, all points earned by them count for the Drivers' Championship.
A driver must be classified in order to receive points; as of 2022, a driver must complete at least 90% of the winner's race distance in order to be classified. Therefore, it is possible for a driver to receive points even if they retired before the end of the race.
From some time between the 1977 and 1980 seasons to the end of the 2021 season, if fewer than 75% of the race laps were completed by the winner, then only half of the usual points were awarded to the drivers and constructors. This has happened on only five occasions in the history of the championship, and it had a notable influence on the final standing of the 1984 season. The last occurrence was at the 2021 Belgian Grand Prix when the race was called off after just three laps behind a safety car due to torrential rain. The half points rule was replaced by a distance-dependent gradual scale system for 2022.
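As a worked illustration of the scoring arithmetic, here is a minimal Python sketch of how points for a single race might be tallied. The text above only states that the winner receives 25 points; the full 25-18-15-12-10-8-6-4-2-1 scale for the top ten is the standard distribution used since 2010 and is included here as an assumption, as are the hypothetical function and variable names. The pre-2022 half-points rule and the 2022 distance-based scale are not modelled.

POINTS_BY_POSITION = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]  # positions 1-10 under the post-2010 system

def race_points(classified_finishers, fastest_lap_driver=None):
    # classified_finishers: drivers in finishing order who were classified
    # (i.e. completed at least 90% of the winner's race distance).
    points = {}
    for position, driver in enumerate(classified_finishers):
        points[driver] = POINTS_BY_POSITION[position] if position < len(POINTS_BY_POSITION) else 0
    # The bonus point for fastest lap is only awarded to a top-ten finisher.
    if fastest_lap_driver in classified_finishers[:10]:
        points[fastest_lap_driver] = points.get(fastest_lap_driver, 0) + 1
    return points

Championship standings are then simply the per-race totals summed over the season, credited to the driver for the Drivers' Championship and to their constructor for the Constructors' Championship.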
A Formula One constructor is the entity credited for designing the chassis and the engine. If both are designed by the same company, that company receives sole credit as the constructor (e.g., Ferrari). If they are designed by different companies, both are credited, and the name of the chassis designer is placed before that of the engine designer (e.g., McLaren-Mercedes). All constructors are scored individually, even if they share either chassis or engine with another constructor (e.g., Williams-Ford, Williams-Honda in 1983).
Since 1981, Formula One teams have been required to build the chassis in which they compete, and consequently the distinction between the terms "team" and "constructor" became less pronounced, though engines may still be produced by a different entity. This requirement distinguishes the sport from series such as the IndyCar Series which allows teams to purchase chassis, and "spec series" such as Formula 2 which require all cars be kept to an identical specification. It also effectively prohibits privateers, which were common even in Formula One well into the 1970s.
The sport's debut season, 1950, saw eighteen teams compete, but due to high costs, many dropped out quickly. In fact, such was the scarcity of competitive cars for much of the first decade of Formula One that Formula Two cars were admitted to fill the grids. Ferrari is the oldest Formula One team, the only still-active team which competed in 1950.
Early manufacturer involvement came in the form of a "factory team" or "works team" (that is, one owned and staffed by a major car company), such as those of Alfa Romeo, Ferrari, or Renault. Ferrari holds the record for having won the most Constructors' Championships (sixteen).
Companies such as Climax, Repco, Cosworth, Hart, Judd and Supertec, which had no direct team affiliation, often sold engines to teams that could not afford to manufacture them. In the early years, independently owned Formula One teams sometimes also built their engines, though this became less common with the increased involvement of major car manufacturers such as BMW, Ferrari, Honda, Mercedes-Benz, Renault, and Toyota, whose large budgets rendered privately built engines less competitive. Cosworth was the last independent engine supplier. It is estimated the major teams spend between €100 and €200 million ($125–$225 million) per year per manufacturer on engines alone.
In the 2007 season, for the first time since the 1981 rule, two teams used chassis built by other teams. Super Aguri started the season using a modified Honda Racing RA106 chassis (used by Honda the previous year), while Scuderia Toro Rosso used the same chassis used by the parent Red Bull Racing team, which was formally designed by a separate subsidiary. The usage of these loopholes was ended for 2010 with the publication of new technical regulations, which require each constructor to own the intellectual property rights to their chassis. The regulations continue to allow a team to subcontract the design and construction of the chassis to a third party, an option used by the HRT team in 2010 and Haas currently.
Although teams rarely disclose information about their budgets, it is estimated they range from US$66 million to US$400 million each.
Entering a new team in the Formula One World Championship requires a $200 million up-front payment to the FIA, which is then shared equally among the existing teams. As a consequence, constructors desiring to enter Formula One often prefer to buy an existing team: BAR's purchase of Tyrrell and Midland's purchase of Jordan allowed both of these teams to sidestep the large deposit and secure the benefits the team already had, such as TV revenue.
Seven out of the ten teams competing in Formula One are based close to London in an area centred around Oxford. Ferrari have both their chassis and engine assembly in Maranello, Italy. The AlphaTauri team are based close to Ferrari in Faenza, whilst the Alfa Romeo team are based near Zurich in Switzerland.
Every team in Formula One must run two cars in every session in a Grand Prix weekend, and every team may use up to four drivers in a season. A team may also run two additional drivers in Free Practice sessions, which are often used to test potential new drivers for a career as a Formula One driver or gain experienced drivers to evaluate the car. Most drivers are contracted for at least the duration of a season, with driver changes taking place in-between seasons, in comparison to early years when drivers often competed on an ad hoc basis from race to race. Each competitor must be in the possession of a FIA Super Licence to compete in a Grand Prix, which is issued to drivers who have met the criteria of success in junior motorsport categories and having achieved 300 kilometres (190 mi) of running in a Formula One car. Drivers may also be issued a Super Licence by the World Motor Sport Council if they fail to meet the criteria. Although most drivers earn their seat on ability, commercial considerations also come into play with teams having to satisfy sponsors and financial demands.
Teams also contract test and reserve drivers, to stand in for regular drivers when necessary and to develop the team's car; although with the reduction in testing, the reserve drivers' role mainly takes place on a simulator, such as rFactor Pro, which is used by most of the F1 teams.
Each driver chooses an unassigned number from 2 to 99 (excluding 17 which was retired following the death of Jules Bianchi) upon entering Formula One and keeps that number during their time in the series. The number one is reserved for the reigning Drivers' Champion, who retains their previous number and may choose to use it instead of the number one. At the onset of the championship, numbers were allocated by race organisers on an ad hoc basis from race to race.
Permanent numbers were introduced in 1973 to take effect in 1974, when teams were allocated numbers in ascending order based on the Constructors' Championship standings at the end of the 1973 season. The teams would hold those numbers from season to season with the exception of the team with the World Drivers' Champion, which would swap its numbers with the one and two of the previous champion's team. New entrants were allocated spare numbers, with the exception of the number 13 which had been unused since 1976.
As teams kept their numbers for long periods of time, car numbers became associated with a team, such as Ferrari's 27 and 28. A different system was used from 1996 to 2013: at the start of each season, the current Drivers' Champion was designated number one, their teammate number two, and the rest of the teams assigned ascending numbers according to previous season's Constructors' Championship order.
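The current allocation described earlier (a free choice from 2 to 99, 17 retired, number 1 reserved for the reigning champion) can be restated as a simple validity check. The sketch below is illustrative only; the function name and data structures are assumptions, not any official FIA tooling.

```python
# Minimal sketch of the current driver-number rules described above.
# Helper names and data structures are illustrative, not an official FIA API.

RETIRED_NUMBERS = {17}  # retired following the death of Jules Bianchi

def is_valid_number_choice(requested: int, numbers_in_use: set[int],
                           is_reigning_champion: bool = False) -> bool:
    """Return True if a driver may take `requested` as their career number."""
    if requested == 1:
        # Number 1 is reserved for the reigning Drivers' Champion.
        return is_reigning_champion
    if requested in RETIRED_NUMBERS:
        return False
    if not 2 <= requested <= 99:
        return False
    return requested not in numbers_in_use

# Example: 17 is retired, 44 is already taken, 63 is free.
print(is_valid_number_choice(17, {44}))  # False
print(is_valid_number_choice(63, {44}))  # True
```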
As of the conclusion of the 2022 Championship, a total of 34 separate drivers have won the World Drivers' Championship, with Michael Schumacher and Lewis Hamilton holding the record for most championships with seven. Hamilton also holds the record for the most race wins, having passed Schumacher's total of 91 in 2020. Jochen Rindt is the only posthumous World Champion, after his points total was not surpassed despite his fatal accident at the 1970 Italian Grand Prix, with four races still remaining in the season. Drivers from the United Kingdom have been the most successful in the sport, with 18 championships among 10 drivers, and 308 wins.
Most F1 drivers start in kart racing competitions, and then come up through traditional European single-seater series like Formula Ford and Formula Renault to Formula 3, and finally the GP2 Series. GP2 started in 2005, replacing Formula 3000, which itself had replaced Formula Two as the last major stepping-stone into F1. GP2 was rebranded as the FIA Formula 2 Championship in 2017. Most champions from this level graduate into F1, but 2006 GP2 champion Lewis Hamilton became the first F2, F3000 or GP2 champion to win the Formula One drivers' title in 2008.
Drivers are not required to have competed at this level before entering Formula One. British F3 has supplied many F1 drivers, with champions including Nigel Mansell, Ayrton Senna and Mika Häkkinen moving straight from that series to Formula One, and Max Verstappen made his F1 debut following a single season in European F3. More rarely, a driver may be picked from an even lower level, as was the case with 2007 World Champion Kimi Räikkönen, who went straight from Formula Renault to F1.
American open-wheel car racing has also contributed to the Formula One grid. CART champions Mario Andretti and Jacques Villeneuve became F1 World Champions, while Juan Pablo Montoya won seven races in F1. Other CART (also known as ChampCar) champions, such as Michael Andretti and Alessandro Zanardi, won no races in F1. Other drivers have taken different paths to F1; Damon Hill raced motorbikes, and Michael Schumacher raced in sports cars, albeit after climbing through the junior single-seater ranks. Former F1 driver Paul di Resta raced in DTM until he was signed by Force India in 2011.
The number of Grands Prix held in a season has varied over the years. The inaugural 1950 world championship season comprised only seven races, while the 2019 season contained 21 races. There were no more than 11 Grands Prix per season during the early decades of the championship, although a large number of non-championship Formula One events also took place. The number of Grands Prix increased to an average of 16 to 17 by the late 1970s, while non-championship events ended in 1983. More Grands Prix began to be held in the 2000s, and recent seasons have seen an average of 19 races. In 2021 and 2022, the calendar peaked at 22 events, the highest number of world championship races in one season.
Six of the original seven races took place in Europe; the only non-European race that counted towards the World Championship in 1950 was the Indianapolis 500, which was held to different regulations and later replaced by the United States Grand Prix. The F1 championship gradually expanded to other non-European countries. Argentina hosted the first South American Grand Prix in 1953, and Morocco hosted the first African World Championship race in 1958. Asia and Oceania followed (Japan in 1976 and Australia in 1985), and the first race in the Middle East was held in 2004. The 19 races of the 2014 season were spread over every populated continent except for Africa, with 10 Grands Prix held outside Europe.
Some of the Grands Prix pre-date the formation of the World Championship, such as the French Grand Prix, and were incorporated into the championship as Formula One races in 1950. The British and Italian Grands Prix are the only events to have been held in every Formula One season; other long-running races include the Belgian, German, and French Grands Prix. The Monaco Grand Prix was first held in 1929 and has run continuously since 1955 (with the exception of 2020); it is widely considered to be one of the most important and prestigious automobile races in the world.
Grands Prix were traditionally run during the day until the inaugural Singapore Grand Prix hosted the first Formula One night race in 2008, which was followed by the day–night Abu Dhabi Grand Prix in 2009 and the Bahrain Grand Prix, which converted to a night race in 2014. Other Grands Prix in Asia have had their start times adjusted to benefit the European television audience.
Since 2008, the Formula One Group has been targeting new "destination cities" to expand its global reach, with the aim of bringing races to countries that have not previously been involved in the sport. This initiative started with the 2008 Singapore Grand Prix.
A typical circuit features a stretch of straight road on which the starting grid is situated. The pit lane – where the drivers stop for tyres, aerodynamic adjustments and minor repairs (such as changing the car's nose due to front wing damage) during the race, where they retire from the race, and where the teams work on the cars before the race – is normally located next to the starting grid. The layout of the rest of the circuit varies widely, although in most cases the circuit runs in a clockwise direction. Those few circuits that run anticlockwise (and therefore have predominantly left-handed corners) can cause drivers neck problems, due to the enormous lateral forces generated by F1 cars pulling their heads in the opposite direction to normal. A single race requires hotel rooms to accommodate at least 5,000 visitors.
Most of the circuits currently in use are specially constructed for competition. The current street circuits are Monaco, Melbourne, Singapore, Baku, Miami and Jeddah, although races in other urban locations come and go (Las Vegas and Detroit, for example) and proposals for such races are often discussed – most recently a return to Las Vegas. The glamour and history of the Monaco race are the primary reasons why the circuit is still in use, even though it does not meet the strict safety requirements imposed on other tracks. Three-time World Champion Nelson Piquet famously described racing in Monaco as "like riding a bicycle around your living room".
Circuit design to protect the safety of drivers is becoming increasingly sophisticated, as exemplified by the Bahrain International Circuit, added in 2004 and designed – like most of F1's new circuits – by Hermann Tilke. Several of the new circuits in F1, especially those designed by Tilke, have been criticised as lacking the "flow" of such classics as Spa-Francorchamps and Imola. His redesign of the Hockenheim circuit in Germany, for example, while providing more capacity for grandstands and eliminating extremely long and dangerous straights, has been frowned upon by many who argue that part of the character of the Hockenheim circuit was its long and blinding straights into dark forest sections. These newer circuits, however, are generally agreed to meet the safety standards of modern Formula One better than the older ones.
The Circuit of the Americas in Austin, the Sochi Autodrom in Sochi and the Baku City Circuit in Azerbaijan have all been introduced as brand new tracks since 2012. In 2020, Algarve International Circuit debuted on the F1 calendar as the venue of the Portuguese Grand Prix, with the country having last hosted a race in 1996. In 2021, Circuit Zandvoort returned to the F1 calendar as the Dutch Grand Prix, having last hosted a race in 1985.
Modern Formula One cars are mid-engined, hybrid, semi-open cockpit, open-wheel single-seaters. The chassis is made largely of carbon-fibre composites, rendering it light but extremely stiff and strong. The whole car, including the driver but not fuel, weighs only 795 kg (1,753 lb) – the minimum weight set by the regulations. If the construction of the car is lighter than the minimum, it can be ballasted up to add the necessary weight. The race teams take advantage of this by placing this ballast at the extreme bottom of the chassis, thereby locating the centre of gravity as low as possible in order to improve handling and weight transfer.
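The ballast rule described above amounts to simple arithmetic: if the car and driver come in under the 795 kg minimum, the difference is added back as ballast placed as low as possible. The sketch below merely restates that rule in Python; the function name and the 780 kg example figure are illustrative assumptions, not regulation values.

```python
# Illustrative ballast calculation based on the minimum-weight rule described above.
MIN_WEIGHT_KG = 795  # regulation minimum: car plus driver, without fuel

def ballast_required(car_and_driver_kg: float) -> float:
    """Ballast needed (kg) to bring an underweight car up to the minimum."""
    return max(0.0, MIN_WEIGHT_KG - car_and_driver_kg)

print(ballast_required(780.0))  # 15.0 kg, typically placed low in the chassis
```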
The cornering speed of Formula One cars is largely determined by the aerodynamic downforce that they generate, which pushes the car down onto the track. This is provided by "wings" mounted at the front and rear of the vehicle, and by ground effect created by low air pressure under the flat bottom of the car. The aerodynamic design of the cars is very heavily constrained to limit performance. The previous generation of cars sported a large number of small winglets, "barge boards", and turning vanes designed to closely control the flow of the air over, under, and around the car.
The other major factor controlling the cornering speed of the cars is the design of the tyres. From 1998 to 2008, the tyres in Formula One were not "slicks" (tyres with no tread pattern) as in most other circuit racing series. Instead, each tyre had four large circumferential grooves on its surface designed to limit the cornering speed of the cars. Slick tyres returned to Formula One in the 2009 season. Suspension is double wishbone or multilink front and rear, with pushrod operated springs and dampers on the chassis – one exception being that of the 2009 specification Red Bull Racing car (RB5) which used pullrod suspension at the rear, the first car to do so since the Minardi PS01 in 2001. Ferrari used a pullrod suspension at both the front and rear in their 2012 car. Both Ferrari (F138) and McLaren (MP4-28) of the 2013 season used a pullrod suspension at both the front and the rear. In 2022, McLaren (MCL36) and Red Bull Racing (RB18) switched to a pullrod front suspension and push rod rear suspension.
Carbon-carbon disc brakes are used for reduced weight and increased frictional performance. These provide a very high level of braking performance and are usually the element that provokes the greatest reaction from drivers new to the formula.
In 2022, the technical regulations changed considerably in order to reduce the turbulence (commonly referred to as "dirty air") produced by the aerodynamics of the car. The changes include a redesigned front and rear wing, larger wheels with a lower tyre profile, wheel covers, small winglets, the banning of barge boards, and the reintroduction of ground-effect downforce production. These changes are intended to promote racing: cars lose less downforce when following another car, allowing them to follow at a much closer distance without the gap growing because of the turbulent air. (See 2022 Formula One World Championship Technical regulations)
Formula One cars must have four wheels made of the same metallic material, which must be one of two magnesium alloys specified by the FIA. Magnesium alloy wheels made by forging are used to achieve maximum unsprung rotating weight reduction. As of 2022, the wheels are covered with "spec" (Standardised) Wheel Covers, the wheel diameter has increased from 13 inches to 18 inches (reducing the "tyre profile"), and small winglets have been placed over the front tyres.
Starting with the 2014 Formula 1 season, the engines changed from 2.4-litre naturally aspirated V8s to turbocharged 1.6-litre V6 "power units", which derive a significant amount of their power from electric motors and incorporate extensive energy-recovery technology. Engines run on unleaded fuel closely resembling publicly available petrol. The oil, which lubricates and protects the engine from overheating, is very similar in viscosity to water. The 2006 generation of engines spun up to 20,000 rpm and produced over 580 kW (780 bhp). For 2007, engines were restricted to 19,000 rpm with limited development areas allowed, following the engine specification freeze in place since the end of 2006. For the 2009 Formula One season the engines were further restricted to 18,000 rpm.
A wide variety of technologies – including active suspension – are banned under the current regulations. Despite this, the current generation of cars can reach speeds in excess of 350 km/h (220 mph) at some circuits. The highest straight line speed recorded during a Grand Prix was 372.6 km/h (231.5 mph), set by Juan Pablo Montoya during the 2005 Italian Grand Prix. A BAR-Honda Formula One car, running with minimum downforce on a runway in the Mojave Desert, achieved a top speed of 415 km/h (258 mph) in 2006. According to Honda, the car fully met the FIA Formula One regulations.
Even with the limitations on aerodynamics, at 160 km/h (99 mph) aerodynamically generated downforce is equal to the weight of the car, and the oft-repeated claim that Formula One cars create enough downforce to "drive on the ceiling", while possible in principle, has never been put to the test. Downforce of 2.5 times the car's weight can be achieved at full speed. The downforce means that the cars can achieve a lateral force with a magnitude of up to 3.5 times that of the force of gravity (3.5g) in cornering. Consequently, the driver's head is pulled sideways with a force equivalent to the weight of 20 kg in corners. Such high lateral forces are enough to make breathing difficult and the drivers need supreme concentration and fitness to maintain their focus for the one to two hours that it takes to complete the race. A high-performance road car like the Enzo Ferrari only achieves around 1g.
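As a rough check of the cornering figures above: the text does not state the mass of the driver's head and helmet, so the 6 kg used below is an illustrative assumption; with it, 3.5 g of lateral acceleration works out to roughly the quoted 20 kg-equivalent load.

```python
# Back-of-the-envelope check of the cornering-load figure quoted above.
# Assumption: combined head-and-helmet mass of ~6 kg (illustrative, not from the text).
g = 9.81                      # m/s^2
head_and_helmet_kg = 6.0
lateral_g = 3.5

lateral_force_n = head_and_helmet_kg * lateral_g * g
equivalent_mass_kg = lateral_force_n / g   # same as head_and_helmet_kg * lateral_g
print(f"~{equivalent_mass_kg:.0f} kg-equivalent sideways load")  # ~21 kg
```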
As of 2019, each team may have no more than two cars available for use at any time. Each driver may use no more than four engines during a championship season unless they drive for more than one team. If more engines are used, they drop ten places on the starting grid of the event at which an additional engine is used. The only exception is where the engine is provided by a manufacturer or supplier taking part in its first championship season, in which case up to five may be used by a driver. Each driver may use no more than one gearbox for six consecutive events; every unscheduled gearbox change requires the driver to drop five places on the grid unless they failed to finish the previous race due to reasons beyond the team's control.
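The allocation rules in the paragraph above translate directly into a small penalty calculation. The sketch below simply restates those figures (four engines per driver, five for a first-season supplier, a ten-place drop for an extra engine, a five-place drop for an unscheduled gearbox change); the function name and structure are assumptions for illustration, not an official FIA implementation.

```python
# Illustrative restatement of the engine/gearbox allocation rules described above.
# Not an official implementation; names and structure are assumptions.

def grid_penalty(engines_used: int,
                 unscheduled_gearbox_change: bool,
                 new_manufacturer_first_season: bool = False,
                 dnf_beyond_team_control: bool = False) -> int:
    """Return the number of grid places a driver must drop."""
    penalty = 0
    engine_limit = 5 if new_manufacturer_first_season else 4
    if engines_used > engine_limit:
        penalty += 10  # ten places for taking an additional engine
    if unscheduled_gearbox_change and not dnf_beyond_team_control:
        penalty += 5   # five places per unscheduled gearbox change
    return penalty

print(grid_penalty(engines_used=5, unscheduled_gearbox_change=False))  # 10
print(grid_penalty(engines_used=4, unscheduled_gearbox_change=True))   # 5
```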
As of 2019, each driver is limited to three power units per season, before incurring grid penalties.
In March 2007, F1 Racing published its annual estimates of spending by Formula One teams. The total spending of all eleven teams in 2006 was estimated at $2.9 billion US. This was broken down as follows: Toyota $418.5 million, Ferrari $406.5 m, McLaren $402 m, Honda $380.5 m, BMW Sauber $355 m, Renault $324 m, Red Bull $252 m, Williams $195.5 m, Midland F1/Spyker-MF1 $120 m, Toro Rosso $75 m, and Super Aguri $57 million.
Costs vary greatly from team to team. Honda, Toyota, McLaren-Mercedes, and Ferrari were estimated to have spent approximately $200 million on engines in 2006, Renault spent approximately $125 million and Cosworth's 2006 V8 was developed for $15 million. In contrast to the 2006 season on which these figures are based, the 2007 sporting regulations banned all performance-related engine development.
Formula One teams pay entry fees of $500,000, plus $5,000 per point scored the previous year or $6,000 per point for the winner of the Constructors' Championship. Formula One drivers pay a FIA Super Licence fee, which in 2013 was €10,000 plus €1,000 per point.
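Those fee formulas can be written out directly. The sketch below only restates the amounts in the paragraph above (and the 2013 Super Licence rates); the helper names and example inputs are illustrative assumptions.

```python
# Restating the fee formulas above (entry fees plus the 2013 Super Licence rates).
# Helper names are illustrative, not official.

def team_entry_fee(points_last_year: int, won_constructors: bool) -> int:
    """FIA entry fee in US dollars: $500,000 base plus a per-point charge."""
    per_point = 6_000 if won_constructors else 5_000
    return 500_000 + per_point * points_last_year

def super_licence_fee_2013(points_last_year: int) -> int:
    """Driver Super Licence fee in euros at the 2013 rates quoted above."""
    return 10_000 + 1_000 * points_last_year

print(team_entry_fee(points_last_year=400, won_constructors=True))  # 2900000
print(super_licence_fee_2013(points_last_year=200))                 # 210000
```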
There have been controversies with the way profits are shared among the teams. The smaller teams have complained that the profits are unevenly shared, favouring established top teams. In September 2015, Force India and Sauber officially lodged a complaint with the European Union against Formula One questioning the governance and stating that the system of dividing revenues and determining the rules is unfair and unlawful.
The cost of building a brand-new permanent circuit can be up to hundreds of millions of dollars, while the cost of converting a public road, such as Albert Park, into a temporary circuit is much less. Permanent circuits, however, can generate revenue all year round from leasing the track for private races and other races, such as MotoGP. The Shanghai International Circuit cost over $300 million and the Istanbul Park circuit cost $150 million to build.
Formula One drivers earn some of the highest salaries of any drivers in auto racing. The highest-paid driver in 2021 was Lewis Hamilton, who received $55 million in salary from Mercedes AMG Petronas F1 – a record for any driver. The very top Formula One drivers are paid more than IndyCar or NASCAR drivers; however, earnings fall off sharply after the top three F1 drivers, and the majority of NASCAR racers make more money than their F1 counterparts. Most top IndyCar drivers are paid around a tenth of their Formula One counterparts.
In the second quarter of 2020, Formula One reported a loss of $122 million on revenue of $24 million. This was a result of the delayed start to the racing championship caused by the COVID-19 pandemic. The company had grossed revenue of $620 million in the same quarter the previous year.
The expense of Formula One has seen the FIA and the Formula One Commission attempt to create new regulations to lower the costs for a team to compete in the sport.
Following their purchase of the commercial rights to the sport in 2017, Liberty Media announced their vision for the future of Formula One at the 2018 Bahrain Grand Prix. The proposal identified five key areas, including streamlining the governance of the sport, emphasising cost-effectiveness, maintaining the sport's relevance to road cars and encouraging new manufacturers to enter the championship whilst enabling them to be competitive. Liberty cited 2021 as their target date as it coincided with the need to renew commercial agreements with the teams and the end of the seven-year cycle of engine development that started in 2014.
On 19 August 2020, it was announced that all 10 teams had signed the new Concorde Agreement. This came into effect at the start of the 2021 season and changed how prize money and TV revenue is distributed.
When I get out of the car, of course I'm thinking as well: 'Is this something we should do, travel the world, wasting resources?'
—Sebastian Vettel, former champion voicing concerns on Formula One's impact on climate change.
Formula One has launched a plan to become carbon neutral by 2030. By 2025, all events should become "sustainable", including eliminating single-use plastics and ensuring all waste is reused, recycled or composted.
A report conducted by Formula One estimated that the series was responsible for 256,000 tonnes of carbon dioxide emissions in the 2019 season, finding that 45% of emissions were from logistics and only 0.7% were from emissions from the cars themselves.
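Applying the percentages in that report to its headline total gives a sense of scale; the calculation below uses only the figures quoted above.

```python
# Simple arithmetic on the 2019 emissions figures quoted above.
total_tonnes = 256_000
logistics_share = 0.45
car_share = 0.007

print(f"Logistics: ~{total_tonnes * logistics_share:,.0f} t CO2")  # ~115,200 t
print(f"Cars:      ~{total_tonnes * car_share:,.0f} t CO2")        # ~1,792 t
```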
In January 2020, FIA and Formula One signed the United Nations "Sports for Climate Action" framework. After the signing was announced, FIA President Jean Todt said: "As an international Federation comprising 244 members in 140 countries and the leader in motor sport and mobility development, we are fully committed to global environmental protection. The signing of this UN Sports for Climate Action Framework reinforces the momentum that has been growing in our Federation for many years. Since the introduction of the hybrid power unit in F1 to the creation of the Environment and Sustainability Commission, the entire FIA community has been investing time, energy and financial resources to the benefit of environmental innovations. We aim to inspire greater awareness and best practice in sustainability motor sport standards."
From the 2021–22 season, all cars will increase the bio-component of their fuel, using E10 fuel rather than the 5.75% ethanol blend currently used. This percentage is expected to grow again in the future. In December 2020, the FIA claimed that it had developed a 100% sustainable fuel, to be used in Formula One from either 2025 or 2026, when new engine regulations come into force.
Prior to the beginning of the 2020 Formula One World Championship, F1 announced and launched the #WeRaceAsOne initiative. The initiative primarily focuses on visible displays of solidarity in the fight against racism on Grand Prix weekends, as well as the creation of a Formula 1 Task Force that will "listen to people from across the paddock [...] and make conclusions on the actions required to improve the diversity and opportunity in Formula 1 at all levels". The move stems from growing questions about racism and global inequalities perpetuated by the sport. The 70-year history of the World Championship has been dominated by European and white drivers, with the first (and only) black driver, Lewis Hamilton, having participated in the world championship since 2007.
In addition to organization-wide measures, individual teams have also acknowledged deficiencies in the sport's cultural and political activism. During the 2020 season, the Mercedes-AMG Petronas F1 Team conducted a study of its racial composition and found that approximately 95% of its workforce was white. Due to the results of the study, the team changed the car's livery to promote anti-racism messages and also launched the Accelerate 25 programme. The program vows that approximately 25% of all new hires to the team will come from underrepresented minorities in the sport until 2025.
The 20 drivers on the grid have also stood in solidarity on multiple occasions in the fight against racism both on and off the track. Following the murder of George Floyd in the summer of 2020, all twenty drivers wore "End Racism" shirts and took part in an organised anti-racism protest during the pre-race formalities. In the year since, Lewis Hamilton has remained vocal through his pre-race attire, with other drivers occasionally wearing clothing calling for change.
Formula One can be seen live or tape-delayed in almost every country and territory and attracts one of the largest global television audiences. The 2008 season attracted a global audience of 600 million people per race. The cumulative television audience was calculated to be 54 billion for the 2001 season, broadcast to 200 territories.
During the early 1990s, Formula One Group created a number of trademarks, an official logo, an official TV graphics package and in 2003, an official website for the sport in an attempt to give it a corporate identity.
TV stations all take what is known as the "World Feed", produced historically by the "host broadcaster" or, more recently, by Formula One Management (FOM). The host broadcaster either had one feed for all, or two separate feeds – a feed for local viewers and a feed for international viewers. The one-size-fits-all approach meant that there was bias towards a certain team or driver during the event, which led to viewers missing out on more important action and incidents, while the two-feed approach meant that replays (for when returning from an ad break) and locally biased action could be overlaid on the local feed while the international feed was left unaffected.
The only station that differed from this set-up was "DF1" (rebranded to "Premiere", then to "Sky Deutschland") – a German channel which offered all sessions live and interactively, with features such as the onboard and pit-lane channels. This service was purchased by Bernie Ecclestone at the end of 1996 and became F1 Digital Plus, which was made more widely available around Europe until the end of 2002, when the cost of the digital interactive service was deemed too high.
On 12 January 2011, F1 announced that it would adopt the HD format for the 2011 season.
It was announced on 29 July 2011 that Sky Sports and the BBC would team up to show F1 races from 2012 to 2018. Sky launched a dedicated channel, Sky Sports F1, which covered all races live without commercial interruption, as well as live practice and qualifying sessions, along with F1 programming including interviews, archive action and magazine shows. In 2012 the BBC broadcast live coverage of half of the races in the season. The BBC ended its television contract after the 2015 season, three years earlier than planned. The free-to-air TV rights were picked up by Channel 4 until the end of the 2018 season. Sky Sports F1 coverage remained unaffected, and BBC Radio 5 Live and 5 Sports Extra coverage was extended until 2021. As of 2022, BBC Radio 5 Live and 5 Sports Extra have rights to such coverage until 2024.
While Sky Sports and Channel 4 are the two major broadcasters of Formula 1 in the UK, other countries' broadcasters also show Formula One races, many using commentary from either Sky Sports or Channel 4. In most of Asia (excluding China), the two main broadcasters of Formula One are the Fox network and Star Sports (in India). In the United States, ESPN holds the official rights to broadcast the sport, while ABC also holds free-to-air rights for some races under the ESPN on ABC banner. In Germany, Austria and Switzerland, the two main broadcasters are RTL Germany and n-TV. In China, multiple channels broadcast Formula One, including CCTV, Tencent, Guangdong TV and Shanghai TV. Currently in France, the only channel that broadcasts Formula One is the pay-TV channel Canal+, which has renewed its broadcasting rights until 2024.
The official Formula One website has live timing charts that can be used during the race to follow the leaderboard in real time. An official application has been available for the Apple App Store since 2009, and on Google Play since 2011, that shows users a real-time feed of driver positions, timing and commentary. On 26 November 2017 Formula One unveiled a new logo, which replaced the previous "flying one" in use since 1993.
In March 2018, FOM announced the launch of F1 TV, an over-the-top (OTT) streaming platform that lets viewers watch multiple simultaneous video feeds and timing screens in addition to traditional directed race footage and commentary.
Currently, the terms "Formula One race" and "World Championship race" are effectively synonymous. Since 1984, every Formula One race has counted towards the World Championship, and every World Championship race has been run to Formula One regulations. However, the two terms are not interchangeable.
The distinction is most relevant when considering career summaries and all-time lists. For example, in the List of Formula One drivers, Clemente Biondetti is shown with a single race against his name. Biondetti actually competed in four Formula One races in 1950, but only one of these counted for the World Championship.
In the earlier history of Formula One, many races took place outside the World Championship, and local championships run to Formula One regulations also occurred. These events often took place on circuits that were not always suitable for the World Championship and featured local cars and drivers as well as those competing in the championship.
In the early years of Formula One, before the world championship was established, there were around twenty races held from late spring to early autumn in Europe, although not all of these were considered significant. Most competitive cars came from Italy, particularly Alfa Romeo. After the start of the world championship, these non-championship races continued. In the 1950s and 1960s, there were many Formula One races which did not count towards the World Championship; in 1950, a total of twenty-two Formula One races were held, of which only six counted towards the World Championship. In 1952 and 1953, when the world championship was run to Formula Two regulations, non-championship events were the only Formula One races that took place.
Some races, particularly in the UK, including the Race of Champions, Oulton Park International Gold Cup and the International Trophy, were attended by the majority of the world championship contenders. Other smaller events were regularly held in locations not part of the championship, such as the Syracuse and Danish Grands Prix, although these attracted only a small number of the championship teams and relied on private entries and lower-formula cars to make up the grid. These became less common through the 1970s, and 1983 saw the last non-championship Formula One race: the 1983 Race of Champions at Brands Hatch, won by reigning World Champion Keke Rosberg in a Williams-Cosworth in a close fight with American Danny Sullivan.
South Africa's flourishing domestic Formula One championship ran from 1960 through to 1975. The front-running cars in the series had recently been retired from the world championship, although there was also a healthy selection of locally built or modified machines.
The DFV helped make the UK domestic Formula One championship possible between 1978 and 1980. As in South Africa a decade before, second-hand cars from manufacturers like Lotus and Fittipaldi Automotive were the order of the day, although some, such as the March 781, were built specifically for the series. In 1980, the series saw South African Desiré Wilson become the only woman to win a Formula One race when she triumphed at Brands Hatch in a Wolf WR3. | [
{
"paragraph_id": 0,
"text": "Formula One (more commonly known as Formula 1 or F1) is the highest class of international racing for open-wheel single-seater formula racing cars sanctioned by the Fédération Internationale de l'Automobile (FIA). The FIA Formula One World Championship has been one of the premier forms of racing around the world since its inaugural season in 1950. The word formula in the name refers to the set of rules to which all participants' cars must conform. A Formula One season consists of a series of races, known as Grands Prix. Grands Prix take place in multiple countries and continents around the world on either purpose-built circuits or closed public roads.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A points system is used at Grands Prix to determine two annual World Championships: one for the drivers, and one for the constructors (the teams). Each driver must hold a valid Super Licence, the highest class of racing licence issued by the FIA, and the races must be held on grade one tracks, the highest grade-rating issued by the FIA for tracks.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Formula One cars are the fastest regulated road-course racing cars in the world, owing to very high cornering speeds achieved through generating large amounts of aerodynamic downforce. Much of this downforce is generated by front and rear wings, which have the side effect of causing severe turbulence behind each car. The turbulence reduces the downforce generated by the cars following directly behind, making it hard to overtake. Major changes made to the cars for the 2022 season have resulted in greater use of ground effect aerodynamics and modified wings to reduce the turbulence behind the cars, with the goal of making overtaking easier. The cars are dependent on electronics, aerodynamics, suspension and tyres. Traction control, launch control, and automatic shifting, plus other electronic driving aids, were first banned in 1994. They were briefly reintroduced in 2001, and have more recently been banned since 2004 and 2008, respectively.",
"title": ""
},
{
"paragraph_id": 3,
"text": "With the average annual cost of running a team – designing, building, and maintaining cars, pay, transport – being approximately £220,000,000 (or $265,000,000), its financial and political battles are widely reported. On 23 January 2017, Liberty Media completed its acquisition of the Formula One Group, from private-equity firm CVC Capital Partners for £6.4bn ($8bn).",
"title": ""
},
{
"paragraph_id": 4,
"text": "Formula One originated from the European Motor Racing Championships of the 1920s and 1930s. The formula consists of a set of rules that all participants' cars must follow. Formula One was a new formula agreed upon during 1946 to officially become effective from 1st January 1947. The first Grand Prix in accordance with the new regulations was the 1946 Turin Grand Prix anticipating the official start of the formula.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Before World War II, a number of Grand Prix racing organisations had made suggestions for a new championship to replace the European Championship before but due to the suspension of racing during the conflict, the new International Formula for cars did not become formalised until 1946, to become effective from 1st January 1947. The new World Championship was instituted to commence in 1950.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The first world championship race took place at Silverstone Circuit in the United Kingdom on 13 May 1950. Giuseppe Farina, competing for Alfa Romeo, won the first Drivers' World Championship, narrowly defeating his teammate Juan Manuel Fangio. Fangio went on to win the championship in 1951, 1954, 1955, 1956, and 1957. This set the record for the most World Championships won by a single driver, a record that stood for 46 years until Michael Schumacher won his sixth championship in 2003.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "A Constructors' Championship was added in the 1958 season. Stirling Moss, despite being regarded as one of the greatest Formula One drivers in the 1950s and 1960s, never won the Formula One championship. Between 1955 and 1961, Moss finished second place in the championship four times and in third place the other three times. Fangio won 24 of the 52 races he entered – still the record for the highest Formula One wins percentage by an individual driver. National championships existed in South Africa and the UK in the 1960s and 1970s. Non-championship Formula One events were held by promoters for many years. Due to the increasing cost of competition, the last of these was held in 1983.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "This era featured teams managed by road-car manufacturers, such as: Alfa Romeo, Ferrari, Mercedes-Benz and Maserati. The first seasons featured pre-war cars like Alfa Romeo‘s 158, which were front-engined, with narrow tyres and 1.5-litre supercharged or 4.5-litre naturally aspirated engines. The 1952 and 1953 seasons were run to Formula Two regulations, for smaller, less powerful cars, due to concerns over the lack of Formula One cars available. When a new Formula One formula for engines limited to 2.5 litres was reinstated to the world championship for 1954, Mercedes-Benz introduced their W196. The W196 featured things never seen on Formula One cars before, such as: desmodromic valves, fuel injection and enclosed streamlined bodywork. Mercedes drivers won the championship for the next two years, before the team withdrew from all motorsport competitions due to the 1955 Le Mans disaster.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The first major technological development in the sport was Bugatti's introduction of mid-engined cars. Jack Brabham, the world champion in 1959, 1960, and 1966, soon proved the mid-engine's superiority over all other engine positions. By 1961 all teams had switched to mid-engined cars. The Ferguson P99, a four-wheel drive design, was the last front-engined Formula One car to enter a world championship race. It was entered in the 1961 British Grand Prix, the only front-engined car to compete that year.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "During 1962, Lotus introduced a car with an aluminium-sheet monocoque chassis instead of the traditional space-frame design. This proved to be the greatest technological breakthrough since the introduction of mid-engined cars.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 1968 sponsorship was introduced to the sport. Team Gunston became the first team to run cigarette sponsorship on their Brabham cars, which privately entered in orange, brown and gold colours of Gunston cigarettes in the 1968 South African Grand Prix on 1 January 1968. Five months later, Lotus as the first works team followed this example when they entered their cars painted in the red, gold and white colours of the Imperial Tobacco's Gold Leaf livery at the 1968 Spanish Grand Prix.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Aerodynamic downforce slowly gained importance in car design with the appearance of aerofoils during the 1968 season. During the late 1970s, Lotus introduced ground-effect aerodynamics, previously used on Jim Hall's Chaparral 2J in 1970, that provided enormous downforce and greatly increased cornering speeds. The aerodynamic forces pressing the cars to the track were up to five times the car's weight. As a result, extremely stiff springs were needed to maintain a constant ride height, leaving the suspension virtually solid. This meant that the drivers were depending entirely on the tyres for any small amount of cushioning of the car and driver from irregularities of the road surface.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Beginning in the 1970s, Bernie Ecclestone rearranged the management of Formula One's commercial rights; he is widely credited with transforming the sport into the multibillion-dollar business it now is. When Ecclestone bought the Brabham team during 1971, he gained a seat on the Formula One Constructors' Association and during 1978, he became its president. Previously, the circuit owners controlled the income of the teams and negotiated with each individually; however, Ecclestone persuaded the teams to \"hunt as a pack\" through FOCA. He offered Formula One to circuit owners as a package, which they could take or leave. In return for the package, almost all that was required was to surrender trackside advertising.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The formation of the Fédération Internationale du Sport Automobile (FISA) during 1979 set off the FISA–FOCA war, during which FISA and its president Jean-Marie Balestre argued repeatedly with FOCA over television revenues and technical regulations. The Guardian said that Ecclestone and Max Mosley \"used [FOCA] to wage a guerrilla war with a very long-term aim in view\". FOCA threatened to establish a rival series, boycotted a Grand Prix and FISA withdrew its sanction from races. The result was the 1981 Concorde Agreement, which guaranteed technical stability, as teams were to be given reasonable notice of new regulations. Although FISA asserted its right to the TV revenues, it handed the administration of those rights to FOCA.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "FISA imposed a ban on ground-effect aerodynamics during 1983. By then, however, turbocharged engines, which Renault had pioneered in 1977, were producing over 520 kW (700 bhp) and were essential to be competitive. By 1986, a BMW turbocharged engine achieved a flash reading of 5.5 bar (80 psi) pressure, estimated to be over 970 kW (1,300 bhp) in qualifying for the Italian Grand Prix. The next year, power in race trim reached around 820 kW (1,100 bhp), with boost pressure limited to only 4.0 bar. These cars were the most powerful open-wheel circuit racing cars ever. To reduce engine power output and thus speeds, the FIA limited fuel tank capacity in 1984, and boost pressures in 1988, before banning turbocharged engines completely in 1989.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The development of electronic driver aids began during the 1980s. Lotus began to develop a system of active suspension, which first appeared during 1983 on the Lotus 92. By 1987, this system had been perfected and was driven to victory by Ayrton Senna in the Monaco Grand Prix that year. In the early 1990s, other teams followed suit and semi-automatic gearboxes and traction control were a natural progression. The FIA, due to complaints that technology was determining the outcome of races more than driver skill, banned many such aids for the 1994 season. This resulted in cars that were previously dependent on electronic aids becoming very \"twitchy\" and difficult to drive. Observers felt the ban on driver aids was in name only, as they \"proved difficult to police effectively\".",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The teams signed a second Concorde Agreement during 1992 and a third in 1997.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "On the track, the McLaren and Williams teams dominated the 1980s and 1990s. Brabham were also being competitive during the early part of the 1980s, winning two Drivers' Championships with Nelson Piquet. Powered by Porsche, Honda, and Mercedes-Benz, McLaren won sixteen championships (seven constructors' and nine drivers') in that period, while Williams used engines from Ford, Honda, and Renault to also win sixteen titles (nine constructors' and seven drivers'). The rivalry between racers Ayrton Senna and Alain Prost became F1's central focus during 1988 and continued until Prost retired at the end of 1993. Senna died at the 1994 San Marino Grand Prix after crashing into a wall on the exit of the notorious curve Tamburello. The FIA worked to improve the sport's safety standards since that weekend, during which Roland Ratzenberger also died in an accident during Saturday qualifying. No driver died of injuries sustained on the track at the wheel of a Formula One car for 20 years until the 2014 Japanese Grand Prix, where Jules Bianchi collided with a recovery vehicle after aquaplaning off the circuit, dying nine months later from his injuries. Since 1994, three track marshals have died, one at the 2000 Italian Grand Prix, the second at the 2001 Australian Grand Prix and the third at the 2013 Canadian Grand Prix.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Since the deaths of Senna and Ratzenberger, the FIA has used safety as a reason to impose rule changes that otherwise, under the Concorde Agreement, would have had to be agreed upon by all the teams – most notably the changes introduced for 1998. This so-called 'narrow track' era resulted in cars with smaller rear tyres, a narrower track overall, and the introduction of grooved tyres to reduce mechanical grip. The objective was to reduce cornering speeds and to produce racing similar to rainy conditions by enforcing a smaller contact patch between tyre and track. This, according to the FIA, was to reduce cornering speeds in the interest of safety.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Results were mixed, as the lack of mechanical grip resulted in the more ingenious designers clawing back the deficit with aerodynamic grip. This resulted in pushing more force onto the tyres through wings and aerodynamic devices, which in turn resulted in less overtaking as these devices tended to make the wake behind the car turbulent or 'dirty'. This prevented other cars from following closely due to their dependence on 'clean' air to make the car stick to the track. The grooved tyres also had the unfortunate side effect of initially being of a harder compound to be able to hold the grooved tread blocks, which resulted in spectacular accidents in times of aerodynamic grip failure, as the harder compound could not grip the track as well.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Drivers from McLaren, Williams, Renault (formerly Benetton), and Ferrari, dubbed the \"Big Four\", won every World Championship from 1984 to 2008. The teams won every Constructors' Championship from 1979 to 2008, as well as placing themselves as the top four teams in the Constructors' Championship in every season between 1989 and 1997, and winning every race but one (the 1996 Monaco Grand Prix) between 1988 and 1997. Due to the technological advances of the 1990s, the cost of competing in Formula One increased dramatically, thus increasing financial burdens. This, combined with the dominance of four teams (largely funded by big car manufacturers such as Mercedes-Benz), caused the poorer independent teams to struggle not only to remain competitive but to stay in business. This effectively forced several teams to withdraw.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Michael Schumacher and Ferrari won five consecutive Drivers' Championships (2000–2004) and six consecutive Constructors' Championships (1999–2004). Schumacher set many new records, including those for Grand Prix wins (91, since beaten by Lewis Hamilton), wins in a season (thirteen, since beaten by Max Verstappen), and most Drivers' Championships (seven, tied with Lewis Hamilton as of 2021). Schumacher's championship streak ended on 25 September 2005, when Renault driver Fernando Alonso became Formula One's youngest champion at that time (until Lewis Hamilton in 2008 and followed by Sebastian Vettel in 2010). During 2006, Renault and Alonso won both titles again. Schumacher retired at the end of 2006 after sixteen years in Formula One, but came out of retirement for the 2010 season, racing for the newly formed Mercedes works team, following the rebrand of Brawn GP.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "During this period, the championship rules were changed frequently by the FIA with the intention of improving the on-track action and cutting costs. Team orders, legal since the championship started during 1950, were banned during 2002, after several incidents, in which teams openly manipulated race results, generating negative publicity, most famously by Ferrari at the 2002 Austrian Grand Prix. Other changes included the qualifying format, the points scoring system, the technical regulations, and rules specifying how long engines and tyres must last. A \"tyre war\" between suppliers Michelin and Bridgestone saw lap times fall, although, at the 2005 United States Grand Prix at Indianapolis, seven out of ten teams did not race when their Michelin tyres were deemed unsafe for use, leading to Bridgestone becoming the sole tyre supplier to Formula One for the 2007 season by default. Bridgestone then went on to sign a contract on 20 December 2007 that officially made them the exclusive tyre supplier for the next three seasons.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "During 2006, Max Mosley outlined a \"green\" future for Formula One, in which efficient use of energy would become an important factor.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Starting in 2000, with Ford's purchase of Stewart Grand Prix to form the Jaguar Racing team, new manufacturer-owned teams entered Formula One for the first time since the departure of Alfa Romeo and Renault at the end of 1985. By 2006, the manufacturer teams – Renault, BMW, Toyota, Honda, and Ferrari – dominated the championship, taking five of the first six places in the Constructors' Championship. The sole exception was McLaren, which at the time was part-owned by Mercedes-Benz. Through the Grand Prix Manufacturers Association (GPMA), the manufacturers negotiated a larger share of Formula One's commercial profit and a greater say in the running of the sport.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In 2008 and 2009, Honda, BMW, and Toyota all withdrew from Formula One racing within the space of a year, blaming the economic recession. This resulted in the end of manufacturer dominance within the sport. The Honda F1 team went through a management buyout to become Brawn GP with Ross Brawn and Nick Fry running and owning the majority of the organisation. Brawn GP laid off hundreds of employees, but eventually won the year's world championships. BMW F1 was bought out by the original founder of the team, Peter Sauber. The Lotus F1 Team were another, formerly manufacturer-owned team that reverted to \"privateer\" ownership, together with the buy-out of the Renault team by Genii Capital investors. A link with their previous owners still survived, however, with their car continuing to be powered by a Renault engine until 2014.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "McLaren also announced that it was to reacquire the shares in its team from Mercedes-Benz (McLaren's partnership with Mercedes was reported to have started to sour with the McLaren Mercedes SLR road car project and tough F1 championships which included McLaren being found guilty of spying on Ferrari). Hence, during the 2010 season, Mercedes-Benz re-entered the sport as a manufacturer after its purchase of Brawn GP and split with McLaren after 15 seasons with the team.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "During the 2009 season of Formula One, the sport was gripped by the FIA–FOTA dispute. The FIA President Max Mosley proposed numerous cost-cutting measures for the following season, including an optional budget cap for the teams; teams electing to take the budget cap would be granted greater technical freedom, adjustable front and rear wings and an engine not subject to a rev limiter. The Formula One Teams Association (FOTA) believed that allowing some teams to have such technical freedom would have created a 'two-tier' championship, and thus requested urgent talks with the FIA. However, talks broke down and FOTA teams announced, with the exception of Williams and Force India, that 'they had no choice' but to form a breakaway championship series.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "On 24 June, an agreement was reached between Formula One's governing body and the teams to prevent a breakaway series. It was agreed teams must cut spending to the level of the early 1990s within two years; exact figures were not specified, and Max Mosley agreed he would not stand for re-election to the FIA presidency in October. Following further disagreements, after Max Mosley suggested he would stand for re-election, FOTA made it clear that breakaway plans were still being pursued. On 8 July, FOTA issued a press release stating they had been informed they were not entered for the 2010 season, and an FIA press release said the FOTA representatives had walked out of the meeting. On 1 August, it was announced FIA and FOTA had signed a new Concorde Agreement, bringing an end to the crisis and securing the sport's future until 2012.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "To compensate for the loss of manufacturer teams, four new teams were accepted entry into the 2010 season ahead of a much anticipated 'cost-cap'. Entrants included a reborn Team Lotus – which was led by a Malaysian consortium including Tony Fernandes, the boss of Air Asia; Hispania Racing – the first Spanish Formula One team; as well as Virgin Racing – Richard Branson's entry into the series following a successful partnership with Brawn the year before. They were also joined by the US F1 Team, which planned to run out of the United States as the only non-European-based team in the sport. Financial issues befell the squad before they even made the grid. Despite the entry of these new teams, the proposed cost-cap was repealed and these teams – who did not have the budgets of the midfield and top-order teams – ran around at the back of the field until they inevitably collapsed; HRT in 2012, Caterham (formerly Lotus) in 2014 and Manor (formerly Virgin then Marussia), having survived falling into administration in 2014, went under at the end of 2016.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "A major rule shake-up in 2014 saw the 2.4-litre naturally aspirated V8 engines replaced by 1.6-litre turbocharged hybrid power units. This prompted Honda to return to the sport in 2015 as the championship's fourth power unit manufacturer. Mercedes emerged as the dominant force after the rule shake-up, with Lewis Hamilton winning the championship closely followed by his main rival and teammate, Nico Rosberg, with the team winning 16 out of the 19 races that season. The team continued this form in the following two seasons, again winning 16 races in 2015 before taking a record 19 wins in 2016, with Hamilton claiming the title in the former year and Rosberg winning it in the latter by five points. The 2016 season also saw a new team, Haas, join the grid, while Max Verstappen became the youngest-ever race winner at the age of 18 in Spain.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "After revised aerodynamic regulations were introduced, the 2017 and 2018 seasons featured a title battle between Mercedes and Ferrari. However, Mercedes ultimately won the titles with multiple races to spare and continued to experience dominance in the next two years, eventually winning seven consecutive Drivers' Championships from 2014 to 2020 and eight consecutive Constructors' titles from 2014 to 2021. During this eight-year period between 2014 and 2021, 111 of the 160 races were won by a Mercedes driver, with Hamilton winning 81 of these races and taking six Drivers' Championships during this period to equal Schumacher's record of seven titles. In 2021, the Honda-powered Red Bull team began to seriously challenge Mercedes, with their driver Max Verstappen beating Hamilton to the Drivers' Championship after a season-long battle that saw the pair exchange the championship lead multiple times.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "This era has seen an increase in car manufacturer presence in the sport. After Honda's return as an engine manufacturer in 2015, Renault came back as a team in 2016 after buying back the Lotus F1 team. In 2018, Aston Martin and Alfa Romeo became Red Bull and Sauber's title sponsors, respectively. Sauber was rebranded as Alfa Romeo Racing for the 2019 season, while Racing Point part-owner Lawrence Stroll bought a stake in Aston Martin to rebrand the Racing Point team as Aston Martin for 2021. In August 2020, a new Concorde Agreement was signed by all ten F1 teams committing them to the sport until 2025, including a $145M budget cap for car development to support equal competition and sustainable development in the future.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "The COVID-19 pandemic forced the sport to adapt to budgetary and logistical limitations. A significant overhaul of the technical regulations intended to be introduced in the 2021 season was pushed back to 2022, with constructors instead using their 2020 chassis for two seasons and a token system limiting which parts could be modified was introduced. The start of the 2020 season was delayed by several months, and both it and 2021 seasons were subject to several postponements, cancellations and rescheduling of races due to the shifting restrictions on international travel. Many races took place behind closed doors and with only essential personnel present to maintain social distancing.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "In 2022, a major rule and car design change was announced by the F1 governing body, intended to promote closer racing through the use of ground effects, new aerodynamics, larger wheels with low-profile tires, and redesigned nose and wing regulations. The 2022 Constructors' and Drivers' Championships were won by Red Bull and Verstappen, respectively.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "A Formula One Grand Prix event spans a weekend. It typically begins with two free practice sessions on Friday, and one free practice on Saturday. Additional drivers (commonly known as third drivers) are allowed to run on Fridays, but only two cars may be used per team, requiring a race driver to give up their seat. A qualifying session is held after the last free practice session. This session determines the starting order for the race on Sunday.",
"title": "Racing and strategy"
},
{
"paragraph_id": 37,
"text": "Each driver may use no more than thirteen sets of dry-weather tyres, four sets of intermediate tyres, and three sets of wet-weather tyres during a race weekend.",
"title": "Racing and strategy"
},
{
"paragraph_id": 38,
"text": "For much of the sport's history, qualifying sessions differed little from practice sessions; drivers would have one or more sessions in which to set their fastest time, with the grid order determined by each driver's best single lap, with the fastest getting first place on the grid, referred to as pole position. From 1996 to 2002, the format was a one-hour shootout. This approach lasted until the end of 2002 before the rules were changed again because the teams were not running in the early part of the session to take advantage of better track conditions later on.",
"title": "Racing and strategy"
},
{
"paragraph_id": 39,
"text": "Grids were generally limited to 26 cars – if the race had more entries, qualification would also decide which drivers would start the race. During the early 1990s, the number of entries was so high that the worst-performing teams had to enter a pre-qualifying session, with the fastest cars allowed through to the main qualifying session. The qualifying format began to change in the early 2000s, with the FIA experimenting with limiting the number of laps, determining the aggregate time over two sessions, and allowing each driver only one qualifying lap.",
"title": "Racing and strategy"
},
{
"paragraph_id": 40,
"text": "The current qualifying system was adopted in the 2006 season. Known as \"knock-out\" qualifying, it is split into three periods, known as Q1, Q2, and Q3. In each period, drivers run qualifying laps to attempt to advance to the next period, with the slowest drivers being \"knocked out\" of qualification (but not necessarily the race) at the end of the period and their grid positions set within the rearmost five based on their best lap times. Drivers are allowed as many laps as they wish within each period. After each period, all times are reset, and only a driver's fastest lap in that period (barring infractions) counts. Any timed lap started before the end of that period may be completed and will count toward that driver's placement. The number of cars eliminated in each period is dependent on the total number of cars entered into the championship.",
"title": "Racing and strategy"
},
{
"paragraph_id": 41,
"text": "Currently, with 20 cars, Q1 runs for 18 minutes, and eliminates the slowest five drivers. During this period, any driver whose best lap takes longer than 107% of the fastest time in Q1 will not be allowed to start the race without permission from the stewards. Otherwise, all drivers proceed to the race albeit in the worst starting positions. This rule does not affect drivers in Q2 or Q3. In Q2, the 15 remaining drivers have 15 minutes to set one of the ten fastest times and proceed to the next period. Finally, Q3 lasts 12 minutes and sees the remaining ten drivers decide the first ten grid positions. At the beginning of the 2016 Formula 1 season, the FIA introduced a new qualifying format, whereby drivers were knocked out every 90 seconds after a certain amount of time had passed in each session. The aim was to mix up grid positions for the race, but due to unpopularity, the FIA reverted to the above qualifying format for the Chinese GP, after running the format for only two races.",
"title": "Racing and strategy"
},
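As an aside for readers who want to see the arithmetic, the 107% cut-off and the knock-out elimination described in the two preceding paragraphs can be expressed in a few lines of code. The following Python sketch is illustrative only: the function name, the input format and the 20-car/five-elimination figures are assumptions taken from the description above, not part of any official system.

    def q1_results(best_laps, eliminate=5, cutoff_factor=1.07):
        """best_laps: dict mapping driver name -> best Q1 lap time in seconds."""
        fastest = min(best_laps.values())
        cutoff = fastest * cutoff_factor  # the 107% threshold
        # Drivers slower than 107% of the fastest Q1 time need stewards' permission to race.
        outside_107 = [d for d, t in best_laps.items() if t > cutoff]
        # Order drivers from fastest to slowest; the slowest `eliminate` drivers are knocked out.
        order = sorted(best_laps, key=best_laps.get)
        advancing, knocked_out = order[:-eliminate], order[-eliminate:]
        return advancing, knocked_out, outside_107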
{
"paragraph_id": 42,
"text": "Each car is allocated one set of the softest tyres for use in Q3. The cars that qualify for Q3 must return them after Q3; the cars that do not qualify for Q3 can use them during the race. As of 2022, all drivers are given a free choice of tyre to use at the start of the Grand Prix, whereas in previous years only the drivers that did not participate in Q3 had free tyre choice for the start of the race. Any penalties that affect grid position are applied at the end of qualifying. Grid penalties can be applied for driving infractions in the previous or current Grand Prix, or for changing a gearbox or engine component. If a car fails scrutineering, the driver will be excluded from qualifying but will be allowed to start the race from the back of the grid at the race steward's discretion.",
"title": "Racing and strategy"
},
{
"paragraph_id": 43,
"text": "2021 saw the trialling of a 'sprint qualifying' race on the Saturday of three race weekends, with the intention of testing the new approach to qualifying. The traditional qualifying would determine the starting order for the sprint, and the result of the sprint would then determine the start order for the Grand Prix. The system returned for the 2022 season, now titled the 'sprint'. From 2023, sprint races no longer impacted the start order for the main race, which would be determined by traditional qualifying. Sprints would have their own qualifying session, titled the 'sprint shootout'; such a system made its debut at the 2023 Azerbaijan Grand Prix and is set to be used throughout all sprint sessions in place of the traditional second free practice session. Sprint qualifying sessions are run much shorter than traditional qualifying, and each session required teams to fit new tyres - mediums for SQ1 and SQ2, and softs for SQ3 - otherwise they cannot participate in the session.",
"title": "Racing and strategy"
},
{
"paragraph_id": 44,
"text": "The race begins with a warm-up lap, after which the cars assemble on the starting grid in the order they qualified. This lap is often referred to as the formation lap, as the cars lap in formation with no overtaking (although a driver who makes a mistake may regain lost ground). The warm-up lap allows drivers to check the condition of the track and their car, gives the tyres a chance to warm up to increase traction and grip, and also gives the pit crews time to clear themselves and their equipment from the grid for the race start.",
"title": "Racing and strategy"
},
{
"paragraph_id": 45,
"text": "Once all the cars have formed on the grid, after the medical car positions itself behind the pack, a light system above the track indicates the start of the race: five red lights are illuminated at intervals of one second; they are all then extinguished simultaneously after an unspecified time (typically less than 3 seconds) to signal the start of the race. The start procedure may be abandoned if a driver stalls on the grid or on the track in an unsafe position, signalled by raising their arm. If this happens, the procedure restarts: a new formation lap begins with the offending car removed from the grid. The race may also be restarted in the event of a serious accident or dangerous conditions, with the original start voided. The race may be started from behind the Safety Car if race control feels a racing start would be excessively dangerous, such as extremely heavy rainfall. As of the 2019 season, there will always be a standing restart. If due to heavy rainfall a start behind the safety car is necessary, then after the track has dried sufficiently, drivers will form up for a standing start. There is no formation lap when races start behind the Safety Car.",
"title": "Racing and strategy"
},
{
"paragraph_id": 46,
"text": "Under normal circumstances, the winner of the race is the first driver to cross the finish line having completed a set number of laps. Race officials may end the race early (putting out a red flag) due to unsafe conditions such as extreme rainfall, and it must finish within two hours, although races are only likely to last this long in the case of extreme weather or if the safety car is deployed during the race. When a situation justifies pausing the race without terminating it, the red flag is deployed; since 2005, a ten-minute warning is given before the race is resumed behind the safety car, which leads the field for a lap before it returns to the pit lane (before then the race resumed in race order from the penultimate lap before the red flag was shown).",
"title": "Racing and strategy"
},
{
"paragraph_id": 47,
"text": "In the 1950s, race distances varied from 300 km (190 mi) to 600 km (370 mi). The maximum race length was reduced to 400 km (250 mi) in 1966 and 325 km (202 mi) in 1971. The race length was standardised to the current 305 km (190 mi) in 1989. However, street races like Monaco have shorter distances, to keep under the two-hour limit.",
"title": "Racing and strategy"
},
{
"paragraph_id": 48,
"text": "Drivers may overtake one another for position over the course of the race. If a leader comes across a backmarker (slower car) who has completed fewer laps, the back marker is shown a blue flag telling them that they are obliged to allow the leader to overtake them. The slower car is said to be \"lapped\" and, once the leader finishes the race, is classified as finishing the race \"one lap down\". A driver can be lapped numerous times, by any car in front of them. A driver who fails to complete more than 90% of the race distance is shown as \"not classified\" in the results.",
"title": "Racing and strategy"
},
{
"paragraph_id": 49,
"text": "Throughout the race, drivers may make pit stops to change tyres and repair damage (from 1994 to 2009 inclusive, they could also refuel). Different teams and drivers employ different pit stop strategies in order to maximise their car's potential. Three dry tyre compounds, with different durability and adhesion characteristics, are available to drivers. Over the course of a race, drivers must use two of the three available compounds. The different compounds have different levels of performance and choosing when to use which compound is a key tactical decision to make. Different tyres have different colours on their sidewalls; this allows spectators to understand the strategies.",
"title": "Racing and strategy"
},
{
"paragraph_id": 50,
"text": "Under wet conditions, drivers may switch to one of two specialised wet weather tyres with additional grooves (one \"intermediate\", for mild wet conditions, such as after recent rain, one \"full wet\", for racing in or immediately after rain). A driver must make at least one stop to use two tyre compounds; up to three stops are typically made, although further stops may be necessary to fix damage or if weather conditions change. If rain tyres are used, drivers are no longer obliged to use both types of dry tyres.",
"title": "Racing and strategy"
},
{
"paragraph_id": 51,
"text": "This role involves generally managing the logistics of each F1 Grand Prix, inspecting cars in parc fermé before a race, enforcing FIA rules, and controlling the lights which start each race. As the head of the race officials, the race director also plays a large role in sorting disputes among teams and drivers. Penalties, such as drive-through penalties (and stop-and-go penalties), demotions on a pre-race start grid, race disqualifications, and fines can all be handed out should parties break regulations. As of 2023, the race director is Niels Wittich, with Herbie Blash as a permanent advisor.",
"title": "Racing and strategy"
},
{
"paragraph_id": 52,
"text": "In the event of an incident that risks the safety of competitors or trackside race marshals, race officials may choose to deploy the safety car. This in effect suspends the race, with drivers following the safety car around the track at its speed in race order, with overtaking not permitted. Cars that have been lapped may, during the safety car period and depending on circumstances permitted by the race director, be allowed to un-lap themselves in order to ensure a smoother restart and to avoid blue flags being immediately thrown upon the resumption of the race with many of the cars in very close proximity to each other. The safety car circulates until the danger is cleared; after it comes in, the race restarts with a \"rolling start\". Pit stops are permitted under the safety car. Since 2000, the main safety car driver has been German ex-racing driver Bernd Mayländer. On the lap in which the safety car returns to the pits, the leading car takes over the role of the safety car until the timing line. After crossing this line, drivers are allowed to start racing for track position once more. Mercedes-Benz supplies Mercedes-AMG models to Formula One to use as the safety cars. From 2021 onwards, Aston Martin supplies the Vantage to Formula One to use as the safety car, sharing the duty with Mercedes-Benz.",
"title": "Racing and strategy"
},
{
"paragraph_id": 53,
"text": "Flags specifications and usage are prescribed by Appendix H of the FIA's International Sporting Code.",
"title": "Racing and strategy"
},
{
"paragraph_id": 54,
"text": "The format of the race has changed little through Formula One's history. The main changes have revolved around what is allowed at pit stops. In the early days of Grand Prix racing, a driver would be allowed to continue a race in their teammate's car should theirs develop a problem – in the modern era, cars are so carefully fitted to drivers that this has become impossible. In recent years, the emphasis has been on changing refuelling and tyre change regulations.",
"title": "Racing and strategy"
},
{
"paragraph_id": 55,
"text": "Since the 2010 season, refuelling – which was reintroduced in 1994 – has not been allowed, to encourage less tactical racing following safety concerns. The rule requiring both compounds of tyre to be used during the race was introduced in 2007, again to encourage racing on the track. The safety car is another relatively recent innovation that reduced the need to deploy the red flag, allowing races to be completed on time for a growing international live television audience.",
"title": "Racing and strategy"
},
{
"paragraph_id": 56,
"text": "*A driver must finish within the top ten to receive a point for setting the fastest lap of the race. If the driver who set the fastest lap finishes outside of the top ten, then the point for fastest lap will not be awarded for that race.",
"title": "Racing and strategy"
},
{
"paragraph_id": 57,
"text": "Various systems for awarding championship points have been used since 1950. The current system, in place since 2010, awards the top ten cars points in the Drivers' and Constructors' Championships, with the winner receiving 25 points. All points won at each race are added up, and the driver and constructor with the most points at the end of the season are crowned World Champions. Regardless of whether a driver stays with the same team throughout the season, or switches teams, all points earned by them count for the Drivers' Championship.",
"title": "Racing and strategy"
},
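To make the points arithmetic concrete, the sketch below converts a finishing position into championship points. The 25-18-15-12-10-8-6-4-2-1 scale for the top ten and the single fastest-lap point (only awarded to a top-ten finisher, as the footnote above notes) are the commonly cited values under the post-2010 system; treat them as assumptions rather than something specified in this text.

    POINTS = {1: 25, 2: 18, 3: 15, 4: 12, 5: 10, 6: 8, 7: 6, 8: 4, 9: 2, 10: 1}

    def race_points(position, fastest_lap=False):
        pts = POINTS.get(position, 0)  # positions outside the top ten score nothing
        if fastest_lap and position <= 10:
            pts += 1  # the fastest-lap bonus only counts for a top-ten finisher
        return pts

    # Example: a win with the fastest lap (26) plus a fourth place (12) gives 38 points.
    print(race_points(1, fastest_lap=True) + race_points(4))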
{
"paragraph_id": 58,
"text": "A driver must be classified in order to receive points, as of 2022, a driver must complete at least 90% of the race distance in order to receive points. Therefore, it is possible for a driver to receive points even if they retired before the end of the race.",
"title": "Racing and strategy"
},
{
"paragraph_id": 59,
"text": "From some time between the 1977 and 1980 seasons to the end of the 2021 season if less than 75% of the race laps were completed by the winner, then only half of the points listed in the table were awarded to the drivers and constructors. This has happened on only five occasions in the history of the championship, and it had a notable influence on the final standing of the 1984 season. The last occurrence was at the 2021 Belgian Grand Prix when the race was called off after just three laps behind a safety car due to torrential rain. The half points rule was replaced by a distance-dependent gradual scale system for 2022.",
"title": "Racing and strategy"
},
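Continuing the hypothetical race_points helper from the earlier sketch, the pre-2022 half-points rule described above amounts to a single ratio check. The 75% threshold comes from the paragraph; the function and its arguments are otherwise illustrative.

    def awarded_points(position, winner_laps, scheduled_laps):
        full = race_points(position)
        # Before 2022: only half points if the winner covered less than 75% of the race laps.
        if winner_laps / scheduled_laps < 0.75:
            return full / 2
        return full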
{
"paragraph_id": 60,
"text": "A Formula One constructor is the entity credited for designing the chassis and the engine. If both are designed by the same company, that company receives sole credit as the constructor (e.g., Ferrari). If they are designed by different companies, both are credited, and the name of the chassis designer is placed before that of the engine designer (e.g., McLaren-Mercedes). All constructors are scored individually, even if they share either chassis or engine with another constructor (e.g., Williams-Ford, Williams-Honda in 1983).",
"title": "Constructors"
},
{
"paragraph_id": 61,
"text": "Since 1981, Formula One teams have been required to build the chassis in which they compete, and consequently the distinction between the terms \"team\" and \"constructor\" became less pronounced, though engines may still be produced by a different entity. This requirement distinguishes the sport from series such as the IndyCar Series which allows teams to purchase chassis, and \"spec series\" such as Formula 2 which require all cars be kept to an identical specification. It also effectively prohibits privateers, which were common even in Formula One well into the 1970s.",
"title": "Constructors"
},
{
"paragraph_id": 62,
"text": "The sport's debut season, 1950, saw eighteen teams compete, but due to high costs, many dropped out quickly. In fact, such was the scarcity of competitive cars for much of the first decade of Formula One that Formula Two cars were admitted to fill the grids. Ferrari is the oldest Formula One team, the only still-active team which competed in 1950.",
"title": "Constructors"
},
{
"paragraph_id": 63,
"text": "Early manufacturer involvement came in the form of a \"factory team\" or \"works team\" (that is, one owned and staffed by a major car company), such as those of Alfa Romeo, Ferrari, or Renault. Ferrari holds the record for having won the most Constructors' Championships (sixteen).",
"title": "Constructors"
},
{
"paragraph_id": 64,
"text": "Companies such as Climax, Repco, Cosworth, Hart, Judd and Supertec, which had no direct team affiliation, often sold engines to teams that could not afford to manufacture them. In the early years, independently owned Formula One teams sometimes also built their engines, though this became less common with the increased involvement of major car manufacturers such as BMW, Ferrari, Honda, Mercedes-Benz, Renault, and Toyota, whose large budgets rendered privately built engines less competitive. Cosworth was the last independent engine supplier. It is estimated the major teams spend between €100 and €200 million ($125–$225 million) per year per manufacturer on engines alone.",
"title": "Constructors"
},
{
"paragraph_id": 65,
"text": "In the 2007 season, for the first time since the 1981 rule, two teams used chassis built by other teams. Super Aguri started the season using a modified Honda Racing RA106 chassis (used by Honda the previous year), while Scuderia Toro Rosso used the same chassis used by the parent Red Bull Racing team, which was formally designed by a separate subsidiary. The usage of these loopholes was ended for 2010 with the publication of new technical regulations, which require each constructor to own the intellectual property rights to their chassis, The regulations continue to allow a team to subcontract the design and construction of the chassis to a third-party, an option used by the HRT team in 2010 and Haas currently.",
"title": "Constructors"
},
{
"paragraph_id": 66,
"text": "Although teams rarely disclose information about their budgets, it is estimated they range from US$66 million to US$400 million each.",
"title": "Constructors"
},
{
"paragraph_id": 67,
"text": "Entering a new team in the Formula One World Championship requires a $200 million up-front payment to the FIA, which is then shared equally among the existing teams. As a consequence, constructors desiring to enter Formula One often prefer to buy an existing team: BAR's purchase of Tyrrell and Midland's purchase of Jordan allowed both of these teams to sidestep the large deposit and secure the benefits the team already had, such as TV revenue.",
"title": "Constructors"
},
{
"paragraph_id": 68,
"text": "Seven out of the ten teams competing in Formula One are based close to London in an area centred around Oxford. Ferrari have both their chassis and engine assembly in Maranello, Italy. The AlphaTauri team are based close to Ferrari in Faenza, whilst the Alfa Romeo team are based near Zurich in Switzerland.",
"title": "Constructors"
},
{
"paragraph_id": 69,
"text": "Every team in Formula One must run two cars in every session in a Grand Prix weekend, and every team may use up to four drivers in a season. A team may also run two additional drivers in Free Practice sessions, which are often used to test potential new drivers for a career as a Formula One driver or gain experienced drivers to evaluate the car. Most drivers are contracted for at least the duration of a season, with driver changes taking place in-between seasons, in comparison to early years when drivers often competed on an ad hoc basis from race to race. Each competitor must be in the possession of a FIA Super Licence to compete in a Grand Prix, which is issued to drivers who have met the criteria of success in junior motorsport categories and having achieved 300 kilometres (190 mi) of running in a Formula One car. Drivers may also be issued a Super Licence by the World Motor Sport Council if they fail to meet the criteria. Although most drivers earn their seat on ability, commercial considerations also come into play with teams having to satisfy sponsors and financial demands.",
"title": "Drivers"
},
{
"paragraph_id": 70,
"text": "Teams also contract test and reserve drivers to stand in for regular drivers when necessary and develop the team's car; although with the reduction on testing the reserve drivers' role mainly takes places on a simulator, such as rFactor Pro, which is used by most of the F1 teams.",
"title": "Drivers"
},
{
"paragraph_id": 71,
"text": "Each driver chooses an unassigned number from 2 to 99 (excluding 17 which was retired following the death of Jules Bianchi) upon entering Formula One and keeps that number during their time in the series. The number one is reserved for the reigning Drivers' Champion, who retains their previous number and may choose to use it instead of the number one. At the onset of the championship, numbers were allocated by race organisers on an ad hoc basis from race to race.",
"title": "Drivers"
},
{
"paragraph_id": 72,
"text": "Permanent numbers were introduced in 1973 to take effect in 1974, when teams were allocated numbers in ascending order based on the Constructors' Championship standings at the end of the 1973 season. The teams would hold those numbers from season to season with the exception of the team with the World Drivers' Champion, which would swap its numbers with the one and two of the previous champion's team. New entrants were allocated spare numbers, with the exception of the number 13 which had been unused since 1976.",
"title": "Drivers"
},
{
"paragraph_id": 73,
"text": "As teams kept their numbers for long periods of time, car numbers became associated with a team, such as Ferrari's 27 and 28. A different system was used from 1996 to 2013: at the start of each season, the current Drivers' Champion was designated number one, their teammate number two, and the rest of the teams assigned ascending numbers according to previous season's Constructors' Championship order.",
"title": "Drivers"
},
{
"paragraph_id": 74,
"text": "As of the conclusion of the 2022 Championship, a total of 34 separate drivers have won the World Drivers' Championship, with Michael Schumacher and Lewis Hamilton holding the record for most championships with seven. Lewis Hamilton achieved the most race wins, too, in 2020. Jochen Rindt is the only posthumous World Champion, after his points total was not surpassed despite his fatal accident at the 1970 Italian Grand Prix, with 4 races still remaining in the season. Drivers from the United Kingdom have been the most successful in the sport, with 18 championships among 10 drivers, and 308 wins.",
"title": "Drivers"
},
{
"paragraph_id": 75,
"text": "Most F1 drivers start in kart racing competitions, and then come up through traditional European single-seater series like Formula Ford and Formula Renault to Formula 3, and finally the GP2 Series. GP2 started in 2005, replacing Formula 3000, which itself had replaced Formula Two as the last major stepping-stone into F1. GP2 was rebranded as the FIA Formula 2 Championship in 2017. Most champions from this level graduate into F1, but 2006 GP2 champion Lewis Hamilton became the first F2, F3000 or GP2 champion to win the Formula One drivers' title in 2008.",
"title": "Drivers"
},
{
"paragraph_id": 76,
"text": "Drivers are not required to have competed at this level before entering Formula One. British F3 has supplied many F1 drivers, with champions, including Nigel Mansell, Ayrton Senna and Mika Häkkinen having moved straight from that series to Formula One, and Max Verstappen made his F1 debut following a single season in European F3. More rarely a driver may be picked from an even lower level, as was the case with 2007 World Champion Kimi Räikkönen, who went straight from Formula Renault to F1.",
"title": "Drivers"
},
{
"paragraph_id": 77,
"text": "American open-wheel car racing has also contributed to the Formula One grid. CART champions Mario Andretti and Jacques Villeneuve became F1 World Champions, while Juan Pablo Montoya won seven races in F1. Other CART (also known as ChampCar) champions, like Michael Andretti and Alessandro Zanardi won no races in F1. Other drivers have taken different paths to F1; Damon Hill raced motorbikes, and Michael Schumacher raced in sports cars, albeit after climbing through the junior single-seater ranks. Former F1 driver Paul di Resta raced in DTM until he was signed with Force India in 2011.",
"title": "Drivers"
},
{
"paragraph_id": 78,
"text": "The number of Grands Prix held in a season has varied over the years. The inaugural 1950 world championship season comprised only seven races, while the 2019 season contained 21 races. There were no more than 11 Grands Prix per season during the early decades of the championship, although a large number of non-championship Formula One events also took place. The number of Grands Prix increased to an average of 16 to 17 by the late 1970s, while non-championship events ended in 1983. More Grands Prix began to be held in the 2000s, and recent seasons have seen an average of 19 races. In 2021 and 2022, the calendar peaked at 22 events, the highest number of world championship races in one season.",
"title": "Grands Prix"
},
{
"paragraph_id": 79,
"text": "Six of the original seven races took place in Europe; the only non-European race that counted towards the World Championship in 1950 was the Indianapolis 500, which was held to different regulations and later replaced by the United States Grand Prix. The F1 championship gradually expanded to other non-European countries. Argentina hosted the first South American Grand Prix in 1953, and Morocco hosted the first African World Championship race in 1958. Asia and Oceania followed (Japan in 1976 and Australia in 1985), and the first race in the Middle East was held in 2004. The 19 races of the 2014 season were spread over every populated continent except for Africa, with 10 Grands Prix held outside Europe.",
"title": "Grands Prix"
},
{
"paragraph_id": 80,
"text": "Some of the Grands Prix pre-date the formation of the World Championship, such as the French Grand Prix and were incorporated into the championship as Formula One races in 1950. The British and Italian Grands Prix are the only events to have been held every Formula One season; other long-running races include the Belgian, German, and French Grands Prix. The Monaco Grand Prix was first held in 1929 and has run continuously since 1955 (with the exception of 2020) and is widely considered to be one of the most important and prestigious automobile races in the world.",
"title": "Grands Prix"
},
{
"paragraph_id": 81,
"text": "All Grands Prix have traditionally been run during the day, until the inaugural Singapore Grand Prix hosted the first Formula One night race in 2008, which was followed by the day–night Abu Dhabi Grand Prix in 2009 and the Bahrain Grand Prix which converted to a night race in 2014. Other Grands Prix in Asia have had their start times adjusted to benefit the European television audience.",
"title": "Grands Prix"
},
{
"paragraph_id": 82,
"text": "Bold denotes the Grands Prix scheduled as part of the 2023 season.",
"title": "Grands Prix"
},
{
"paragraph_id": 83,
"text": "Bold denotes the Grands Prix scheduled as part of the 2023 season.",
"title": "Grands Prix"
},
{
"paragraph_id": 84,
"text": "Since 2008, the Formula One Group has been targeting new \"destination cities\" to expand its global reach, with the aim to produce races from countries that have not previously been involved in the sport. This initiative started with the 2008 Singapore Grand Prix.",
"title": "Grands Prix"
},
{
"paragraph_id": 85,
"text": "A typical circuit features a stretch of straight road on which the starting grid is situated. The pit lane, where the drivers stop for tyres, aerodynamic adjustments and minor repairs (such as changing the car's nose due to front wing damage) during the race, retirements from the race, and where the teams work on the cars before the race, is normally located next to the starting grid. The layout of the rest of the circuit varies widely, although in most cases the circuit runs in a clockwise direction. Those few circuits that run anticlockwise (and therefore have predominantly left-handed corners) can cause drivers neck problems due to the enormous lateral forces generated by F1 cars pulling their heads in the opposite direction to normal. A single race requires hotel rooms to accommodate at least 5,000 visitors.",
"title": "Circuits"
},
{
"paragraph_id": 86,
"text": "Most of the circuits currently in use are specially constructed for competition. The current street circuits are Monaco, Melbourne, Singapore, Baku, Miami and Jeddah although races in other urban locations come and go (Las Vegas and Detroit, for example) and proposals for such races are often discussed – most recently Las Vegas. The glamour and history of the Monaco race are the primary reasons why the circuit is still in use, even though it does not meet the strict safety requirements imposed on other tracks. Three-time World champion Nelson Piquet famously described racing in Monaco as \"like riding a bicycle around your living room\".",
"title": "Circuits"
},
{
"paragraph_id": 87,
"text": "Circuit design to protect the safety of drivers is becoming increasingly sophisticated, as exemplified by the Bahrain International Circuit, added in 2004 and designed – like most of F1's new circuits – by Hermann Tilke. Several of the new circuits in F1, especially those designed by Tilke, have been criticised as lacking the \"flow\" of such classics as Spa-Francorchamps and Imola. His redesign of the Hockenheim circuit in Germany for example, while providing more capacity for grandstands and eliminating extremely long and dangerous straights, has been frowned upon by many who argue that part of the character of the Hockenheim circuits was the long and blinding straights into dark forest sections. These newer circuits, however, are generally agreed to meet the safety standards of modern Formula One better than the older ones.",
"title": "Circuits"
},
{
"paragraph_id": 88,
"text": "The Circuit of the Americas in Austin, the Sochi Autodrom in Sochi and the Baku City Circuit in Azerbaijan have all been introduced as brand new tracks since 2012. In 2020, Algarve International Circuit debuted on the F1 calendar as the venue of the Portuguese Grand Prix, with the country having last hosted a race in 1996. In 2021, Circuit Zandvoort returned to the F1 calendar as the Dutch Grand Prix, having last hosted a race in 1985.",
"title": "Circuits"
},
{
"paragraph_id": 89,
"text": "Modern Formula One cars are mid-engined, hybrid, semi-open cockpit, open-wheel single-seaters. The chassis is made largely of carbon-fibre composites, rendering it light but extremely stiff and strong. The whole car, including the driver but not fuel, weighs only 795 kg (1,753 lb) – the minimum weight set by the regulations. If the construction of the car is lighter than the minimum, it can be ballasted up to add the necessary weight. The race teams take advantage of this by placing this ballast at the extreme bottom of the chassis, thereby locating the centre of gravity as low as possible in order to improve handling and weight transfer.",
"title": "Cars and technology"
},
{
"paragraph_id": 90,
"text": "The cornering speed of Formula One cars is largely determined by the aerodynamic downforce that they generate, which pushes the car down onto the track. This is provided by \"wings\" mounted at the front and rear of the vehicle, and by ground effect created by low air pressure under the flat bottom of the car. The aerodynamic design of the cars is very heavily constrained to limit performance. The previous generation of cars sported a large number of small winglets, \"barge boards\", and turning vanes designed to closely control the flow of the air over, under, and around the car.",
"title": "Cars and technology"
},
{
"paragraph_id": 91,
"text": "The other major factor controlling the cornering speed of the cars is the design of the tyres. From 1998 to 2008, the tyres in Formula One were not \"slicks\" (tyres with no tread pattern) as in most other circuit racing series. Instead, each tyre had four large circumferential grooves on its surface designed to limit the cornering speed of the cars. Slick tyres returned to Formula One in the 2009 season. Suspension is double wishbone or multilink front and rear, with pushrod operated springs and dampers on the chassis – one exception being that of the 2009 specification Red Bull Racing car (RB5) which used pullrod suspension at the rear, the first car to do so since the Minardi PS01 in 2001. Ferrari used a pullrod suspension at both the front and rear in their 2012 car. Both Ferrari (F138) and McLaren (MP4-28) of the 2013 season used a pullrod suspension at both the front and the rear. In 2022, McLaren (MCL36) and Red Bull Racing (RB18) switched to a pullrod front suspension and push rod rear suspension.",
"title": "Cars and technology"
},
{
"paragraph_id": 92,
"text": "Carbon-carbon disc brakes are used for reduced weight and increased frictional performance. These provide a very high level of braking performance and are usually the element that provokes the greatest reaction from drivers new to the formula.",
"title": "Cars and technology"
},
{
"paragraph_id": 93,
"text": "In 2022, the technical regulations changed considerably in order to reduce the turbulence (commonly referred to as \"dirty air\") produced by the aerodynamics of the car. This includes a redesigned front and rear wing, larger wheels with a lower tyre profile, wheel covers, small winglets, the banning of barge boards, and the reintroduction of Ground effect downforce production. These have been changed to promote racing, meaning cars lose less downforce when following another car. It allows cars to follow another at a much closer distance, without extending the gap due to the turbulent air. (See 2022 Formula One World Championship Technical regulations)",
"title": "Cars and technology"
},
{
"paragraph_id": 94,
"text": "Formula One cars must have four wheels made of the same metallic material, which must be one of two magnesium alloys specified by the FIA. Magnesium alloy wheels made by forging are used to achieve maximum unsprung rotating weight reduction. As of 2022, the wheels are covered with \"spec\" (Standardised) Wheel Covers, the wheel diameter has increased from 13 inches to 18 inches (reducing the \"tyre profile\"), and small winglets have been placed over the front tyres.",
"title": "Cars and technology"
},
{
"paragraph_id": 95,
"text": "Starting with the 2014 Formula 1 season, the engines have changed from a 2.4-litre naturally aspirated V8 to turbocharged 1.6-litre V6 \"power-units\". These get a significant amount of their power from electric motors. In addition, they include a lot of energy recovery technology. Engines run on unleaded fuel closely resembling publicly available petrol. The oil which lubricates and protects the engine from overheating is very similar in viscosity to water. The 2006 generation of engines spun up to 20,000 rpm and produced over 580 kW (780 bhp). For 2007, engines were restricted to 19,000 rpm with limited development areas allowed, following the engine specification freeze since the end of 2006. For the 2009 Formula One season the engines were further restricted to 18,000 rpm.",
"title": "Cars and technology"
},
{
"paragraph_id": 96,
"text": "A wide variety of technologies – including active suspension are banned under the current regulations. Despite this the current generation of cars can reach speeds in excess of 350 km/h (220 mph) at some circuits. The highest straight line speed recorded during a Grand Prix was 372.6 km/h (231.5 mph), set by Juan Pablo Montoya during the 2005 Italian Grand Prix. A BAR-Honda Formula One car, running with minimum downforce on a runway in the Mojave Desert achieved a top speed of 415 km/h (258 mph) in 2006. According to Honda, the car fully met the FIA Formula One regulations.",
"title": "Cars and technology"
},
{
"paragraph_id": 97,
"text": "Even with the limitations on aerodynamics, at 160 km/h (99 mph) aerodynamically generated downforce is equal to the weight of the car, and the oft-repeated claim that Formula One cars create enough downforce to \"drive on the ceiling\", while possible in principle, has never been put to the test. Downforce of 2.5 times the car's weight can be achieved at full speed. The downforce means that the cars can achieve a lateral force with a magnitude of up to 3.5 times that of the force of gravity (3.5g) in cornering. Consequently, the driver's head is pulled sideways with a force equivalent to the weight of 20 kg in corners. Such high lateral forces are enough to make breathing difficult and the drivers need supreme concentration and fitness to maintain their focus for the one to two hours that it takes to complete the race. A high-performance road car like the Enzo Ferrari only achieves around 1g.",
"title": "Cars and technology"
},
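The figures in the preceding paragraph can be sanity-checked with elementary mechanics. The sketch below treats the quoted numbers (downforce equal to the car's weight at 160 km/h, up to 3.5 g of lateral acceleration, a head load equivalent to roughly 20 kg) as illustrative inputs; the implied head-plus-helmet mass at the end is an inference, not a figure stated in the text.

    G = 9.81  # gravitational acceleration, m/s^2

    car_mass = 795.0                    # kg, the minimum weight quoted earlier
    downforce_at_160 = car_mass * G     # N; downforce roughly equals car weight at 160 km/h

    lateral_accel = 3.5 * G             # m/s^2, peak cornering acceleration
    head_load = 20.0 * G                # N, the quoted "20 kg equivalent" neck load
    implied_head_mass = head_load / lateral_accel
    print(round(implied_head_mass, 1))  # ~5.7 kg, a plausible head-plus-helmet mass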
{
"paragraph_id": 98,
"text": "As of 2019, each team may have no more than two cars available for use at any time. Each driver may use no more than four engines during a championship season unless they drive for more than one team. If more engines are used, they drop ten places on the starting grid of the event at which an additional engine is used. The only exception is where the engine is provided by a manufacturer or supplier taking part in its first championship season, in which case up to five may be used by a driver. Each driver may use no more than one gearbox for six consecutive events; every unscheduled gearbox change requires the driver to drop five places on the grid unless they failed to finish the previous race due to reasons beyond the team's control.",
"title": "Cars and technology"
},
{
"paragraph_id": 99,
"text": "As of 2019, each driver is limited to three power units per season, before incurring grid penalties.",
"title": "Cars and technology"
},
{
"paragraph_id": 100,
"text": "In March 2007, F1 Racing published its annual estimates of spending by Formula One teams. The total spending of all eleven teams in 2006 was estimated at $2.9 billion US. This was broken down as follows: Toyota $418.5 million, Ferrari $406.5 m, McLaren $402 m, Honda $380.5 m, BMW Sauber $355 m, Renault $324 m, Red Bull $252 m, Williams $195.5 m, Midland F1/Spyker-MF1 $120 m, Toro Rosso $75 m, and Super Aguri $57 million.",
"title": "Revenue and profits"
},
{
"paragraph_id": 101,
"text": "Costs vary greatly from team to team. Honda, Toyota, McLaren-Mercedes, and Ferrari were estimated to have spent approximately $200 million on engines in 2006, Renault spent approximately $125 million and Cosworth's 2006 V8 was developed for $15 million. In contrast to the 2006 season on which these figures are based, the 2007 sporting regulations banned all performance-related engine development.",
"title": "Revenue and profits"
},
{
"paragraph_id": 102,
"text": "Formula One teams pay entry fees of $500,000, plus $5,000 per point scored the previous year or $6,000 per point for the winner of the Constructors' Championship. Formula One drivers pay a FIA Super Licence fee, which in 2013 was €10,000 plus €1,000 per point.",
"title": "Revenue and profits"
},
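The entry-fee arithmetic above reduces to a one-line calculation per team or driver. A minimal sketch, assuming the figures quoted in the paragraph (including the 2013 Super Licence rates in euros); the function names and the example point total are hypothetical.

    def team_entry_fee(points_last_year, constructors_champion=False):
        per_point = 6000 if constructors_champion else 5000
        return 500_000 + per_point * points_last_year  # USD

    def super_licence_fee(points_last_year):
        return 10_000 + 1_000 * points_last_year  # EUR, at the quoted 2013 rates

    # Example: the Constructors' Champions with 600 points owe 500,000 + 600 * 6,000 = 4,100,000.
    print(team_entry_fee(600, constructors_champion=True))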
{
"paragraph_id": 103,
"text": "There have been controversies with the way profits are shared among the teams. The smaller teams have complained that the profits are unevenly shared, favouring established top teams. In September 2015, Force India and Sauber officially lodged a complaint with the European Union against Formula One questioning the governance and stating that the system of dividing revenues and determining the rules is unfair and unlawful.",
"title": "Revenue and profits"
},
{
"paragraph_id": 104,
"text": "The cost of building a brand-new permanent circuit can be up to hundreds of millions of dollars, while the cost of converting a public road, such as Albert Park, into a temporary circuit is much less. Permanent circuits, however, can generate revenue all year round from leasing the track for private races and other races, such as MotoGP. The Shanghai International Circuit cost over $300 million and the Istanbul Park circuit cost $150 million to build.",
"title": "Revenue and profits"
},
{
"paragraph_id": 105,
"text": "A number of Formula One drivers earn the highest salary of any drivers in auto racing. The highest-paid driver in 2021 is Lewis Hamilton, who received $55 million in salary from Mercedes AMG Petronas F1 – a record for any driver. The very top Formula One drivers get paid more than IndyCar or NASCAR drivers; however, the earnings immediately fall off after the top three F1 drivers, and the majority of NASCAR racers will make more money than their F1 counterparts. Most top IndyCar drivers are paid around a tenth of their Formula One counterparts.",
"title": "Revenue and profits"
},
{
"paragraph_id": 106,
"text": "In the second quarter of 2020, Formula One reported a loss revenue of $122 million and an income of $24 million. This was a result of the delay of the racing championship start as a result of the COVID-19 pandemic. The company grossed revenues of $620 million for the same quarter the previous year.",
"title": "Revenue and profits"
},
{
"paragraph_id": 107,
"text": "The expense of Formula One has seen the FIA and the Formula One Commission attempt to create new regulations to lower the costs for a team to compete in the sport.",
"title": "Future"
},
{
"paragraph_id": 108,
"text": "Following their purchase of the commercial rights to the sport in 2017, Liberty Media announced their vision for the future of Formula One at the 2018 Bahrain Grand Prix. The proposal identified five key areas, including streamlining the governance of the sport, emphasising cost-effectiveness, maintaining the sport's relevance to road cars and encouraging new manufacturers to enter the championship whilst enabling them to be competitive. Liberty cited 2021 as their target date as it coincided with the need to renew commercial agreements with the teams and the end of the seven-year cycle of engine development that started in 2014.",
"title": "Future"
},
{
"paragraph_id": 109,
"text": "On 19 August 2020, it was announced that all 10 teams had signed the new Concorde Agreement. This came into effect at the start of the 2021 season and changed how prize money and TV revenue is distributed.",
"title": "Future"
},
{
"paragraph_id": 110,
"text": "When I get out of the car, of course I'm thinking as well: 'Is this something we should do, travel the world, wasting resources?'",
"title": "Future"
},
{
"paragraph_id": 111,
"text": "—Sebastian Vettel, former champion voicing concerns on Formula One's impact on climate change.",
"title": "Future"
},
{
"paragraph_id": 112,
"text": "Formula One has launched a plan to become carbon neutral by 2030. By 2025, all events should become \"sustainable\", including eliminating single-use plastics and ensuring all waste is reused, recycled or composted.",
"title": "Future"
},
{
"paragraph_id": 113,
"text": "A report conducted by Formula One estimated that the series was responsible for 256,000 tonnes of carbon dioxide emissions in the 2019 season, finding that 45% of emissions were from logistics and only 0.7% were from emissions from the cars themselves.",
"title": "Future"
},
{
"paragraph_id": 114,
"text": "In January 2020, FIA and Formula One signed the United Nations \"Sports for Climate Action\" framework. After the signing was announced, FIA President Jean Todt said: \"As an international Federation comprising 244 members in 140 countries and the leader in motor sport and mobility development, we are fully committed to global environmental protection. The signing of this UN Sports for Climate Action Framework reinforces the momentum that has been growing in our Federation for many years. Since the introduction of the hybrid power unit in F1 to the creation of the Environment and Sustainability Commission, the entire FIA community has been investing time, energy and financial resources to the benefit of environmental innovations. We aim to inspire greater awareness and best practice in sustainability motor sport standards.\"",
"title": "Future"
},
{
"paragraph_id": 115,
"text": "From the 2021–22 season, all cars will increase the bio-component of their fuel, using E10 fuel, rather than the 5.75% of ethanol currently used. This percentage is expected to grow again in the future. In December 2020, the FIA claimed that it had developed a fuel with 100% sustainability, to be used in Formula One from either 2025 or 2026, when new engine regulations come into force.",
"title": "Future"
},
{
"paragraph_id": 116,
"text": "Prior to the beginning of the 2020 Formula One World Championship, F1 announced and launched the #WeRaceAsOne initiative. The initiative primarily focuses on visible displays of solidarity in the fight against racism on Grand Prix Weekends, as well as the creation of a Formula 1 Task Force that will \"listen to people from across the paddock [...] and make conclusions on the actions required to improve the diversity and opportunity in Formula 1 at all levels\". The move spurs from the growing questions about racism and global inequalities perpetuated by the sport. The 70-year history of the World Championship has been dominated by European and white drivers, with the first (and only) black driver, Lewis Hamilton, participating in the world championship since 2007.",
"title": "Future"
},
{
"paragraph_id": 117,
"text": "In addition to organization-wide measures, individual teams have also acknowledged deficiencies in the sport's cultural and political activism. During the 2020 season, the Mercedes-AMG Petronas F1 Team conducted a study of its racial composition and found that approximately 95% of its workforce was white. Due to the results of the study, the team changed the car's livery to promote anti-racism messages and also launched the Accelerate 25 programme. The program vows that approximately 25% of all new hires to the team will come from underrepresented minorities in the sport until 2025.",
"title": "Future"
},
{
"paragraph_id": 118,
"text": "The 20 drivers on the grid have also stood in solidarity on multiple occasions in the fight against racism both on and off the track. Following the murder of George Floyd in the summer of 2020, all twenty drivers wore \"End Racism\" shirts and took part in an organised anti-racism protest during the pre-race formalities. In the year since, Lewis Hamilton has remained vocal in his pre-race attire, with other drivers occasionally wearing change-demanding clothing.",
"title": "Future"
},
{
"paragraph_id": 119,
"text": "Formula One can be seen live, or tape delayed in almost every country and territory and attracts one of the largest global television audiences. The 2008 season attracted a global audience of 600 million people per race. The cumulative television audience was calculated to be 54 billion for the 2001 season, broadcast to 200 territories.",
"title": "Media coverage"
},
{
"paragraph_id": 120,
"text": "During the early 1990s, Formula One Group created a number of trademarks, an official logo, an official TV graphics package and in 2003, an official website for the sport in an attempt to give it a corporate identity.",
"title": "Media coverage"
},
{
"paragraph_id": 121,
"text": "TV stations all take what is known as the \"World Feed\", either produced historically by the \"host broadcaster\" or by Formula One Management (FOM). The host broadcaster either had one feed for all, or two separate feeds – a feed for local viewers and a feed for international viewers. The one size fits all approach meant that there was bias to a certain team or driver during the event, which led to viewers missing out on more important action and incidents, while the two-feed approach meant that replays (for when returning from an ad break) and local bias action could be overlaid on the local feed while the international feed was left unaffected.",
"title": "Media coverage"
},
{
"paragraph_id": 122,
"text": "The only station that differed from this set up was \"DF1\" (re-branded to \"Premiere\" then to \"Sky Deutschland\") – a German channel which offers all sessions live and interactive, with features such as the onboard and pit-lane channels. This service was purchased by Bernie Ecclestone at the end of 1996 and became F1 Digital Plus, which was made more widely available around Europe until the end of 2002, when the cost of the digital interactive service was thought too much.",
"title": "Media coverage"
},
{
"paragraph_id": 123,
"text": "On 12 January 2011, F1 announced that it would adopt the HD format for the 2011 season.",
"title": "Media coverage"
},
{
"paragraph_id": 124,
"text": "It was announced on 29 July 2011, that Sky Sports and the BBC would team up to show the races in F1 from 2012 to 2018. Sky launched a dedicated channel, Sky Sports F1 which covered all races live without commercial interruption as well as live practice and qualifying sessions, along with F1 programming, including interviews, archive action and magazine shows. In 2012 the BBC broadcast live coverage of half of the races in the season. The BBC ended its television contract after the 2015 season, three years earlier than planned. The free-to-air TV rights were picked up by Channel 4 until the end of the 2018 season. Sky Sports F1 coverage remained unaffected and BBC Radio 5 Live and 5 Sports Extra coverage was extended until 2021. As of 2022, BBC Radio 5 Live and 5 Sports Extra has rights to such coverage until 2024.",
"title": "Media coverage"
},
{
"paragraph_id": 125,
"text": "While Sky Sports and Channel 4 are the two major broadcasters of Formula 1, other countries show Formula One races. Many use commentary from either Sky Sports or Channel 4. In most of Asia (excluding China), the two main broadcasters of Formula one includes the Fox network and Star Sports (in India). In the United States, ESPN holds the official rights to broadcast the sport while ABC also holds free-to-air rights for some races under the ESPN on ABC banner. In Germany, Austria and Switzerland, the two main broadcasters are RTL Germany and n-TV. In China, there are multiple channels that broadcast Formula One which include CCTV, Tencent, Guangdong TV and Shanghai TV. Currently in France, the only channel that broadcasts Formula one is the pay TV channel Canal+, having renewed its broadcasting rights until 2024.",
"title": "Media coverage"
},
{
"paragraph_id": 126,
"text": "The official Formula One website has live timing charts that can be used during the race to follow the leaderboard in real time. An official application has been available for the Apple App Store since 2009, and on Google Play since 2011, that shows users a real-time feed of driver positions, timing and commentary. On 26 November 2017 Formula One unveiled a new logo, which replaced the previous \"flying one\" in use since 1993.",
"title": "Media coverage"
},
{
"paragraph_id": 127,
"text": "In March 2018, FOM announced the launch of F1 TV, an over-the-top (OTT) streaming platform that lets viewers watch multiple simultaneous video feeds and timing screens in addition to traditional directed race footage and commentary.",
"title": "Media coverage"
},
{
"paragraph_id": 128,
"text": "Currently, the terms \"Formula One race\" and \"World Championship race\" are effectively synonymous. Since 1984, every Formula One race has counted towards the World Championship, and every World Championship race has been run to Formula One regulations. However, the two terms are not interchangeable.",
"title": "Distinction between Formula One and World Championship races"
},
{
"paragraph_id": 129,
"text": "The distinction is most relevant when considering career summaries and all-time lists. For example, in the List of Formula One drivers, Clemente Biondetti is shown with a single race against his name. Biondetti actually competed in four Formula One races in 1950, but only one of these counted for the World Championship.",
"title": "Distinction between Formula One and World Championship races"
},
{
"paragraph_id": 130,
"text": "In the earlier history of Formula One, many races took place outside the World Championship, and local championships run to Formula One regulations also occurred. These events often took place on circuits that were not always suitable for the World Championship and featured local cars and drivers as well as those competing in the championship.",
"title": "Distinction between Formula One and World Championship races"
},
{
"paragraph_id": 131,
"text": "In the early years of Formula One, before the world championship was established, there were around twenty races held from late Spring to early Autumn in Europe, although not all of these were considered significant. Most competitive cars came from Italy, particularly Alfa Romeo. After the start of the world championship, these non-championship races continued. In the 1950s and 1960s, there were many Formula One races which did not count for the World Championship; in 1950 a total of twenty-two Formula One races were held, of which only six counted towards the World Championship. In 1952 and 1953, when the world championship was run to Formula Two regulations, non-championship events were the only Formula One races that took place.",
"title": "Distinction between Formula One and World Championship races"
},
{
"paragraph_id": 132,
"text": "Some races, particularly in the UK, including the Race of Champions, Oulton Park International Gold Cup and the International Trophy, were attended by the majority of the world championship contenders. Other smaller events were regularly held in locations not part of the championship, such as the Syracuse and Danish Grands Prix, although these only attracted a small amount of the championship teams and relied on private entries and lower Formula cars to make up the grid. These became less common through the 1970s and 1983 saw the last non-championship Formula One race; the 1983 Race of Champions at Brands Hatch, won by reigning World Champion Keke Rosberg in a Williams-Cosworth in a close fight with American Danny Sullivan.",
"title": "Distinction between Formula One and World Championship races"
},
{
"paragraph_id": 133,
"text": "South Africa's flourishing domestic Formula One championship ran from 1960 through to 1975. The frontrunning cars in the series were recently retired from the world championship although there was also a healthy selection of locally built or modified machines.",
"title": "Distinction between Formula One and World Championship races"
},
{
"paragraph_id": 134,
"text": "The DFV helped in making the UK domestic Formula One championship possible between 1978 and 1980. As in South Africa a decade before, second hand cars from manufacturers like Lotus and Fittipaldi Automotive were the order of the day, although some, such as the March 781, were built specifically for the series. In 1980, the series saw South African Desiré Wilson become the only woman to win a Formula One race when she triumphed at Brands Hatch in a Wolf WR3.",
"title": "Distinction between Formula One and World Championship races"
}
]
| Formula One is the highest class of international racing for open-wheel single-seater formula racing cars sanctioned by the Fédération Internationale de l'Automobile (FIA). The FIA Formula One World Championship has been one of the premier forms of racing around the world since its inaugural season in 1950. The word formula in the name refers to the set of rules to which all participants' cars must conform. A Formula One season consists of a series of races, known as Grands Prix. Grands Prix take place in multiple countries and continents around the world on either purpose-built circuits or closed public roads. A points system is used at Grands Prix to determine two annual World Championships: one for the drivers, and one for the constructors. Each driver must hold a valid Super Licence, the highest class of racing licence issued by the FIA, and the races must be held on grade one tracks, the highest grade-rating issued by the FIA for tracks. Formula One cars are the fastest regulated road-course racing cars in the world, owing to very high cornering speeds achieved through generating large amounts of aerodynamic downforce. Much of this downforce is generated by front and rear wings, which have the side effect of causing severe turbulence behind each car. The turbulence reduces the downforce generated by the cars following directly behind, making it hard to overtake. Major changes made to the cars for the 2022 season have resulted in greater use of ground effect aerodynamics and modified wings to reduce the turbulence behind the cars, with the goal of making overtaking easier. The cars are dependent on electronics, aerodynamics, suspension and tyres. Traction control, launch control, and automatic shifting, plus other electronic driving aids, were first banned in 1994. They were briefly reintroduced in 2001, and have more recently been banned since 2004 and 2008, respectively. With the average annual cost of running a team – designing, building, and maintaining cars, pay, transport – being approximately £220,000,000, its financial and political battles are widely reported. On 23 January 2017, Liberty Media completed its acquisition of the Formula One Group, from private-equity firm CVC Capital Partners for £6.4bn ($8bn). | 2001-08-03T18:09:51Z | 2023-12-30T13:09:57Z | [
"Template:Use dmy dates",
"Template:Tooltip",
"Template:Use British English",
"Template:Nowrap",
"Template:Quotebox",
"Template:Main",
"Template:Webarchive",
"Template:Cite journal",
"Template:Refbegin",
"Template:Official website",
"Template:Pp-move-indef",
"Template:Original research inline",
"Template:Cite news",
"Template:ISBN",
"Template:Commons category-inline",
"Template:Cite book",
"Template:Refend",
"Template:Formula One",
"Template:Ndash",
"Template:Update",
"Template:Source needed",
"Template:Portal",
"Template:Reflist",
"Template:Cite web",
"Template:Navboxes",
"Template:F1",
"Template:Cvt",
"Template:F1 GP",
"Template:See also",
"Template:Efn",
"Template:Flagicon",
"Template:As of",
"Template:Notelist",
"Template:Short description",
"Template:Redirect-multi",
"Template:Infobox motorsport championship",
"Template:Who",
"Template:Multiple image",
"Template:Convert",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Formula_One |
10,855 | Franco Baresi | Franchino Baresi OMRI (Italian pronunciation: [ˈfraŋko baˈreːzi; -eːsi]; born 8 May 1960) is an Italian football youth team coach and a former player and manager. He mainly played as a sweeper or as a central defender, and spent his entire 20-year career with Serie A club AC Milan, captaining the club for 15 seasons. He is considered to be one of the best defenders in the history of the sport. He was ranked 19th in World Soccer magazine's list of the 100 greatest players of the 20th century. With Milan, he won three UEFA Champions League titles, six Serie A titles, four Supercoppa Italiana titles, two European Super Cups and two Intercontinental Cups.
With the Italy national team, he was a member of the Italian squad that won the 1982 FIFA World Cup. He also played in the 1990 World Cup, where he was named in the FIFA World Cup All-Star Team, finishing third in the competition. At the 1994 World Cup, he was named Italy's captain and was part of the squad that reached the final, although he would miss a penalty in the resulting shoot-out as Brazil lifted the trophy. Baresi also represented Italy at two UEFA European Championships, in 1980 and 1988, and at the 1984 Olympics, reaching the semi-finals on each occasion.
The younger brother of former footballer Giuseppe Baresi, after joining the Milan senior team as a youngster, Franco Baresi was initially nicknamed "Piscinin", Milanese for "little one". Due to his skill and success, he was later known as "Kaiser Franz", a reference to fellow sweeper Franz Beckenbauer. In 1999, he was voted Milan's Player of the Century. After his final season at Milan in 1997, the club retired Baresi's shirt number 6. He was named by Pelé one of the 125 Greatest Living Footballers at the FIFA centenary awards ceremony in 2004. Baresi was inducted into the Italian Football Hall of Fame in 2013.
Baresi grew up in a farmstead on the outskirts of a small north Italian town, Travagliato. He did not watch football on television until he was 10.
Originally an AC Milan youth product, Baresi went on to spend his entire 20-year professional career with Milan, making his Serie A debut at age 17 on 23 April 1978, during the 1977–78 season. He had initially been rejected by the Internazionale youth team, who chose his brother Giuseppe instead, after which the Milan youth team signed him. The two brothers soon captained their respective clubs, and their image exchanging pennants became the trademark of Milan's derby della Madonnina throughout the 1980s.
The following season, he was made a member of the starting 11, playing as a sweeper or as a centreback, winning the 1978–79 Serie A title, Milan's tenth overall, playing alongside Fabio Capello and Gianni Rivera.
This success was soon followed by a dark period in the club's history, when Milan were relegated to Serie B twice during the early 1980s. Milan were relegated in 1980 for their involvement in that year's match-fixing scandal, and again after finishing third-last in the 1981–82 season, having only just returned to Serie A the previous season by winning the 1980–81 Serie B title. Despite being a member of the Euro 1980 Italy squad that had finished fourth, and of the 1982 World Cup-winning team, Baresi elected to stay with Milan, winning the Serie B title for the second time during the 1982–83 season and bringing Milan back to Serie A. After Aldo Maldera and Fulvio Collovati left the club in 1982, Baresi was appointed Milan's captain, at age 22, and would hold this position for much of his time at the club, becoming a symbol and a leader for the team. During this bleak period for Milan, Baresi did manage to win a Mitropa Cup in 1982 and reached the Coppa Italia final during the 1984–85 season, although the team failed to dominate in Serie A. From the late 1980s to the mid-1990s, Baresi was at the heart of a notable all-Italian defence alongside Paolo Maldini, Alessandro Costacurta, Mauro Tassotti and later Christian Panucci, under managers Arrigo Sacchi and Fabio Capello, a defence which is regarded by many as one of the greatest of all time. When the attacking Dutch trio of Marco van Basten, Ruud Gullit and Frank Rijkaard arrived at the club in the late 1980s, Milan began a period of domestic and international triumphs, and between 1987 and 1996, at the height of the club's success, the Milan squad contained many Italian and international stars, such as Roberto Donadoni, Carlo Ancelotti, Marco van Basten, Ruud Gullit, Frank Rijkaard and later Demetrio Albertini, Dejan Savićević, Zvonimir Boban, Marcel Desailly, George Weah, Jean-Pierre Papin, Brian Laudrup and Roberto Baggio. Under Sacchi, Milan won the Serie A title in 1987–88, with Baresi helping Milan to concede only 14 goals. This title was immediately followed by the 1988 Supercoppa Italiana the next season, and by back-to-back European Cups in 1988–89 and 1989–90. In the 1990 European Cup Final, Baresi turned in a dominant performance as the team's captain, helping Milan to defend the European Cup title and keep a clean sheet in a 1–0 victory over Benfica. Baresi was also runner-up to teammate Van Basten for the Ballon d'Or in 1989, finishing ahead of his other teammate Frank Rijkaard, and was named Serie A Footballer of the Year in 1989–90. Milan also reached the Coppa Italia final during the 1989–90 season.
Baresi went on to win four more Serie A titles with Milan under Fabio Capello, including three consecutive titles in 1991–92, 1992–93 and the 1993–94 seasons. Baresi helped Milan win the 1991–92 title undefeated, helping Milan to go unbeaten for an Italian record of 58 matches. Milan also scored a record 74 goals that season. During the 1993–94 season, Baresi helped Milan concede a mere 15 goals in Serie A, helping the club to finish the season with the best defence. Baresi also won three consecutive Supercoppa Italiana under Capello, in 1992, 1993 and 1994. Milan also reached three consecutive UEFA Champions League finals during the 1992–93, 1993–94 and 1994–95 seasons, losing to Marseille in 1992–93 and Ajax in 1994–95. Baresi won the third European Cup/UEFA Champions League of his career in 1993–94 when Milan defeated Johan Cruyff's Barcelona "Dream Team" 4–0 in the final. Baresi also managed to win the 1994 European Super Cup, although Milan were defeated in the 1994 Intercontinental Cup, the 1993 European Super Cup and the 1993 Intercontinental Cup. Under Capello, Milan and Baresi were able to capture another Serie A title during 1995–96 season, Baresi's sixth.
Baresi retired at the end of the 1996–97 Serie A season, at age 37. In his 20 seasons with Milan, he won six Serie A titles, three European Cup/UEFA Champions League titles (reaching five finals in total), two Intercontinental Cups (four finals in total), four European Supercups (five finals in total), four Supercoppa Italiana (five finals in total), two Serie B titles and a Mitropa Cup. He scored 31 goals for Milan, 21 of which were on penalties, and, despite being a defender, he was the top scorer of the Coppa Italia during the 1989–90 season, the only trophy which he failed to win with Milan, reaching the final twice during his career. His final goal for Milan was scored in a 2–1 win against Padova on 27 August 1995. In his honour, Milan retired his number 6 shirt, which he had worn throughout his career. The captain's armband, which he had worn for 15 seasons, was handed over to Paolo Maldini. Milan organised a celebration match in his honour, which was played on 28 October 1997 at the San Siro, featuring many footballing stars.
At age 20, while still playing in the Italy under-21 side, Baresi was named in Italy's 22-man squad for the 1980 European Championship (along with his older brother Giuseppe) by manager Enzo Bearzot. The tournament was held on home soil and Italy finished fourth. However, unlike his brother, Franco Baresi did not play a single match in the tournament. Euro 1980 would be the only time the two brothers were on the Italy squad together at a major tournament. At age 22, Baresi was named in Italy's squad for the 1982 FIFA World Cup. The Azzurri won their third World Cup, defeating West Germany in the final, but Baresi, once again, was not selected to play a match throughout the tournament. Baresi was also a member of the Italy squad that took part in the 1984 Olympics. Italy finished in fourth place after a semi-final defeat to Brazil and a loss to Yugoslavia in the bronze medal match. Baresi scored a goal against the United States during the group stage.
Baresi won his first senior international cap in a UEFA Euro 1984 qualifying match against Romania in Florence, on 14 December 1982, a 0–0 draw. Italy, however, ultimately failed to qualify for the final tournament.
Baresi was not included in Italy's squad for the 1986 World Cup by coach Enzo Bearzot, who saw him as being more of a midfielder than a defender (although his brother Giuseppe was selected as a defender for the World Cup, as well as Roberto Tricella). He returned to the team for the 1988 European Championship, playing as a sweeper, where Italy reached the semi-finals under Azeglio Vicini, becoming an undisputed first team member and playing in every match. He made his first appearance in a World Cup finals match in the 1990 tournament, which was held on home soil, and he played in every match as one of the starting centre-backs, as Italy finished in third-place, after being eliminated by defending champions Argentina in a penalty shootout in the semi-finals. Baresi helped the Italian defence to keep five consecutive clean sheets, only conceding two goals, and going unbeaten for a World Cup record of 518 minutes, until they were beaten by an Argentinian equaliser in the semi-final. His performances earned him a spot on the 1990 World Cup Team of the tournament.
After replacing Giuseppe Bergomi as captain for the 1994 World Cup under his former manager at Milan, Arrigo Sacchi, Baresi sustained an injury to his meniscus in Italy's second group match, a 1–0 win against Norway, and missed most of the tournament. He returned to the squad 25 days later, in time for the final, with a dominant defensive performance, helping Italy to keep a clean sheet against Brazil, despite the key defensive absences of his Milan teammates Alessandro Costacurta and Mauro Tassotti. After a 0–0 deadlock following extra time, the match went to a penalty shootout, and Baresi subsequently missed his penalty, suffering from severe cramps and fatigue. Following misses by Daniele Massaro and Roberto Baggio, Italy were defeated by Brazil in the penalty shootout.
Following the World Cup defeat, Baresi made one more appearance for Italy, in an away UEFA Euro 1996 qualifying match against Slovenia on 7 September 1994, which ended in a 1–1 draw. Baresi subsequently retired from the national side at age 34, passing the captain's armband to his Milan teammate Paolo Maldini. Baresi amassed 81 caps for Italy, scoring one goal in a friendly win against the Soviet Union, and he is one of seven players to have achieved the rare feat of winning Gold, Silver and Bronze FIFA World Cup medals during his international career.
Baresi is regarded as one of the greatest defenders of all time. He played his entire 20-year career with Milan, becoming a club legend. At Milan, he formed one of the most formidable defensive units of all time, alongside Paolo Maldini, Alessandro Costacurta, Mauro Tassotti, Filippo Galli and later Christian Panucci. He was a complete and consistent defender who combined power with elegance and was gifted with outstanding physical and mental attributes, such as pace, strength, tenacity, concentration and stamina, which made him effective in the air, despite his lack of notable height for a centre-back.
Although Baresi was capable of playing anywhere along the backline, he primarily excelled as a centreback and as sweeper, where he combined his defensive attributes, and his ability to read the game, with his excellent vision, technique, distribution and ball skills. These qualities also enabled him to excel in a zonal marking system, maintain a high defensive line, and play the offside trap, in particular during his time at Milan under Sacchi; indeed, Baresi came to be known for often raising his arm towards the linesman whenever his team attempted to play the offside trap. Baresi's passing range, technical ability and ball control allowed him to advance forward into the midfield to start attacking plays from the back, enabling him to function as a secondary playmaker for his team, and also play as a defensive or central midfielder when necessary. Despite being a defender, he was also an accurate penalty kick taker. Baresi was known for being a strong and accurate tackler, who was very good at winning back possession, and at anticipating and intercepting plays, due to his acute tactical intelligence, speed of thought, marking ability and positional sense. A precocious talent in his youth, throughout the course of his career, he also stood out for his professionalism, athleticism, longevity, and discipline in training, as well as his outstanding leadership, commanding presence on the pitch and his organisational skills; indeed, he captained both Milan and the Italy national team.
Baresi also shares the record of most own goals scored in Serie A history (eight, along with Riccardo Ferri).
On 1 June 2002, Baresi was officially appointed as director of football at Fulham, but tensions between Baresi and then Fulham manager Jean Tigana led to his resignation from the club in August.
He was appointed head coach of Milan's Primavera Under-20 squad. In 2006, he was moved by the club to coach the Berretti Under-19 squad, with his former teammate Filippo Galli replacing him at the helm of the Primavera squad. He retired from coaching and was replaced by Roberto Bertuzzo.
Franco Baresi is the younger brother of legendary Internazionale defender Giuseppe Baresi. As youngsters, both players had tryouts for Inter, but Franco was rejected and subsequently signed by local rivals Milan. As he was the younger player, Franco was initially known as "Baresi 2". However, due to Franco's eventual great success and popularity throughout his career, which surpassed even that of his older brother, Giuseppe later became known as "the other Baresi", despite also achieving notable success.
Baresi is featured in the EA Sports football video game series FIFA 14's Classic XI – a multi-national all-star team, along with compatriots Bruno Conti, Gianni Rivera and Giacinto Facchetti. He was also named in the Ultimate Team Legends in FIFA 15.
AC Milan
Individual
Orders | [
{
"paragraph_id": 0,
"text": "Franchino Baresi OMRI (Italian pronunciation: [ˈfraŋko baˈreːzi; -eːsi]; born 8 May 1960) is an Italian football youth team coach and a former player and manager. He mainly played as a sweeper or as a central defender, and spent his entire 20-year career with Serie A club AC Milan, captaining the club for 15 seasons. He is considered to be one of the best defenders in the history of the sport. He was ranked 19th in World Soccer magazine's list of the 100 greatest players of the 20th century. With Milan, he won three UEFA Champions League titles, six Serie A titles, four Supercoppa Italiana titles, two European Super Cups and two Intercontinental Cups.",
"title": ""
},
{
"paragraph_id": 1,
"text": "With the Italy national team, he was a member of the Italian squad that won the 1982 FIFA World Cup. He also played in the 1990 World Cup, where he was named in the FIFA World Cup All-Star Team, finishing third in the competition. At the 1994 World Cup, he was named Italy's captain and was part of the squad that reached the final, although he would miss a penalty in the resulting shoot-out as Brazil lifted the trophy. Baresi also represented Italy at two UEFA European Championships, in 1980 and 1988, and at the 1984 Olympics, reaching the semi-finals on each occasion.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The younger brother of former footballer Giuseppe Baresi, after joining the Milan senior team as a youngster, Franco Baresi was initially nicknamed \"Piscinin\", Milanese for \"little one\". Due to his skill and success, he was later known as \"Kaiser Franz\", a reference to fellow sweeper Franz Beckenbauer. In 1999, he was voted Milan's Player of the Century. After his final season at Milan in 1997, the club retired Baresi's shirt number 6. He was named by Pelé one of the 125 Greatest Living Footballers at the FIFA centenary awards ceremony in 2004. Baresi was inducted into the Italian Football Hall of Fame in 2013.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Baresi grew up in a farmstead on the outskirts of a small north Italian town, Travagliato. He did not watch football on television until he was 10.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "Originally an AC Milan youth product, Baresi went on to spend his entire 20-year professional career with Milan, making his Serie A debut at age 17 during the 1977–78 season on 23 April 1978. He had initially been rejected by the Internazionale youth team, who chose his brother Giuseppe instead, hence the Milan youth team signed Franco Baresi. The two brothers ended up captaining their respective teams shortly after, with their image while exchanging pennants became the trademark of Milan's derby della Madonnina throughout the 80s.",
"title": "Club career"
},
{
"paragraph_id": 5,
"text": "The following season, he was made a member of the starting 11, playing as a sweeper or as a centreback, winning the 1978–79 Serie A title, Milan's tenth overall, playing alongside Fabio Capello and Gianni Rivera.",
"title": "Club career"
},
{
"paragraph_id": 6,
"text": "This success was soon followed by a dark period in the club's history, when Milan was relegated to Serie B twice during the early 1980s. Milan were relegated in 1980 for being involved in the match fixing scandal of 1980, and once again after finishing third-last in the 1981–82 season, after having just returned to Serie A the previous season, after winning the 1980–81 Serie B title. Despite being a member of the Euro 1980 Italy squad that had finished fourth, and the 1982 World Cup-winning team, Baresi elected to stay with Milan, winning the Serie B title for the second time during the 1982–83 season and bringing Milan back to Serie A. After Aldo Maldera and Fulvio Collovati left the club in 1982, Baresi was appointed Milan's captain, at age 22, and would hold this position for much of his time at the club, becoming a symbol and a leader for the team. During this bleak period for Milan, Baresi did manage to win a Mitropa Cup in 1982 and reached the Coppa Italia final during 1984–85 season, although the team failed to dominate in Serie A. During the end of the 1980s and the first half of the 1990s, Baresi was at the heart of a notable all-Italian defence alongside Paolo Maldini, Alessandro Costacurta, Mauro Tassotti and later Christian Panucci, under managers Arrigo Sacchi and Fabio Capello, a defence which is regarded by many as one of the greatest of all time. When the attacking Dutch trio of Marco van Basten, Ruud Gullit and Frank Rijkaard arrived at the club in the late 1980s, Milan began a period of domestic and international triumphs, and between 1987 and 1996, at the height of the club's success, the Milan squad contained many Italian and international stars, such as Roberto Donadoni, Carlo Ancelotti, Marco van Basten, Ruud Gullit, Frank Rijkaard and later Demetrio Albertini, Dejan Savićević, Zvonimir Boban, Marcel Desailly, George Weah, Jean-Pierre Papin, Brian Laudrup and Roberto Baggio. Under Sacchi, Milan won the Serie A title in 1987–88, with Baresi helping Milan to concede only 14 goals. This title was immediately followed by a Supercoppa Italiana in 1988 the next season, and back-to-back European Cups in 1988–89 and 1989–90; In the 1990 European Cup Final, Baresi turned in a dominant performance as the team's captain, helping Milan to defend the European Cup title and keep a clean sheet in a 1–0 victory over Benfica. Baresi was also runner-up to teammate Van Basten for the Ballon d'Or in 1989, finishing ahead of his other teammate Frank Rijkaard, and was named Serie A Footballer of the Year in 1989–90. Milan also reached the Coppa Italia final during the 1989–90 season.",
"title": "Club career"
},
{
"paragraph_id": 7,
"text": "Baresi went on to win four more Serie A titles with Milan under Fabio Capello, including three consecutive titles in 1991–92, 1992–93 and the 1993–94 seasons. Baresi helped Milan win the 1991–92 title undefeated, helping Milan to go unbeaten for an Italian record of 58 matches. Milan also scored a record 74 goals that season. During the 1993–94 season, Baresi helped Milan concede a mere 15 goals in Serie A, helping the club to finish the season with the best defence. Baresi also won three consecutive Supercoppa Italiana under Capello, in 1992, 1993 and 1994. Milan also reached three consecutive UEFA Champions League finals during the 1992–93, 1993–94 and 1994–95 seasons, losing to Marseille in 1992–93 and Ajax in 1994–95. Baresi won the third European Cup/UEFA Champions League of his career in 1993–94 when Milan defeated Johan Cruyff's Barcelona \"Dream Team\" 4–0 in the final. Baresi also managed to win the 1994 European Super Cup, although Milan were defeated in the 1994 Intercontinental Cup, the 1993 European Super Cup and the 1993 Intercontinental Cup. Under Capello, Milan and Baresi were able to capture another Serie A title during 1995–96 season, Baresi's sixth.",
"title": "Club career"
},
{
"paragraph_id": 8,
"text": "Baresi retired at the end of the 1996–97 Serie A season, at age 37. In his 20 seasons with Milan, he won six Serie A titles, three European Cup/UEFA Champions League titles (reaching five finals in total), two Intercontinental Cups (four finals in total), four European Supercups (five finals in total), four Supercoppa Italiana (five finals in total), two Serie B titles and a Mitropa Cup. He scored 31 goals for Milan, 21 of which were on penalties, and, despite being a defender, he was the top scorer of the Coppa Italia during the 1989–90 season, the only trophy which he failed to win with Milan, reaching the final twice during his career. His final goal for Milan was scored in a 2–1 win against Padova on 27 August 1995. In his honour, Milan retired his number 6 shirt, which he had worn throughout his career. The captain's armband, which he had worn for 15 seasons, was handed over to Paolo Maldini. Milan organised a celebration match in his honour, which was played on 28 October 1997 at the San Siro, featuring many footballing stars.",
"title": "Club career"
},
{
"paragraph_id": 9,
"text": "At age 20, while still playing in the Italy under-21 side, Baresi was named in Italy's 22-man squad for the 1980 European Championship (along with his older brother Giuseppe) by manager Enzo Bearzot. The tournament was held on home soil and Italy finished fourth. However, unlike his brother, Franco Baresi did not play a single match in the tournament. Euro 1980 would be the only time the two brothers were on the Italy squad together at a major tournament. At age 22, Baresi was named in Italy's squad for the 1982 FIFA World Cup. The Azzurri won their third World Cup, defeating West Germany in the final, but Baresi, once again, was not selected to play a match throughout the tournament. Baresi was also a member of the Italy squad that took part in the 1984 Olympics. Italy finished in fourth place after a semi-final defeat to Brazil, and losing the bronze medal match to Yugoslavia. Baresi scored a goal against the United States during the group stage.",
"title": "International career"
},
{
"paragraph_id": 10,
"text": "Baresi won his first senior international cap in a 1984 UEFA Championship qualifying match against Romania in Florence, on 14 December 1982, a 0–0 draw. Italy, however, ultimately failed to qualify for the final tournament.",
"title": "International career"
},
{
"paragraph_id": 11,
"text": "Baresi was not included in Italy's squad for the 1986 World Cup by coach Enzo Bearzot, who saw him as being more of a midfielder than a defender (although his brother Giuseppe was selected as a defender for the World Cup, as well as Roberto Tricella). He returned to the team for the 1988 European Championship, playing as a sweeper, where Italy reached the semi-finals under Azeglio Vicini, becoming an undisputed first team member and playing in every match. He made his first appearance in a World Cup finals match in the 1990 tournament, which was held on home soil, and he played in every match as one of the starting centre-backs, as Italy finished in third-place, after being eliminated by defending champions Argentina in a penalty shootout in the semi-finals. Baresi helped the Italian defence to keep five consecutive clean sheets, only conceding two goals, and going unbeaten for a World Cup record of 518 minutes, until they were beaten by an Argentinian equaliser in the semi-final. His performances earned him a spot on the 1990 World Cup Team of the tournament.",
"title": "International career"
},
{
"paragraph_id": 12,
"text": "After replacing Giuseppe Bergomi as captain for the 1994 World Cup under his former manager at Milan, Arrigo Sacchi, Baresi sustained an injury to his meniscus in Italy's second group match, a 1–0 win against Norway, and missed most of the tournament. He returned to the squad 25 days later, in time for the final, with a dominant defensive performance, helping Italy to keep a clean sheet against Brazil, despite the key defensive absences of his Milan teammates Alessandro Costacurta and Mauro Tassotti. After a 0–0 deadlock following extra time, the match went to a penalty shootout, and Baresi subsequently missed his penalty, suffering from severe cramps and fatigue. Following misses by Daniele Massaro and Roberto Baggio, Italy were defeated by Brazil in the penalty shootout.",
"title": "International career"
},
{
"paragraph_id": 13,
"text": "Following the World Cup defeat, Baresi made one more appearance for Italy, in an away UEFA Euro 1996 qualifying match against Slovenia on 7 September 1994, which ended in a 1–1 draw. Baresi subsequently retired from the national side at age 34, passing the captain's armband to his Milan teammate Paolo Maldini. Baresi amassed 81 caps for Italy, scoring one goal in a friendly win against the Soviet Union, and he is one of seven players to have achieved the rare feat of winning Gold, Silver and Bronze FIFA World Cup medals during his international career.",
"title": "International career"
},
{
"paragraph_id": 14,
"text": "Baresi is regarded as one of the greatest defenders of all time. He played his entire 20-year career with Milan, becoming a club legend. At Milan, he formed one of the most formidable defensive units of all time, alongside Paolo Maldini, Alessandro Costacurta, Mauro Tassotti, Filippo Galli and later Christian Panucci. He was a complete and consistent defender who combined power with elegance and was gifted with outstanding physical and mental attributes, such as pace, strength, tenacity, concentration and stamina, which made him effective in the air, despite his lack of notable height for a centre-back.",
"title": "Style of play"
},
{
"paragraph_id": 15,
"text": "Although Baresi was capable of playing anywhere along the backline, he primarily excelled as a centreback and as sweeper, where he combined his defensive attributes, and his ability to read the game, with his excellent vision, technique, distribution and ball skills. These qualities also enabled him to excel in a zonal marking system, maintain a high defensive line, and play the offside trap, in particular during his time at Milan under Sacchi; indeed, Baresi came to be known for often raising his arm towards the linesman whenever his team attempted to play the offside trap. Baresi's passing range, technical ability and ball control allowed him to advance forward into the midfield to start attacking plays from the back, enabling him to function as a secondary playmaker for his team, and also play as a defensive or central midfielder when necessary. Despite being a defender, he was also an accurate penalty kick taker. Baresi was known for being a strong and accurate tackler, who was very good at winning back possession, and at anticipating and intercepting plays, due to his acute tactical intelligence, speed of thought, marking ability and positional sense. A precocious talent in his youth, throughout the course of his career, he also stood out for his professionalism, athleticism, longevity, and discipline in training, as well as his outstanding leadership, commanding presence on the pitch and his organisational skills; indeed, he captained both Milan and the Italy national team.",
"title": "Style of play"
},
{
"paragraph_id": 16,
"text": "Baresi also shares the record of most own goals scored in Serie A history (eight, along with Riccardo Ferri).",
"title": "Style of play"
},
{
"paragraph_id": 17,
"text": "On 1 June 2002, Baresi was officially appointed as director of football at Fulham, but tensions between Baresi and then Fulham manager Jean Tigana led to resignation from the club in August.",
"title": "Coaching career"
},
{
"paragraph_id": 18,
"text": "He was appointed head coach of Milan's Primavera Under-20 squad. In 2006, he was moved by the club to coach the Berretti Under-19 squad, with his former teammate Filippo Galli replacing him at the helm of the Primavera squad. He retired from coaching and was replaced by Roberto Bertuzzo.",
"title": "Coaching career"
},
{
"paragraph_id": 19,
"text": "Franco Baresi is the younger brother of Internazionale legendary defender Giuseppe Baresi. As youngsters, both players had tryouts for Inter, but Franco was rejected, and purchased by local rivals Milan. As he was the younger player, Franco was initially known as \"Baresi 2\". However, due to Franco's eventual great success and popularity throughout his career, which surpassed even that of his older brother's, Giuseppe later became known as \"the other Baresi\", despite also achieving notable success.",
"title": "Personal life"
},
{
"paragraph_id": 20,
"text": "Baresi is featured in the EA Sports football video game series FIFA 14's Classic XI – a multi-national all-star team, along with compatriots Bruno Conti, Gianni Rivera and Giacinto Facchetti. He was also named in the Ultimate Team Legends in FIFA 15.",
"title": "Media"
},
{
"paragraph_id": 21,
"text": "AC Milan",
"title": "Honours"
},
{
"paragraph_id": 22,
"text": "Individual",
"title": "Honours"
},
{
"paragraph_id": 23,
"text": "Orders",
"title": "Honours"
}
]
| Franchino Baresi is an Italian football youth team coach and a former player and manager. He mainly played as a sweeper or as a central defender, and spent his entire 20-year career with Serie A club AC Milan, captaining the club for 15 seasons. He is considered to be one of the best defenders in the history of the sport. He was ranked 19th in World Soccer magazine's list of the 100 greatest players of the 20th century. With Milan, he won three UEFA Champions League titles, six Serie A titles, four Supercoppa Italiana titles, two European Super Cups and two Intercontinental Cups. With the Italy national team, he was a member of the Italian squad that won the 1982 FIFA World Cup. He also played in the 1990 World Cup, where he was named in the FIFA World Cup All-Star Team, finishing third in the competition. At the 1994 World Cup, he was named Italy's captain and was part of the squad that reached the final, although he would miss a penalty in the resulting shoot-out as Brazil lifted the trophy. Baresi also represented Italy at two UEFA European Championships, in 1980 and 1988, and at the 1984 Olympics, reaching the semi-finals on each occasion. The younger brother of former footballer Giuseppe Baresi, after joining the Milan senior team as a youngster, Franco Baresi was initially nicknamed "Piscinin", Milanese for "little one". Due to his skill and success, he was later known as "Kaiser Franz", a reference to fellow sweeper Franz Beckenbauer. In 1999, he was voted Milan's Player of the Century. After his final season at Milan in 1997, the club retired Baresi's shirt number 6. He was named by Pelé one of the 125 Greatest Living Footballers at the FIFA centenary awards ceremony in 2004. Baresi was inducted into the Italian Football Hall of Fame in 2013. | 2001-10-26T05:22:07Z | 2023-12-24T20:19:51Z | [
"Template:Navboxes",
"Template:Authority control",
"Template:Infobox football biography",
"Template:Commons category",
"Template:FIFA",
"Template:Webarchive",
"Template:In lang",
"Template:Notelist",
"Template:Cite web",
"Template:Olympics.com",
"Template:Olympedia",
"Template:TuttoCalciatori",
"Template:Use dmy dates",
"Template:Postnominals",
"Template:Efn",
"Template:UEFA",
"Template:FootballDatabase.eu",
"Template:Cbignore",
"Template:Short description",
"Template:IPA-it",
"Template:Fb",
"Template:Reflist",
"Template:Cite news"
]
| https://en.wikipedia.org/wiki/Franco_Baresi |
10,857 | Stage (stratigraphy) | In chronostratigraphy, a stage is a succession of rock strata laid down in a single age on the geologic timescale, which usually represents millions of years of deposition. A given stage of rock and the corresponding age of time will by convention have the same name, and the same boundaries.
Rock series are divided into stages, just as geological epochs are divided into ages. Stages can be divided into smaller stratigraphic units called chronozones. (See chart at right for full terminology hierarchy.) Stages may also be divided into substages or indeed grouped as superstages.
The term faunal stage is sometimes used, referring to the fact that the same fauna (animals) are found throughout the layer (by definition).
Stages are primarily defined by a consistent set of fossils (biostratigraphy) or a consistent magnetic polarity (see paleomagnetism) in the rock. Usually one or more index fossils that are common, found worldwide, easily recognized, and limited to a single, or at most a few, stages are used to define the stage's bottom.
Thus, for example in the local North American subdivision, a paleontologist finding fragments of the trilobite Olenellus would identify the beds as being from the Waucoban Stage whereas fragments of a later trilobite such as Elrathia would identify the stage as Albertan.
Stages were important in the 19th and early 20th centuries as they were the major tool available for dating and correlating rock units prior to the development of seismology and radioactive dating in the second half of the 20th century. Microscopic analysis of the rock (petrology) is also sometimes useful in confirming that a given segment of rock is from a particular age.
Originally, faunal stages were only defined regionally; however, as additional stratigraphic and geochronologic tools were developed, stages were defined over broader and broader areas. More recently, the adjective "faunal" has been dropped as regional and global correlations of rock sequences have become relatively certain and there is less need for faunal labels to define the age of formations. A tendency developed to use European and, to a lesser extent, Asian stage names for the same time period worldwide, even though the faunas in other regions often had little in common with the stage as originally defined.
Boundaries and names are established by the International Commission on Stratigraphy (ICS) of the International Union of Geological Sciences. As of 2008, the ICS is nearly finished with a task begun in 1974, subdividing the Phanerozoic eonothem into internationally accepted stages using two types of benchmark. For younger stages, the benchmark is a Global Boundary Stratotype Section and Point (GSSP), a physical outcrop that clearly demonstrates the boundary. For older stages, it is a Global Standard Stratigraphic Age (GSSA), an absolute date. The benchmarks will give much greater certainty that date determinations can be compared with confidence, and such results will have broader scope than any evaluation based solely on local knowledge and conditions.
In many regions local subdivisions and classification criteria are still used along with the newer internationally coordinated uniform system, but once the research establishes a more complete international system, it is expected that local systems will be abandoned.
Stages can include many lithostratigraphic units (for example formations, beds, members, etc.) of differing rock types that were being laid down in different environments at the same time. In the same way, a lithostratigraphic unit can include a number of stages or parts of them. | [
{
"paragraph_id": 0,
"text": "In chronostratigraphy, a stage is a succession of rock strata laid down in a single age on the geologic timescale, which usually represents millions of years of deposition. A given stage of rock and the corresponding age of time will by convention have the same name, and the same boundaries.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Rock series are divided into stages, just as geological epochs are divided into ages. Stages can be divided into smaller stratigraphic units called chronozones. (See chart at right for full terminology hierarchy.) Stages may also be divided into substages or indeed grouped as superstages.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The term faunal stage is sometimes used, referring to the fact that the same fauna (animals) are found throughout the layer (by definition).",
"title": ""
},
{
"paragraph_id": 3,
"text": "Stages are primarily defined by a consistent set of fossils (biostratigraphy) or a consistent magnetic polarity (see paleomagnetism) in the rock. Usually one or more index fossils that are common, found worldwide, easily recognized, and limited to a single, or at most a few, stages are used to define the stage's bottom.",
"title": "Definition"
},
{
"paragraph_id": 4,
"text": "Thus, for example in the local North American subdivision, a paleontologist finding fragments of the trilobite Olenellus would identify the beds as being from the Waucoban Stage whereas fragments of a later trilobite such as Elrathia would identify the stage as Albertan.",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "Stages were important in the 19th and early 20th centuries as they were the major tool available for dating and correlating rock units prior to the development of seismology and radioactive dating in the second half of the 20th century. Microscopic analysis of the rock (petrology) is also sometimes useful in confirming that a given segment of rock is from a particular age.",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "Originally, faunal stages were only defined regionally; however, as additional stratigraphic and geochronologic tools were developed, stages were defined over broader and broader areas. More recently, the adjective \"faunal\" has been dropped as regional and global correlations of rock sequences have become relatively certain and there is less need for faunal labels to define the age of formations. A tendency developed to use European and, to a lesser extent, Asian stage names for the same time period worldwide, even though the faunas in other regions often had little in common with the stage as originally defined.",
"title": "Definition"
},
{
"paragraph_id": 7,
"text": "Boundaries and names are established by the International Commission on Stratigraphy (ICS) of the International Union of Geological Sciences. As of 2008, the ICS is nearly finished with a task begun in 1974, subdividing the Phanerozoic eonothem into internationally accepted stages using two types of benchmark. For younger stages, a Global Boundary Stratotype Section and Point (GSSP), a physical outcrop clearly demonstrates the boundary. For older stages, a Global Standard Stratigraphic Age (GSSA) is an absolute date. The benchmarks will give a much greater certainty that results can be compared with confidence in the date determinations, and such results will have farther scope than any evaluation based solely on local knowledge and conditions.",
"title": "International standardization"
},
{
"paragraph_id": 8,
"text": "In many regions local subdivisions and classification criteria are still used along with the newer internationally coordinated uniform system, but once the research establishes a more complete international system, it is expected that local systems will be abandoned.",
"title": "International standardization"
},
{
"paragraph_id": 9,
"text": "Stages can include many lithostratigraphic units (for example formations, beds, members, etc.) of differing rock types that were being laid down in different environments at the same time. In the same way, a lithostratigraphic unit can include a number of stages or parts of them.",
"title": "Stages and lithostratigraphy"
}
]
| In chronostratigraphy, a stage is a succession of rock strata laid down in a single age on the geologic timescale, which usually represents millions of years of deposition. A given stage of rock and the corresponding age of time will by convention have the same name, and the same boundaries. Rock series are divided into stages, just as geological epochs are divided into ages. Stages can be divided into smaller stratigraphic units called chronozones. Stages may also be divided into substages or indeed grouped as superstages. The term faunal stage is sometimes used, referring to the fact that the same fauna (animals) are found throughout the layer. | 2023-06-22T21:38:38Z | [
"Template:Reflist",
"Template:Cite web",
"Template:Cite journal",
"Template:Geological history",
"Template:Chronology",
"Template:Short description",
"Template:Geology to Paleobiology",
"Template:Update"
]
| https://en.wikipedia.org/wiki/Stage_(stratigraphy) |
|
10,858 | Franz Kafka | Franz Kafka (3 July 1883 – 3 June 1924) was a German-speaking Bohemian novelist and short-story writer based in Prague, who is widely regarded as one of the major figures of 20th-century literature. His work fuses elements of realism and the fantastic. It typically features isolated protagonists facing bizarre or surrealistic predicaments and incomprehensible socio-bureaucratic powers. It has been interpreted as exploring themes of alienation, existential anxiety, guilt, and absurdity. His best known works include the novella The Metamorphosis and novels The Trial and The Castle. The term Kafkaesque has entered English to describe absurd situations like those depicted in his writing.
Kafka was born into a middle-class German-speaking Czech Jewish family in Prague, the capital of the Kingdom of Bohemia, then part of the Austro-Hungarian Empire (today the capital of the Czech Republic). He trained as a lawyer, and after completing his legal education was employed full-time by an insurance company, forcing him to relegate writing to his spare time. Over the course of his life, Kafka wrote hundreds of letters to family and close friends, including his father, with whom he had a strained and formal relationship. He became engaged to several women but never married. He died in obscurity in 1924 at the age of 40 from tuberculosis.
Kafka was a prolific writer, spending most of his free time writing, often late in the night. He burned an estimated 90 per cent of his total work due to his persistent struggles with self-doubt. Much of the remaining 10 per cent is lost or otherwise unpublished. Few of Kafka's works were published during his lifetime: the story collections Contemplation and A Country Doctor, and individual stories (such as his novella The Metamorphosis) were published in literary magazines but received little public attention.
In his will, Kafka instructed his close friend and literary executor Max Brod to destroy his unfinished works, including his novels The Trial, The Castle, and Amerika, but Brod ignored these instructions and had much of his work published. Kafka's writings became famous in German-speaking countries after World War II, influencing their literature, and its influence spread elsewhere in the world in the 1960s. It has also influenced artists, composers, and philosophers.
Kafka was born near the Old Town Square in Prague, then part of the Austro-Hungarian Empire. His family were German-speaking middle-class Ashkenazi Jews. His father, Hermann Kafka (1854–1931), was the fourth child of Jakob Kafka, a shochet or ritual slaughterer in Osek, a Czech village with a large Jewish population located near Strakonice in southern Bohemia. Hermann brought the Kafka family to Prague. After working as a travelling sales representative, he eventually became a fashion retailer who employed up to 15 people and used the image of a jackdaw (kavka in Czech, pronounced and colloquially written as kafka) as his business logo. Kafka's mother, Julie (1856–1934), was the daughter of Jakob Löwy, a prosperous retail merchant in Poděbrady, and was better educated than her husband.
Kafka's parents probably spoke a German influenced by Yiddish that was sometimes pejoratively called Mauscheldeutsch, but, as German was considered the vehicle of social mobility, they probably encouraged their children to speak Standard German. Hermann and Julie had six children, of whom Franz was the eldest. Franz's two brothers, Georg and Heinrich, died in infancy before Franz was seven; his three sisters were Gabriele ("Elli") (1889–1944), Valerie ("Valli") (1890–1942) and Ottilie ("Ottla") (1892–1943). All three were murdered in the Holocaust of World War II. Valli was deported to the Łódź Ghetto in occupied Poland in 1942, but that is the last documentation of her; it is assumed she did not survive the war. Ottilie was Kafka's favourite sister.
Hermann is described by the biographer Stanley Corngold as a "huge, selfish, overbearing businessman" and by Franz Kafka as "a true Kafka in strength, health, appetite, loudness of voice, eloquence, self-satisfaction, worldly dominance, endurance, presence of mind, [and] knowledge of human nature". On business days, both parents were absent from the home, with Julie Kafka working as many as 12 hours each day helping to manage the family business. Consequently, Kafka's childhood was somewhat lonely, and the children were reared largely by a series of governesses and servants. Kafka's troubled relationship with his father is evident in his Brief an den Vater (Letter to His Father) of more than 100 pages, in which he complains of being profoundly affected by his father's authoritarian and demanding character; his mother, in contrast, was quiet and shy. The dominating figure of Kafka's father had a significant influence on Kafka's writing.
The Kafka family had a servant girl living with them in a cramped apartment. Franz's room was often cold. In November 1913 the family moved into a bigger apartment, although Elli and Valli had married and moved out of the first apartment. In early August 1914, just after World War I began, the sisters did not know where their husbands were in the military and moved back in with the family in this larger apartment. Both Elli and Valli also had children. Franz, at age 31, moved into Valli's former apartment, quiet by contrast, and lived by himself for the first time.
From 1889 to 1893, Kafka attended the Deutsche Knabenschule German boys' elementary school at the Masný trh/Fleischmarkt (meat market), now known as Masná Street. His Jewish education ended with his bar mitzvah celebration at the age of 13. Kafka never enjoyed attending the synagogue and went with his father only on four high holidays a year.
After leaving elementary school in 1893, Kafka was admitted to the rigorous classics-oriented state gymnasium, Altstädter Deutsches Gymnasium, an academic secondary school at Old Town Square, within the Kinský Palace. German was the language of instruction, but Kafka also spoke and wrote in Czech. He studied the latter at the gymnasium for eight years, achieving good grades. Although Kafka received compliments for his Czech, he never considered himself fluent in the language, though he spoke German with a Czech accent. He completed his Matura exams in 1901.
Admitted to the Deutsche Karl-Ferdinands-Universität of Prague in 1901, Kafka began studying chemistry but switched to law after two weeks. Although this field did not excite him, it offered a range of career possibilities which pleased his father. In addition, law required a longer course of study, giving Kafka time to take classes in German studies and art history. He also joined a student club, Lese- und Redehalle der Deutschen Studenten (Reading and Lecture Hall of the German students), which organised literary events, readings and other activities. Among Kafka's friends were the journalist Felix Weltsch, who studied philosophy, the actor Yitzchak Lowy who came from an orthodox Hasidic Warsaw family, and the writers Ludwig Winder, Oskar Baum and Franz Werfel.
At the end of his first year of studies, Kafka met Max Brod, a fellow law student who became a close friend for life. Years later, Brod coined the term Der enge Prager Kreis ("The Close Prague Circle") to describe the group of writers, which included Kafka, Felix Weltsch and Brod himself. Brod soon noticed that, although Kafka was shy and seldom spoke, what he said was usually profound. Kafka was an avid reader throughout his life; together he and Brod read Plato's Protagoras in the original Greek, on Brod's initiative, and Flaubert's L'éducation sentimentale and La Tentation de St. Antoine (The Temptation of Saint Anthony) in French, at his own suggestion. Kafka considered Fyodor Dostoyevsky, Gustav Flaubert, Nikolai Gogol, Franz Grillparzer, and Heinrich von Kleist to be his "true blood brothers". Besides these, he took an interest in Czech literature and was also very fond of the works of Goethe. Kafka was awarded the degree of Doctor of Law on 18 June 1906 and performed an obligatory year of unpaid service as law clerk for the civil and criminal courts.
On 1 November 1907, Kafka was employed at the Assicurazioni Generali, an insurance company, where he worked for nearly a year. His correspondence during that period indicates that he was unhappy with a work schedule—from 08:00 until 18:00—that made it extremely difficult to concentrate on writing, which was assuming increasing importance to him. On 15 July 1908, he resigned. Two weeks later, he found employment more amenable to writing when he joined the Worker's Accident Insurance Institute for the Kingdom of Bohemia. The job involved investigating and assessing compensation for personal injury to industrial workers; accidents such as lost fingers or limbs were commonplace, owing to poor work safety policies at the time. It was especially true of factories fitted with machine lathes, drills, planing machines and rotary saws, which were rarely fitted with safety guards.
The management professor Peter Drucker credits Kafka with developing the first civilian hard hat while employed at the Worker's Accident Insurance Institute, but this is not supported by any document from his employer. His father often referred to his son's job as an insurance officer as a Brotberuf, literally "bread job", a job done only to pay the bills; Kafka often claimed to despise it. Kafka was rapidly promoted and his duties included processing and investigating compensation claims, writing reports, and handling appeals from businessmen who thought their firms had been placed in too high a risk category, which cost them more in insurance premiums. He would compile and compose the annual report on the insurance institute for the several years he worked there. The reports were well received by his superiors. Kafka usually got off work at 2 p.m., so that he had time to spend on his literary work, to which he was committed. Kafka's father also expected him to help out at and take over the family fancy goods store. In his later years, Kafka's illness often prevented him from working at the insurance bureau and at his writing.
In late 1911, Elli's husband Karl Hermann and Kafka became partners in the first asbestos factory in Prague, known as Prager Asbestwerke Hermann & Co., having used dowry money from Hermann Kafka. Kafka showed a positive attitude at first, dedicating much of his free time to the business, but he later resented the encroachment of this work on his writing time. During that period, he also found interest and entertainment in the performances of Yiddish theatre. After seeing a Yiddish theatre troupe perform in October 1911, for the next six months Kafka "immersed himself in Yiddish language and in Yiddish literature". This interest also served as a starting point for his growing exploration of Judaism. It was at about this time that Kafka became a vegetarian. Around 1915, Kafka received his draft notice for military service in World War I, but his employers at the insurance institute arranged for a deferment because his work was considered essential government service. He later attempted to join the military but was prevented from doing so by medical problems associated with tuberculosis, with which he was diagnosed in 1917. In 1918, the Worker's Accident Insurance Institute put Kafka on a pension due to his illness, for which there was no cure at the time, and he spent most of the rest of his life in sanatoriums.
Kafka never married. According to Brod, Kafka was "tortured" by sexual desire, and Kafka's biographer Reiner Stach states that his life was full of "incessant womanising" and that he was filled with a fear of "sexual failure". Kafka visited brothels for most of his adult life and was interested in pornography. In addition, he had close relationships with several women during his lifetime. On 13 August 1912, Kafka met Felice Bauer, a relative of Brod's, who worked in Berlin as a representative of a dictaphone company. A week after the meeting at Brod's home, Kafka wrote in his diary:
Miss FB. When I arrived at Brod's on 13 August, she was sitting at the table. I was not at all curious about who she was, but rather took her for granted at once. Bony, empty face that wore its emptiness openly. Bare throat. A blouse thrown on. Looked very domestic in her dress although, as it turned out, she by no means was. (I alienate myself from her a little by inspecting her so closely ...) Almost broken nose. Blonde, somewhat straight, unattractive hair, strong chin. As I was taking my seat I looked at her closely for the first time, by the time I was seated I already had an unshakeable opinion.
Shortly after this meeting, Kafka wrote the story "Das Urteil" ("The Judgment") in only one night and in a productive period worked on Der Verschollene (The Man Who Disappeared) and Die Verwandlung (The Metamorphosis). Kafka and Felice Bauer communicated mostly through letters over the next five years, met occasionally, and were engaged twice. Kafka's extant letters to Bauer were published as Briefe an Felice (Letters to Felice); her letters do not survive. After he had written to Bauer's father asking to marry her, Kafka wrote in his diary:
My job is unbearable to me because it conflicts with my only desire and my only calling, which is literature.... I am nothing but literature and can and want to be nothing else.... Nervous states of the worst sort control me without pause.... A marriage could not change me, just as my job cannot change me.
According to the biographers Stach and James Hawes, Kafka became engaged a third time around 1920, to Julie Wohryzek, a poor and uneducated hotel chambermaid. Kafka's father objected to Julie because of her Zionist beliefs. Although Kafka and Julie rented a flat and set a wedding date, the marriage never took place. During this time, Kafka began a draft of Letter to His Father. Before the date of the intended marriage, he took up with yet another woman. While he needed women and sex in his life, he had low self-confidence, felt sex was dirty, and was cripplingly shy—especially about his body.
Stach and Brod state that during the time that Kafka knew Felice Bauer, he had an affair with a friend of hers, Margarethe "Grete" Bloch, a Jewish woman from Berlin. Brod says that Bloch gave birth to Kafka's son, although Kafka never knew about the child. The boy, whose name is not known, was born in 1914 or 1915 and died in Munich in 1921. However, Kafka's biographer Peter-André Alt says that, while Bloch had a son, Kafka was not the father, as the pair were never intimate. Stach points out that there is a great deal of contradictory evidence around the claim that Kafka was the father.
Kafka was diagnosed with tuberculosis in August 1917 and moved for a few months to the Bohemian village of Zürau (Siřem in Czech), where his sister Ottla worked on the farm of her brother-in-law Karl Hermann. He felt comfortable there and later described this time as perhaps the best period of his life, probably because he had no responsibilities. He kept diaries and Oktavhefte (octavo notebooks). From the notes in these books, Kafka extracted 109 numbered pieces of text on Zettel, single pieces of paper in no given order. They were later published as Die Zürauer Aphorismen oder Betrachtungen über Sünde, Hoffnung, Leid und den wahren Weg (The Zürau Aphorisms or Reflections on Sin, Hope, Suffering, and the True Way).
In 1920, Kafka began an intense relationship with Milena Jesenská, a Czech journalist and writer who was non-Jewish and who was married, but when she met Kafka, her marriage was a "sham". His letters to her were later published as Briefe an Milena. During a vacation in July 1923 to Graal-Müritz on the Baltic Sea, Kafka met Dora Diamant, a 25-year-old kindergarten teacher from an orthodox Jewish family. Kafka, hoping to escape the influence of his family to concentrate on his writing, moved briefly to Berlin (September 1923 – March 1924) and lived with Diamant. She became his lover and sparked his interest in the Talmud. He worked on four stories, including Ein Hungerkünstler (A Hunger Artist), which were published shortly after his death.
Kafka had a lifelong suspicion that people found him mentally and physically repulsive. However, many of those who met him found him to possess obvious intelligence and a sense of humour; they also found him handsome, although of austere appearance. Brod compared Kafka to Heinrich von Kleist, noting that both writers had the ability to describe a situation realistically with precise details. Brod thought Kafka was one of the most entertaining people he had met; Kafka enjoyed sharing humour with his friends, but also helped them in difficult situations with good advice. According to Brod, he was a passionate reciter, able to phrase his speech as though it were music. Brod felt that two of Kafka's most distinguishing traits were "absolute truthfulness" (absolute Wahrhaftigkeit) and "precise conscientiousness" (präzise Gewissenhaftigkeit). He explored details, the inconspicuous, in depth and with such love and precision that things surfaced that were unforeseen, seemingly strange, but absolutely true (nichts als wahr).
Although Kafka showed little interest in exercise as a child, he later developed a passion for games and physical activity, and was an accomplished rider, swimmer, and rower. On weekends, he and his friends embarked on long hikes, often planned by Kafka himself. His other interests included alternative medicine, modern education systems such as Montessori, and technological novelties such as airplanes and film. Writing was vitally important to Kafka; he considered it a "form of prayer". He was highly sensitive to noise and preferred absolute quiet when writing.
Pérez-Álvarez has claimed that Kafka had symptomatology consistent with schizoid personality disorder. His style, not only in Die Verwandlung (The Metamorphosis) but in other writings, appears to show low- to medium-level schizoid traits, which Pérez-Álvarez argues influenced much of his work. His anguish can be seen in this diary entry from 21 June 1913:
Die ungeheure Welt, die ich im Kopfe habe. Aber wie mich befreien und sie befreien, ohne zu zerreißen. Und tausendmal lieber zerreißen, als in mir sie zurückhalten oder begraben. Dazu bin ich ja hier, das ist mir ganz klar.
The tremendous world I have inside my head, but how to free myself and free it without being torn to pieces. And a thousand times rather be torn to pieces than retain it in me or bury it. That, indeed, is why I am here, that is quite clear to me.
and in Zürau Aphorism number 50:
Man cannot live without a permanent trust in something indestructible within himself, though both that indestructible something and his own trust in it may remain permanently concealed from him.
Alessia Coralli and Antonio Perciaccante of San Giovanni di Dio Hospital have posited that Kafka may have had borderline personality disorder with co-occurring psychophysiological insomnia. Joan Lachkar interpreted Die Verwandlung as "a vivid depiction of the borderline personality" and described the story as "model for Kafka's own abandonment fears, anxiety, depression, and parasitic dependency needs. Kafka illuminated the borderline's general confusion of normal and healthy desires, wishes, and needs with something ugly and disdainful."
Though Kafka never married, he held marriage and children in high esteem. He had several girlfriends and lovers across his life. He may have suffered from an eating disorder. Doctor Manfred M. Fichter of the Psychiatric Clinic, University of Munich, presented "evidence for the hypothesis that the writer Franz Kafka had suffered from an atypical anorexia nervosa", and that Kafka was not just lonely and depressed but also "occasionally suicidal". In his 1995 book Franz Kafka, the Jewish Patient, Sander Gilman investigated "why a Jew might have been considered 'hypochondriacal' or 'homosexual' and how Kafka incorporates aspects of these ways of understanding the Jewish male into his own self-image and writing". Kafka considered suicide at least once, in late 1912.
Before World War I, Kafka attended several meetings of the Klub mladých, a Czech anarchist, anti-militarist, and anti-clerical organization. Hugo Bergmann, who attended the same elementary and high schools as Kafka, fell out with Kafka during their last academic year (1900–1901) because "[Kafka's] socialism and my Zionism were much too strident". Bergmann said: "Franz became a socialist, I became a Zionist in 1898. The synthesis of Zionism and socialism did not yet exist." Bergmann claims that Kafka wore a red carnation to school to show his support for socialism. In one diary entry, Kafka made reference to the influential anarchist philosopher Peter Kropotkin: "Don't forget Kropotkin!"
During the communist era, the legacy of Kafka's work for Eastern Bloc socialism was hotly debated. Opinions ranged from the notion that he satirised the bureaucratic bungling of a crumbling Austro-Hungarian Empire, to the belief that he embodied the rise of socialism. A further key point was Marx's theory of alienation. While the orthodox position was that Kafka's depictions of alienation were no longer relevant for a society that had supposedly eliminated alienation, a 1963 conference held in Liblice, Czechoslovakia, on the eightieth anniversary of his birth, reassessed the importance of Kafka's portrayal of bureaucracy. Whether or not Kafka was a political writer is still an issue of debate.
Kafka grew up in Prague as a German-speaking Jew. He was deeply fascinated by the Jews of Eastern Europe, who he thought possessed an intensity of spiritual life that was absent from Jews in the West. His diary contains many references to Yiddish writers. Yet he was at times alienated from Judaism and Jewish life. On 8 January 1914, he wrote in his diary:
Was habe ich mit Juden gemeinsam? Ich habe kaum etwas mit mir gemeinsam und sollte mich ganz still, zufrieden damit daß ich atmen kann in einen Winkel stellen. (What have I in common with Jews? I have hardly anything in common with myself and should stand very quietly in a corner, content that I can breathe.)
In his adolescent years, Kafka declared himself an atheist.
Hawes suggests that Kafka, though very aware of his own Jewishness, did not incorporate it into his work, which, according to Hawes, lacks Jewish characters, scenes or themes. In the opinion of literary critic Harold Bloom, although Kafka was uneasy with his Jewish heritage, he was the quintessential Jewish writer. Lothar Kahn is likewise unequivocal: "The presence of Jewishness in Kafka's oeuvre is no longer subject to doubt". Pavel Eisner, one of Kafka's first translators, interprets Der Process (The Trial) as the embodiment of the "triple dimension of Jewish existence in Prague ... his protagonist Josef K. is (symbolically) arrested by a German (Rabensteiner), a Czech (Kullich), and a Jew (Kaminer). He stands for the 'guiltless guilt' that imbues the Jew in the modern world, although there is no evidence that he himself is a Jew".
In his essay Sadness in Palestine?!, Dan Miron explores Kafka's connection to Zionism: "It seems that those who claim that there was such a connection and that Zionism played a central role in his life and literary work, and those who deny the connection altogether or dismiss its importance, are both wrong. The truth lies in some very elusive place between these two simplistic poles." Kafka considered moving to Palestine with Felice Bauer, and later with Dora Diamant. He studied Hebrew while living in Berlin, hiring a friend of Brod's from Palestine, Pua Bat-Tovim, to tutor him and attending Rabbi Julius Grünthal and Rabbi Julius Guttmann's classes in the Berlin Hochschule für die Wissenschaft des Judentums (College for the Study of Judaism).
Livia Rothkirchen calls Kafka the "symbolic figure of his era". His contemporaries included numerous Jewish, Czech, and German writers who were sensitive to Jewish, Czech, and German culture. According to Rothkirchen, "This situation lent their writings a broad cosmopolitan outlook and a quality of exaltation bordering on transcendental metaphysical contemplation. An illustrious example is Franz Kafka".
Towards the end of his life Kafka sent a postcard to his friend Hugo Bergmann in Tel Aviv, announcing his intention to emigrate to Palestine. Bergmann refused to host Kafka because he had young children and was afraid that Kafka would infect them with tuberculosis.
Kafka's laryngeal tuberculosis worsened and in March 1924 he returned from Berlin to Prague, where members of his family, principally his sister Ottla and Dora Diamant, took care of him. He went to Hugo Hoffmann's sanatorium in Kierling just outside Vienna for treatment on 10 April, and died there on 3 June 1924. The cause of death seemed to be starvation: the condition of Kafka's throat made eating too painful for him, and since parenteral nutrition had not yet been developed, there was no way to feed him. Kafka was editing "A Hunger Artist" on his deathbed, a story whose composition he had begun before his throat closed to the point that he could not take any nourishment. His body was brought back to Prague where he was buried on 11 June 1924, in the New Jewish Cemetery in Prague-Žižkov. Kafka was virtually unknown during his own lifetime, but he did not consider fame important. He rose to fame rapidly after his death, particularly after World War II. The Kafka tombstone was designed by architect Leopold Ehrmann.
All of Kafka's published works, except some letters he wrote in Czech to Milena Jesenská, were written in German. What little was published during his lifetime attracted scant public attention.
Kafka finished none of his full-length novels and burned around 90 percent of his work, much of it during the period he lived in Berlin with Diamant, who helped him burn the drafts. In his early years as a writer he was influenced by von Kleist, whose work he described in a letter to Bauer as frightening and whom he considered closer to him than his own family.
Kafka drew and sketched extensively. Until May 2021, only about 40 of his drawings were known. In 2022, Yale University Press published Franz Kafka: The Drawings.
Kafka's earliest published works were eight stories which appeared in 1908 in the first issue of the literary journal Hyperion under the title Betrachtung (Contemplation). He wrote the story "Beschreibung eines Kampfes" ("Description of a Struggle") in 1904; he showed it to Brod in 1905 who advised him to continue writing and convinced him to submit it to Hyperion. Kafka published a fragment in 1908 and two sections in the spring of 1909, all in Munich.
In a creative outburst on the night of 22 September 1912, Kafka wrote the story "Das Urteil" ("The Judgment", literally: "The Verdict") and dedicated it to Felice Bauer. Brod noted the similarity in names of the main character and his fictional fiancée, Georg Bendemann and Frieda Brandenfeld, to Franz Kafka and Felice Bauer. The story is often considered Kafka's breakthrough work. It deals with the troubled relationship of a son and his dominant father, facing a new situation after the son's engagement. Kafka later described writing it as "a complete opening of body and soul", a story that "evolved as a true birth, covered with filth and slime". The story was first published in Leipzig in 1912 and dedicated "to Miss Felice Bauer", and in subsequent editions "for F."
In 1912, Kafka wrote Die Verwandlung (The Metamorphosis, or The Transformation), published in 1915 in Leipzig. The story begins with a travelling salesman waking to find himself transformed into an ungeheures Ungeziefer, a monstrous vermin, Ungeziefer being a general term for unwanted and unclean pests, especially insects. Critics regard the work as one of the seminal works of fiction of the 20th century. The story "In der Strafkolonie" ("In the Penal Colony"), dealing with an elaborate torture and execution device, was written in October 1914, revised in 1918, and published in Leipzig during October 1919. The story "Ein Hungerkünstler" ("A Hunger Artist"), published in the periodical Die neue Rundschau in 1924, describes a victimized protagonist who experiences a decline in the appreciation of his strange craft of starving himself for extended periods. His last story, "Josefine, die Sängerin oder Das Volk der Mäuse" ("Josephine the Singer, or the Mouse Folk"), also deals with the relationship between an artist and his audience.
Kafka began his first novel in 1912; its first chapter is the story "Der Heizer" ("The Stoker"). He called the work, which remained unfinished, Der Verschollene (The Man Who Disappeared or The Missing Person), but when Brod published it after Kafka's death he named it Amerika. The inspiration for the novel was the time Kafka spent in the audience of Yiddish theatre the previous year, bringing him to a new awareness of his heritage, which led to the thought that an innate appreciation for one's heritage lives deep within each person. More explicitly humorous and slightly more realistic than most of Kafka's works, the novel shares the motif of an oppressive and intangible system putting the protagonist repeatedly in bizarre situations. It uses many details of experiences from his relatives who had emigrated to America and is the only work for which Kafka considered an optimistic ending.
In 1914 Kafka began the novel Der Process (The Trial), the story of a man arrested and prosecuted by a remote, inaccessible authority, with the nature of his crime revealed neither to him nor to the reader. He did not complete the novel, although he finished the final chapter. According to Nobel Prize winner and Kafka scholar Elias Canetti, Felice is central to the plot of Der Process and Kafka said it was "her story". Canetti titled his book on Kafka's letters to Felice Kafka's Other Trial, in recognition of the relationship between the letters and the novel. Michiko Kakutani notes in a review for The New York Times that Kafka's letters have the "earmarks of his fiction: the same nervous attention to minute particulars; the same paranoid awareness of shifting balances of power; the same atmosphere of emotional suffocation—combined, surprisingly enough, with moments of boyish ardour and delight."
According to his diary, Kafka was already planning his novel Das Schloss (The Castle), by 11 June 1914; however, he did not begin writing it until 27 January 1922. The protagonist is the Landvermesser (land surveyor) named K., who struggles for unknown reasons to gain access to the mysterious authorities of a castle who govern the village. Kafka's intent was that the castle's authorities notify K. on his deathbed that his "legal claim to live in the village was not valid, yet, taking certain auxiliary circumstances into account, he was to be permitted to live and work there". Dark and at times surreal, the novel is focused on alienation, bureaucracy, the seemingly endless frustrations of man's attempts to stand against the system, and the futile and hopeless pursuit of an unattainable goal. Hartmut M. Rastalsky noted in his thesis: "Like dreams, his texts combine precise 'realistic' detail with absurdity, careful observation and reasoning on the part of the protagonists with inexplicable obliviousness and carelessness."
Kafka's stories were initially published in literary periodicals. His first eight were printed in 1908 in the first issue of the bi-monthly Hyperion. Franz Blei published two dialogues in 1909 which became part of "Beschreibung eines Kampfes" ("Description of a Struggle"). A fragment of the story "Die Aeroplane in Brescia" ("The Aeroplanes at Brescia"), written on a trip to Italy with Brod, appeared in the daily Bohemia on 28 September 1909. On 27 March 1910, several stories that later became part of the book Betrachtung were published in the Easter edition of Bohemia. In Leipzig during 1913, Brod and publisher Kurt Wolff included "Das Urteil. Eine Geschichte von Franz Kafka." ("The Judgment. A Story by Franz Kafka.") in Arkadia, their literary yearbook for the art of poetry. In the same year, Wolff published "Der Heizer" ("The Stoker") in the Jüngste Tag series, where it enjoyed three printings. The story "Vor dem Gesetz" ("Before the Law") was published in the 1915 New Year's edition of the independent Jewish weekly Selbstwehr; it was reprinted in 1919 as part of the story collection Ein Landarzt (A Country Doctor) and became part of the novel Der Process. Other stories were published in various publications, including Martin Buber's Der Jude, the paper Prager Tagblatt, and the periodicals Die neue Rundschau, Genius, and Prager Presse.
Kafka's first published book, Betrachtung (Contemplation, or Meditation), was a collection of 18 stories written between 1904 and 1912. On a summer trip to Weimar, Brod initiated a meeting between Kafka and Kurt Wolff; Wolff published Betrachtung in the Rowohlt Verlag at the end of 1912 (with the year given as 1913). Kafka dedicated it to Brod, "Für M.B.", and added in the personal copy given to his friend "So wie es hier schon gedruckt ist, für meinen liebsten Max—Franz K." ("As it is already printed here, for my dearest Max").
Kafka's novella Die Verwandlung (The Metamorphosis) was first printed in the October 1915 issue of Die Weißen Blätter, a monthly edition of expressionist literature, edited by René Schickele. Another story collection, Ein Landarzt (A Country Doctor), was published by Kurt Wolff in 1919, dedicated to Kafka's father. Kafka prepared a final collection of four stories for print, Ein Hungerkünstler (A Hunger Artist), which appeared in 1924 after his death, in Verlag Die Schmiede. On 20 April 1924, the Berliner Börsen-Courier published Kafka's essay on Adalbert Stifter.
Kafka left his work, both published and unpublished, to his friend and literary executor Max Brod with explicit instructions that it should be destroyed on Kafka's death; Kafka wrote: "Dearest Max, my last request: Everything I leave behind me ... in the way of diaries, manuscripts, letters (my own and others'), sketches, and so on, [is] to be burned unread." Brod ignored this request and published the novels and collected works between 1925 and 1935. Brod defended his action by claiming that he had told Kafka, "I shall not carry out your wishes", and that "Franz should have appointed another executor if he had been absolutely determined that his instructions should stand".
Brod took many of Kafka's papers, which remain unpublished, with him in suitcases to Palestine when he fled there in 1939. Kafka's last lover, Dora Diamant (later, Dymant-Lask), also ignored his wishes, secretly keeping 20 notebooks and 35 letters. These were confiscated by the Gestapo in 1933, but scholars continue to search for them.
As Brod published the bulk of the writings in his possession, Kafka's work began to attract wider attention and critical acclaim. Brod found it difficult to arrange Kafka's notebooks in chronological order. One problem was that Kafka often began writing in different parts of the book; sometimes in the middle, sometimes working backwards from the end. Brod finished many of Kafka's incomplete works for publication. For example, Kafka left Der Process with unnumbered and incomplete chapters and Das Schloss with incomplete sentences and ambiguous content; Brod rearranged chapters, copy-edited the text, and changed the punctuation. Der Process appeared in 1925 in Verlag Die Schmiede. Kurt Wolff published two other novels, Das Schloss in 1926 and Amerika in 1927. In 1931, Brod edited a collection of prose and unpublished stories as Beim Bau der Chinesischen Mauer (The Great Wall of China), including the story of the same name. The book appeared in the Gustav Kiepenheuer Verlag. Brod's sets are usually called the "Definitive Editions".
In 1961 Malcolm Pasley acquired for the Oxford Bodleian Library most of Kafka's original handwritten works. The text for Der Process was later purchased through auction and is stored at the German Literary Archives in Marbach am Neckar, Germany. Subsequently, Pasley headed a team (including Gerhard Neumann, Jost Schillemeit and Jürgen Born) which reconstructed the German novels; S. Fischer Verlag republished them. Pasley was the editor for Das Schloss, published in 1982, and Der Process (The Trial), published in 1990. Jost Schillemeit was the editor of Der Verschollene (Amerika) published in 1983. These are called the "Critical Editions" or the "Fischer Editions".
In 2023, the first unexpurgated edition of Kafka's diaries was published in English, "more than three decades after this complete text appeared in German. The sole previous English edition, with Brod’s edits, was issued in the late 1940s".
When Brod died in 1968, he left Kafka's unpublished papers, which are believed to number in the thousands, to his secretary Esther Hoffe. She released or sold some, but left most to her daughters, Eva and Ruth, who also refused to release the papers. A court battle began in 2008 between the sisters and the National Library of Israel, which claimed these works became the property of the nation of Israel when Brod emigrated to British Palestine in 1939. Esther Hoffe had sold the original manuscript of Der Process for US$2 million in 1988 to the German Literary Archive Museum of Modern Literature in Marbach am Neckar. A ruling by a Tel Aviv family court in 2010 held that the papers must be released and a few were, including a previously unknown story, but the legal battle continued. The Hoffes claimed the papers were their personal property, while the National Library of Israel argued that they were "cultural assets belonging to the Jewish people" and that Brod had bequeathed the papers to it in his will. The Tel Aviv Family Court ruled in October 2012, six months after Ruth's death, that the papers were the property of the National Library. The Israeli Supreme Court upheld the decision in December 2016.
The poet W. H. Auden called Kafka "the Dante of the twentieth century"; the novelist Vladimir Nabokov placed him among the greatest writers of the 20th century. Gabriel García Márquez noted the reading of Kafka's The Metamorphosis showed him "that it was possible to write in a different way". A prominent theme of Kafka's work, first established in the short story "Das Urteil", is father–son conflict: the guilt induced in the son is resolved through suffering and atonement. Other prominent themes and archetypes include alienation, physical and psychological brutality, characters on a terrifying quest, and mystical transformation.
Kafka's style has been compared to that of Kleist as early as 1916, in a review of "Die Verwandlung" and "Der Heizer" by Oscar Walzel in Berliner Beiträge. The nature of Kafka's prose allows for varied interpretations and critics have placed his writing into a variety of literary schools. Marxists, for example, have sharply disagreed over how to interpret Kafka's works. Some accused him of distorting reality whereas others claimed he was critiquing capitalism. The hopelessness and absurdity common to his works are seen as emblematic of existentialism. Some of Kafka's books are influenced by the expressionist movement, though the majority of his literary output was associated with the experimental modernist genre. Kafka also touches on the theme of human conflict with bureaucracy. William Burrows claims that such work is centred on the concepts of struggle, pain, solitude, and the need for relationships. Others, such as Thomas Mann, see Kafka's work as allegorical: a quest, metaphysical in nature, for God.
According to Gilles Deleuze and Félix Guattari, the themes of alienation and persecution, although present in Kafka's work, have been overemphasised by critics. They argue that Kafka's work is more deliberate and subversive—and more joyful—than may first appear. They point out that reading Kafka while focusing on the futility of his characters' struggles reveals Kafka's humour; he is not necessarily commenting on his own problems, but rather pointing out how people tend to invent problems. In his work, Kafka often creates malevolent, absurd worlds. Kafka read drafts of his works to his friends, typically concentrating on his humorous prose. The writer Milan Kundera suggests that Kafka's surrealist humour may have been an inversion of Dostoyevsky's presentation of characters who are punished for a crime. In Kafka's work, a character is punished although a crime has not been committed. Kundera believes that Kafka's inspirations for his characteristic situations came both from growing up in a patriarchal family and from living in a totalitarian state.
Attempts have been made to identify the influence of Kafka's legal background and the role of law in his fiction. Most interpretations identify aspects of law and legality as important in his work, in which the legal system is often oppressive. The law in Kafka's works, rather than being representative of any particular legal or political entity, is usually interpreted to represent a collection of anonymous, incomprehensible forces. These are hidden from the individual but control the lives of the people, who are innocent victims of systems beyond their control. Critics who support this absurdist interpretation cite instances where Kafka describes himself in conflict with an absurd universe, such as the following entry from his diary:
Enclosed in my own four walls, I found myself as an immigrant imprisoned in a foreign country;... I saw my family as strange aliens whose foreign customs, rites, and very language defied comprehension;... though I did not want it, they forced me to participate in their bizarre rituals;... I could not resist.
However, James Hawes argues many of Kafka's descriptions of the legal proceedings in Der Process—metaphysical, absurd, bewildering and nightmarish as they might appear—are based on accurate and informed descriptions of German and Austrian criminal proceedings of the time, which were inquisitorial rather than adversarial. Although he worked in insurance, as a trained lawyer Kafka was "keenly aware of the legal debates of his day". In an early 21st-century publication that uses Kafka's office writings as its point of departure, Pothik Ghosh states that with Kafka, law "has no meaning outside its fact of being a pure force of domination and determination".
The first instance of Kafka being translated into English was in 1925, when William A. Drake published "A Report for an Academy" in The New York Herald Tribune. Eugene Jolas translated Kafka's "The Judgment" for the modernist journal transition in 1928. In 1930, Edwin and Willa Muir translated the first German edition of Das Schloss. This was published as The Castle by Secker & Warburg in England and Alfred A. Knopf in the United States. A 1941 edition, including a homage by Thomas Mann, spurred a surge in Kafka's popularity in the United States during the late 1940s. The Muirs translated all the shorter works that Kafka had seen fit to print; Schocken Books published them in 1948 as The Penal Colony: Stories and Short Pieces. The volume additionally included The First Long Train Journey, written by Kafka and Brod; Kafka's "A Novel about Youth", a review of Felix Sternheim's Die Geschichte des jungen Oswald; his essay on Kleist's "Anecdotes"; his review of the literary magazine Hyperion; and an epilogue by Brod.
Later editions, notably those of 1954 (Dearest Father: Stories and Other Writings), included text translated by Eithne Wilkins and Ernst Kaiser that had been deleted by earlier publishers. Known as the "Definitive Editions", they include translations of The Trial and The Castle, among other writings. These translations are generally accepted to have a number of biases and are considered to be dated in interpretation. Parables and Paradoxes, published by Schocken Books in 1961 and edited by Nahum N. Glatzer, presented selected writings in a bilingual edition, drawn from notebooks, diaries, letters, short fictional works, and the novel Der Process.
New translations were later completed and published based on the recompiled German texts of Pasley and Schillemeit: The Castle (Critical Edition) by Mark Harman (Schocken Books, 1998), The Trial (Critical Edition) by Breon Mitchell (Schocken Books, 1998), The Man Who Disappeared (Amerika) by Michael Hofmann (Penguin Books, 1996), and Amerika: The Missing Person by Mark Harman (Schocken Books, 2008).
Kafka often made extensive use of a characteristic particular to German, which permits long sentences that can sometimes span an entire page. Kafka's sentences then deliver an unexpected impact just before the full stop—this being the finalizing meaning and focus. This is due to the construction of subordinate clauses in German, which require that the verb be at the end of the sentence. Such constructions are difficult to duplicate in English, so it is up to the translator to provide the reader with the same (or at least an equivalent) effect as the original text. German's more flexible word order and syntactical differences provide for multiple ways in which the same German writing can be translated into English. An example is the first sentence of Kafka's The Metamorphosis, which is crucial to the setting and understanding of the entire story:

Als Gregor Samsa eines Morgens aus unruhigen Träumen erwachte, fand er sich in seinem Bett zu einem ungeheuren Ungeziefer verwandelt.

English translators have rendered it in various ways, for example: "As Gregor Samsa awoke one morning from uneasy dreams, he found himself transformed in his bed into a monstrous vermin."
The sentence above also exemplifies another difficult problem facing translators: dealing with the author's intentional use of ambiguous idioms and words that have several meanings, which results in phrasing that is difficult to translate precisely. English translators often render the word Ungeziefer as 'insect'; in Middle High German, however, Ungeziefer literally means 'an animal unclean for sacrifice'; in today's German, it means 'vermin'. It is sometimes used colloquially to mean 'bug'—a very general term, unlike the scientific 'insect'. Kafka had no intention of labeling Gregor, the protagonist of the story, as any specific thing but instead wanted to convey Gregor's disgust at his transformation. Another example of this can be found in the final sentence of "Das Urteil" ("The Judgement"), with Kafka's use of the German noun Verkehr. Literally, Verkehr means 'intercourse' and, as in English, can have either a sexual or a non-sexual meaning. The word is additionally used to mean 'transport' or 'traffic'; the sentence can therefore also be translated as: "At that moment an unending stream of traffic crossed over the bridge." The double meaning of Verkehr is given added weight by Kafka's confession to Brod that when he wrote that final line he was thinking of "a violent ejaculation."
Unlike many famous writers, Kafka is rarely quoted by others. Instead, he is noted more for his visions and perspective. Kafka had a strong influence on Gabriel García Márquez, Milan Kundera and the novel The Palace of Dreams by Ismail Kadare. Shimon Sandbank, a professor, literary critic, and writer, also identifies Kafka as having influenced Jorge Luis Borges, Albert Camus, Eugène Ionesco, J. M. Coetzee and Jean-Paul Sartre. A Financial Times literary critic credits Kafka with influencing José Saramago, and Al Silverman, a writer and editor, states that J. D. Salinger loved to read Kafka's works. The Romanian writer Mircea Cărtărescu said "Kafka is the author I love the most and who means, for me, the gate to literature"; he also described Kafka as "the saint of literature". Kafka has been cited as an influence on the Swedish writer Stig Dagerman, and the Japanese writer Haruki Murakami, who paid homage to Kafka in his novel Kafka on the Shore with the namesake protagonist.
In 1999 a committee of 99 authors, scholars, and literary critics ranked Der Process and Das Schloss the second and ninth most significant German-language novels of the 20th century. Harold Bloom said "when he is most himself, Kafka gives us a continuous inventiveness and originality that rivals Dante and truly challenges Proust and Joyce as that of the dominant Western author of our century". Sandbank argues that despite Kafka's pervasiveness, his enigmatic style has yet to be emulated. Neil Christian Pages, a professor of German Studies and Comparative Literature at Binghamton University who specialises in Kafka's works, says Kafka's influence transcends literature and literary scholarship; it impacts visual arts, music, and popular culture. Harry Steinhauer, a professor of German and Jewish literature, says that Kafka "has made a more powerful impact on literate society than any other writer of the twentieth century". Brod said that the 20th century will one day be known as the "century of Kafka".
Michel-André Bossy writes that Kafka created a rigidly inflexible and sterile bureaucratic universe. Kafka wrote in an aloof manner full of legal and scientific terms. Yet his serious universe also had insightful humour, all highlighting the "irrationality at the roots of a supposedly rational world". His characters are trapped, confused, full of guilt, frustrated, and lacking understanding of their surreal world. Much post-Kafka fiction, especially science fiction, follows the themes and precepts of Kafka's universe. This can be seen in the works of authors such as George Orwell and Ray Bradbury.
Works across a range of dramatic, literary, and musical genres demonstrate the extent of Kafka's cultural influence.
The term "Kafkaesque" is used to describe concepts and situations reminiscent of Kafka's work, particularly Der Process (The Trial) and Die Verwandlung (The Metamorphosis). Examples include instances in which bureaucracies overpower people, often in a surreal, nightmarish milieu that evokes feelings of senselessness, disorientation, and helplessness. Characters in a Kafkaesque setting often lack a clear course of action to escape a labyrinthine situation. Kafkaesque elements often appear in existential works, but the term has transcended the literary realm to apply to real-life occurrences and situations that are incomprehensibly complex, bizarre, or illogical.
Numerous films and television works have been described as Kafkaesque, and the style is particularly prominent in dystopian science fiction. Works in this genre that have been thus described include Patrick Bokanowski's film The Angel (1982), Terry Gilliam's film Brazil (1985), and Alex Proyas' science fiction film noir, Dark City (1998). Films from other genres which have been similarly described include Roman Polanski's The Tenant (1976) and the Coen brothers' Barton Fink (1991). The television series The Prisoner and The Twilight Zone are also frequently described as Kafkaesque.
With common usage, however, the term has become so ubiquitous that Kafka scholars note it is often misused. According to author Ben Marcus, paraphrased in "What it Means to be Kafkaesque" by Joe Fassler in The Atlantic, "Kafka's quintessential qualities are affecting use of language, a setting that straddles fantasy and reality, and a sense of striving even in the face of bleakness—hopelessly and full of hope."
3412 Kafka is an asteroid from the inner regions of the asteroid belt, approximately 6 kilometers in diameter. It was discovered on 10 January 1983 by American astronomers Randolph Kirk and Donald Rudy at Palomar Observatory in California, United States, and named after Kafka by them.
Apache Kafka, an open-source stream processing platform originally released in January 2011, is named after Kafka.
The Franz Kafka Museum in Prague is dedicated to Kafka and his work. A major component of the museum is an exhibit, The City of K. Franz Kafka and Prague, which was first shown in Barcelona in 1999, moved to the Jewish Museum in New York City, and finally established in Prague in Malá Strana (Lesser Town), along the Moldau, in 2005. The Franz Kafka Museum calls its display of original photos and documents Město K. Franz Kafka a Praha ("City K. Kafka and Prague") and aims to immerse the visitor in the world in which Kafka lived and about which he wrote.
The Franz Kafka Prize, established in 2001, is an annual literary award of the Franz Kafka Society and the City of Prague. It recognizes the merits of literature as "humanistic character and contribution to cultural, national, language and religious tolerance, its existential, timeless character, its generally human validity, and its ability to hand over a testimony about our times". The selection committee and recipients come from all over the world, but are limited to living authors who have had at least one work published in Czech. The recipient receives $10,000, a diploma, and a bronze statuette at a presentation in Prague's Old Town Hall, on the Czech State Holiday in late October.
San Diego State University operates the Kafka Project, which began in 1998 as the official international search for Kafka's last writings.
Kafka Dome is an off-axis oceanic core complex in the central Atlantic named after Kafka.
"text": "The management professor Peter Drucker credits Kafka with developing the first civilian hard hat while employed at the Worker's Accident Insurance Institute, but this is not supported by any document from his employer. His father often referred to his son's job as an insurance officer as a Brotberuf, literally \"bread job\", a job done only to pay the bills; Kafka often claimed to despise it. Kafka was rapidly promoted and his duties included processing and investigating compensation claims, writing reports, and handling appeals from businessmen who thought their firms had been placed in too high a risk category, which cost them more in insurance premiums. He would compile and compose the annual report on the insurance institute for the several years he worked there. The reports were well received by his superiors. Kafka usually got off work at 2 p.m., so that he had time to spend on his literary work, to which he was committed. Kafka's father also expected him to help out at and take over the family fancy goods store. In his later years, Kafka's illness often prevented him from working at the insurance bureau and at his writing.",
"title": "Life"
},
{
"paragraph_id": 14,
"text": "In late 1911, Elli's husband Karl Hermann and Kafka became partners in the first asbestos factory in Prague, known as Prager Asbestwerke Hermann & Co., having used dowry money from Hermann Kafka. Kafka showed a positive attitude at first, dedicating much of his free time to the business, but he later resented the encroachment of this work on his writing time. During that period, he also found interest and entertainment in the performances of Yiddish theatre. After seeing a Yiddish theatre troupe perform in October 1911, for the next six months Kafka \"immersed himself in Yiddish language and in Yiddish literature\". This interest also served as a starting point for his growing exploration of Judaism. It was at about this time that Kafka became a vegetarian. Around 1915, Kafka received his draft notice for military service in World War I, but his employers at the insurance institute arranged for a deferment because his work was considered essential government service. He later attempted to join the military but was prevented from doing so by medical problems associated with tuberculosis, with which he was diagnosed in 1917. In 1918, the Worker's Accident Insurance Institute put Kafka on a pension due to his illness, for which there was no cure at the time, and he spent most of the rest of his life in sanatoriums.",
"title": "Life"
},
{
"paragraph_id": 15,
"text": "Kafka never married. According to Brod, Kafka was \"tortured\" by sexual desire, and Kafka's biographer Reiner Stach states that his life was full of \"incessant womanising\" and that he was filled with a fear of \"sexual failure\". Kafka visited brothels for most of his adult life and was interested in pornography. In addition, he had close relationships with several women during his lifetime. On 13 August 1912, Kafka met Felice Bauer, a relative of Brod's, who worked in Berlin as a representative of a dictaphone company. A week after the meeting at Brod's home, Kafka wrote in his diary:",
"title": "Life"
},
{
"paragraph_id": 16,
"text": "Miss FB. When I arrived at Brod's on 13 August, she was sitting at the table. I was not at all curious about who she was, but rather took her for granted at once. Bony, empty face that wore its emptiness openly. Bare throat. A blouse thrown on. Looked very domestic in her dress although, as it turned out, she by no means was. (I alienate myself from her a little by inspecting her so closely ...) Almost broken nose. Blonde, somewhat straight, unattractive hair, strong chin. As I was taking my seat I looked at her closely for the first time, by the time I was seated I already had an unshakeable opinion.",
"title": "Life"
},
{
"paragraph_id": 17,
"text": "Shortly after this meeting, Kafka wrote the story \"Das Urteil\" (\"The Judgment\") in only one night and in a productive period worked on Der Verschollene (The Man Who Disappeared) and Die Verwandlung (The Metamorphosis). Kafka and Felice Bauer communicated mostly through letters over the next five years, met occasionally, and were engaged twice. Kafka's extant letters to Bauer were published as Briefe an Felice (Letters to Felice); her letters do not survive. After he had written to Bauer's father asking to marry her, Kafka wrote in his diary:",
"title": "Life"
},
{
"paragraph_id": 18,
"text": "My job is unbearable to me because it conflicts with my only desire and my only calling, which is literature.... I am nothing but literature and can and want to be nothing else.... Nervous states of the worst sort control me without pause.... A marriage could not change me, just as my job cannot change me.",
"title": "Life"
},
{
"paragraph_id": 19,
"text": "According to the biographers Stach and James Hawes, Kafka became engaged a third time around 1920, to Julie Wohryzek, a poor and uneducated hotel chambermaid. Kafka's father objected to Julie because of her Zionist beliefs. Although Kafka and Julie rented a flat and set a wedding date, the marriage never took place. During this time, Kafka began a draft of Letter to His Father. Before the date of the intended marriage, he took up with yet another woman. While he needed women and sex in his life, he had low self-confidence, felt sex was dirty, and was cripplingly shy—especially about his body.",
"title": "Life"
},
{
"paragraph_id": 20,
"text": "Stach and Brod state that during the time that Kafka knew Felice Bauer, he had an affair with a friend of hers, Margarethe \"Grete\" Bloch, a Jewish woman from Berlin. Brod says that Bloch gave birth to Kafka's son, although Kafka never knew about the child. The boy, whose name is not known, was born in 1914 or 1915 and died in Munich in 1921. However, Kafka's biographer Peter-André Alt says that, while Bloch had a son, Kafka was not the father, as the pair were never intimate. Stach points out that there is a great deal of contradictory evidence around the claim that Kafka was the father.",
"title": "Life"
},
{
"paragraph_id": 21,
"text": "Kafka was diagnosed with tuberculosis in August 1917 and moved for a few months to the Bohemian village of Zürau (Siřem in Czech), where his sister Ottla worked on the farm of her brother-in-law Karl Hermann. He felt comfortable there and later described this time as perhaps the best period of his life, probably because he had no responsibilities. He kept diaries and Oktavhefte (octavo). From the notes in these books, Kafka extracted 109 numbered pieces of text on Zettel, single pieces of paper in no given order. They were later published as Die Zürauer Aphorismen oder Betrachtungen über Sünde, Hoffnung, Leid und den wahren Weg (The Zürau Aphorisms or Reflections on Sin, Hope, Suffering, and the True Way).",
"title": "Life"
},
{
"paragraph_id": 22,
"text": "In 1920, Kafka began an intense relationship with Milena Jesenská, a Czech journalist and writer who was non-Jewish and who was married, but when she met Kafka, her marriage was a \"sham\". His letters to her were later published as Briefe an Milena. During a vacation in July 1923 to Graal-Müritz on the Baltic Sea, Kafka met Dora Diamant, a 25-year-old kindergarten teacher from an orthodox Jewish family. Kafka, hoping to escape the influence of his family to concentrate on his writing, moved briefly to Berlin (September 1923-March 1924) and lived with Diamant. She became his lover and sparked his interest in the Talmud. He worked on four stories, including Ein Hungerkünstler (A Hunger Artist), which were published shortly after his death.",
"title": "Life"
},
{
"paragraph_id": 23,
"text": "Kafka had a lifelong suspicion that people found him mentally and physically repulsive. However, many of those who met him found him to possess obvious intelligence and a sense of humour; they also found him handsome, although of austere appearance. Brod compared Kafka to Heinrich von Kleist, noting that both writers had the ability to describe a situation realistically with precise details. Brod thought Kafka was one of the most entertaining people he had met; Kafka enjoyed sharing humour with his friends, but also helped them in difficult situations with good advice. According to Brod, he was a passionate reciter, able to phrase his speech as though it were music. Brod felt that two of Kafka's most distinguishing traits were \"absolute truthfulness\" (absolute Wahrhaftigkeit) and \"precise conscientiousness\" (präzise Gewissenhaftigkeit). He explored details, the inconspicuous, in depth and with such love and precision that things surfaced that were unforeseen, seemingly strange, but absolutely true (nichts als wahr).",
"title": "Life"
},
{
"paragraph_id": 24,
"text": "Although Kafka showed little interest in exercise as a child, he later developed a passion for games and physical activity, and was an accomplished rider, swimmer, and rower. On weekends, he and his friends embarked on long hikes, often planned by Kafka himself. His other interests included alternative medicine, modern education systems such as Montessori, and technological novelties such as airplanes and film. Writing was vitally important to Kafka; he considered it a \"form of prayer\". He was highly sensitive to noise and preferred absolute quiet when writing.",
"title": "Life"
},
{
"paragraph_id": 25,
"text": "Pérez-Álvarez has claimed that Kafka had symptomatology consistent with schizoid personality disorder. His style, it is claimed, not only in Die Verwandlung (The Metamorphosis) but in other writings, appears to show low- to medium-level schizoid traits, which Pérez-Álvarez claims to have influenced much of his work. His anguish can be seen in this diary entry from 21 June 1913:",
"title": "Life"
},
{
"paragraph_id": 26,
"text": "Die ungeheure Welt, die ich im Kopfe habe. Aber wie mich befreien und sie befreien, ohne zu zerreißen. Und tausendmal lieber zerreißen, als in mir sie zurückhalten oder begraben. Dazu bin ich ja hier, das ist mir ganz klar.",
"title": "Life"
},
{
"paragraph_id": 27,
"text": "The tremendous world I have inside my head, but how to free myself and free it without being torn to pieces. And a thousand times rather be torn to pieces than retain it in me or bury it. That, indeed, is why I am here, that is quite clear to me.",
"title": "Life"
},
{
"paragraph_id": 28,
"text": "and in Zürau Aphorism number 50:",
"title": "Life"
},
{
"paragraph_id": 29,
"text": "Man cannot live without a permanent trust in something indestructible within himself, though both that indestructible something and his own trust in it may remain permanently concealed from him.",
"title": "Life"
},
{
"paragraph_id": 30,
"text": "Alessia Coralli and Antonio Perciaccante of San Giovanni di Dio Hospital have posited that Kafka may have had borderline personality disorder with co-occurring psychophysiological insomnia. Joan Lachkar interpreted Die Verwandlung as \"a vivid depiction of the borderline personality\" and described the story as \"model for Kafka's own abandonment fears, anxiety, depression, and parasitic dependency needs. Kafka illuminated the borderline's general confusion of normal and healthy desires, wishes, and needs with something ugly and disdainful.\"",
"title": "Life"
},
{
"paragraph_id": 31,
"text": "Though Kafka never married, he held marriage and children in high esteem. He had several girlfriends and lovers across his life. He may have suffered from an eating disorder. Doctor Manfred M. Fichter of the Psychiatric Clinic, University of Munich, presented \"evidence for the hypothesis that the writer Franz Kafka had suffered from an atypical anorexia nervosa\", and that Kafka was not just lonely and depressed but also \"occasionally suicidal\". In his 1995 book Franz Kafka, the Jewish Patient, Sander Gilman investigated \"why a Jew might have been considered 'hypochondriacal' or 'homosexual' and how Kafka incorporates aspects of these ways of understanding the Jewish male into his own self-image and writing\". Kafka considered suicide at least once, in late 1912.",
"title": "Life"
},
{
"paragraph_id": 32,
"text": "Before World War I, Kafka attended several meetings of the Klub mladých, a Czech anarchist, anti-militarist, and anti-clerical organization. Hugo Bergmann, who attended the same elementary and high schools as Kafka, fell out with Kafka during their last academic year (1900–1901) because \"[Kafka's] socialism and my Zionism were much too strident\". Bergmann said: \"Franz became a socialist, I became a Zionist in 1898. The synthesis of Zionism and socialism did not yet exist.\" Bergmann claims that Kafka wore a red carnation to school to show his support for socialism. In one diary entry, Kafka made reference to the influential anarchist philosopher Peter Kropotkin: \"Don't forget Kropotkin!\"",
"title": "Life"
},
{
"paragraph_id": 33,
"text": "During the communist era, the legacy of Kafka's work for Eastern Bloc socialism was hotly debated. Opinions ranged from the notion that he satirised the bureaucratic bungling of a crumbling Austro-Hungarian Empire, to the belief that he embodied the rise of socialism. A further key point was Marx's theory of alienation. While the orthodox position was that Kafka's depictions of alienation were no longer relevant for a society that had supposedly eliminated alienation, a 1963 conference held in Liblice, Czechoslovakia, on the eightieth anniversary of his birth, reassessed the importance of Kafka's portrayal of bureaucracy. Whether or not Kafka was a political writer is still an issue of debate.",
"title": "Life"
},
{
"paragraph_id": 34,
"text": "Kafka grew up in Prague as a German-speaking Jew. He was deeply fascinated by the Jews of Eastern Europe, who he thought possessed an intensity of spiritual life that was absent from Jews in the West. His diary contains many references to Yiddish writers. Yet he was at times alienated from Judaism and Jewish life. On 8 January 1914, he wrote in his diary:",
"title": "Life"
},
{
"paragraph_id": 35,
"text": "Was habe ich mit Juden gemeinsam? Ich habe kaum etwas mit mir gemeinsam und sollte mich ganz still, zufrieden damit daß ich atmen kann in einen Winkel stellen. (What have I in common with Jews? I have hardly anything in common with myself and should stand very quietly in a corner, content that I can breathe.)",
"title": "Life"
},
{
"paragraph_id": 36,
"text": "In his adolescent years, Kafka declared himself an atheist.",
"title": "Life"
},
{
"paragraph_id": 37,
"text": "Hawes suggests that Kafka, though very aware of his own Jewishness, did not incorporate it into his work, which, according to Hawes, lacks Jewish characters, scenes or themes. In the opinion of literary critic Harold Bloom, although Kafka was uneasy with his Jewish heritage, he was the quintessential Jewish writer. Lothar Kahn is likewise unequivocal: \"The presence of Jewishness in Kafka's oeuvre is no longer subject to doubt\". Pavel Eisner, one of Kafka's first translators, interprets Der Process (The Trial) as the embodiment of the \"triple dimension of Jewish existence in Prague ... his protagonist Josef K. is (symbolically) arrested by a German (Rabensteiner), a Czech (Kullich), and a Jew (Kaminer). He stands for the 'guiltless guilt' that imbues the Jew in the modern world, although there is no evidence that he himself is a Jew\".",
"title": "Life"
},
{
"paragraph_id": 38,
"text": "In his essay Sadness in Palestine?!, Dan Miron explores Kafka's connection to Zionism: \"It seems that those who claim that there was such a connection and that Zionism played a central role in his life and literary work, and those who deny the connection altogether or dismiss its importance, are both wrong. The truth lies in some very elusive place between these two simplistic poles.\" Kafka considered moving to Palestine with Felice Bauer, and later with Dora Diamant. He studied Hebrew while living in Berlin, hiring a friend of Brod's from Palestine, Pua Bat-Tovim, to tutor him and attending Rabbi Julius Grünthal and Rabbi Julius Guttmann's classes in the Berlin Hochschule für die Wissenschaft des Judentums (College for the Study of Judaism).",
"title": "Life"
},
{
"paragraph_id": 39,
"text": "Livia Rothkirchen calls Kafka the \"symbolic figure of his era\". His contemporaries included numerous Jewish, Czech, and German writers who were sensitive to Jewish, Czech, and German culture. According to Rothkirchen, \"This situation lent their writings a broad cosmopolitan outlook and a quality of exaltation bordering on transcendental metaphysical contemplation. An illustrious example is Franz Kafka\".",
"title": "Life"
},
{
"paragraph_id": 40,
"text": "Towards the end of his life Kafka sent a postcard to his friend Hugo Bergmann in Tel Aviv, announcing his intention to emigrate to Palestine. Bergmann refused to host Kafka because he had young children and was afraid that Kafka would infect them with tuberculosis.",
"title": "Life"
},
{
"paragraph_id": 41,
"text": "Kafka's laryngeal tuberculosis worsened and in March 1924 he returned from Berlin to Prague, where members of his family, principally his sister Ottla and Dora Diamant, took care of him. He went to Hugo Hoffmann's sanatorium in Kierling just outside Vienna for treatment on 10 April, and died there on 3 June 1924. The cause of death seemed to be starvation: the condition of Kafka's throat made eating too painful for him, and since parenteral nutrition had not yet been developed, there was no way to feed him. Kafka was editing \"A Hunger Artist\" on his deathbed, a story whose composition he had begun before his throat closed to the point that he could not take any nourishment. His body was brought back to Prague where he was buried on 11 June 1924, in the New Jewish Cemetery in Prague-Žižkov. Kafka was virtually unknown during his own lifetime, but he did not consider fame important. He rose to fame rapidly after his death, particularly after World War II. The Kafka tombstone was designed by architect Leopold Ehrmann.",
"title": "Death"
},
{
"paragraph_id": 42,
"text": "All of Kafka's published works, except some letters he wrote in Czech to Milena Jesenská, were written in German. What little was published during his lifetime attracted scant public attention.",
"title": "Works"
},
{
"paragraph_id": 43,
"text": "Kafka finished none of his full-length novels and burned around 90 percent of his work, much of it during the period he lived in Berlin with Diamant, who helped him burn the drafts. In his early years as a writer he was influenced by von Kleist, whose work he described in a letter to Bauer as frightening and whom he considered closer than his own family.",
"title": "Works"
},
{
"paragraph_id": 44,
"text": "Kafka drew and sketched extensively. Until May 2021, only about 40 of his drawings were known. In 2022, Yale University Press published Franz Kafka: The Drawings.",
"title": "Works"
},
{
"paragraph_id": 45,
"text": "Kafka's earliest published works were eight stories which appeared in 1908 in the first issue of the literary journal Hyperion under the title Betrachtung (Contemplation). He wrote the story \"Beschreibung eines Kampfes\" (\"Description of a Struggle\") in 1904; he showed it to Brod in 1905 who advised him to continue writing and convinced him to submit it to Hyperion. Kafka published a fragment in 1908 and two sections in the spring of 1909, all in Munich.",
"title": "Works"
},
{
"paragraph_id": 46,
"text": "In a creative outburst on the night of 22 September 1912, Kafka wrote the story \"Das Urteil\" (\"The Judgment\", literally: \"The Verdict\") and dedicated it to Felice Bauer. Brod noted the similarity in names of the main character and his fictional fiancée, Georg Bendemann and Frieda Brandenfeld, to Franz Kafka and Felice Bauer. The story is often considered Kafka's breakthrough work. It deals with the troubled relationship of a son and his dominant father, facing a new situation after the son's engagement. Kafka later described writing it as \"a complete opening of body and soul\", a story that \"evolved as a true birth, covered with filth and slime\". The story was first published in Leipzig in 1912 and dedicated \"to Miss Felice Bauer\", and in subsequent editions \"for F.\"",
"title": "Works"
},
{
"paragraph_id": 47,
"text": "In 1912, Kafka wrote Die Verwandlung (The Metamorphosis, or The Transformation), published in 1915 in Leipzig. The story begins with a travelling salesman waking to find himself transformed into an ungeheures Ungeziefer, a monstrous vermin, Ungeziefer being a general term for unwanted and unclean pests, especially insects. Critics regard the work as one of the seminal works of fiction of the 20th century. The story \"In der Strafkolonie\" (\"In the Penal Colony\"), dealing with an elaborate torture and execution device, was written in October 1914, revised in 1918, and published in Leipzig during October 1919. The story \"Ein Hungerkünstler\" (\"A Hunger Artist\"), published in the periodical Die neue Rundschau in 1924, describes a victimized protagonist who experiences a decline in the appreciation of his strange craft of starving himself for extended periods. His last story, \"Josefine, die Sängerin oder Das Volk der Mäuse\" (\"Josephine the Singer, or the Mouse Folk\"), also deals with the relationship between an artist and his audience.",
"title": "Works"
},
{
"paragraph_id": 48,
"text": "Kafka began his first novel in 1912; its first chapter is the story \"Der Heizer\" (\"The Stoker\"). He called the work, which remained unfinished, Der Verschollene (The Man Who Disappeared or The Missing Person), but when Brod published it after Kafka's death he named it Amerika. The inspiration for the novel was the time Kafka spent in the audience of Yiddish theatre the previous year, bringing him to a new awareness of his heritage, which led to the thought that an innate appreciation for one's heritage lives deep within each person. More explicitly humorous and slightly more realistic than most of Kafka's works, the novel shares the motif of an oppressive and intangible system putting the protagonist repeatedly in bizarre situations. It uses many details of experiences from his relatives who had emigrated to America and is the only work for which Kafka considered an optimistic ending.",
"title": "Works"
},
{
"paragraph_id": 49,
"text": "In 1914 Kafka began the novel Der Process (The Trial), the story of a man arrested and prosecuted by a remote, inaccessible authority, with the nature of his crime revealed neither to him nor to the reader. He did not complete the novel, although he finished the final chapter. According to Nobel Prize winner and Kafka scholar Elias Canetti, Felice is central to the plot of Der Process and Kafka said it was \"her story\". Canetti titled his book on Kafka's letters to Felice Kafka's Other Trial, in recognition of the relationship between the letters and the novel. Michiko Kakutani notes in a review for The New York Times that Kafka's letters have the \"earmarks of his fiction: the same nervous attention to minute particulars; the same paranoid awareness of shifting balances of power; the same atmosphere of emotional suffocation—combined, surprisingly enough, with moments of boyish ardour and delight.\"",
"title": "Works"
},
{
"paragraph_id": 50,
"text": "According to his diary, Kafka was already planning his novel Das Schloss (The Castle), by 11 June 1914; however, he did not begin writing it until 27 January 1922. The protagonist is the Landvermesser (land surveyor) named K., who struggles for unknown reasons to gain access to the mysterious authorities of a castle who govern the village. Kafka's intent was that the castle's authorities notify K. on his deathbed that his \"legal claim to live in the village was not valid, yet, taking certain auxiliary circumstances into account, he was to be permitted to live and work there\". Dark and at times surreal, the novel is focused on alienation, bureaucracy, the seemingly endless frustrations of man's attempts to stand against the system, and the futile and hopeless pursuit of an unattainable goal. Hartmut M. Rastalsky noted in his thesis: \"Like dreams, his texts combine precise 'realistic' detail with absurdity, careful observation and reasoning on the part of the protagonists with inexplicable obliviousness and carelessness.\"",
"title": "Works"
},
{
"paragraph_id": 51,
"text": "Kafka's stories were initially published in literary periodicals. His first eight were printed in 1908 in the first issue of the bi-monthly Hyperion. Franz Blei published two dialogues in 1909 which became part of \"Beschreibung eines Kampfes\" (\"Description of a Struggle\"). A fragment of the story \"Die Aeroplane in Brescia\" (\"The Aeroplanes at Brescia\"), written on a trip to Italy with Brod, appeared in the daily Bohemia on 28 September 1909. On 27 March 1910, several stories that later became part of the book Betrachtung were published in the Easter edition of Bohemia. In Leipzig during 1913, Brod and publisher Kurt Wolff included \"Das Urteil. Eine Geschichte von Franz Kafka.\" (\"The Judgment. A Story by Franz Kafka.\") in their literary yearbook for the art of poetry, Arkadia. In the same year, Wolff published \"Der Heizer\" (\"The Stoker\") in the Jüngste Tag series, where it enjoyed three printings. The story \"Vor dem Gesetz\" (\"Before the Law\") was published in the 1915 New Year's edition of the independent Jewish weekly Selbstwehr; it was reprinted in 1919 as part of the story collection Ein Landarzt (A Country Doctor) and became part of the novel Der Process. Other stories were published in various publications, including Martin Buber's Der Jude, the paper Prager Tagblatt, and the periodicals Die neue Rundschau, Genius, and Prager Presse.",
"title": "Works"
},
{
"paragraph_id": 52,
"text": "Kafka's first published book, Betrachtung (Contemplation, or Meditation), was a collection of 18 stories written between 1904 and 1912. On a summer trip to Weimar, Brod initiated a meeting between Kafka and Kurt Wolff; Wolff published Betrachtung in the Rowohlt Verlag at the end of 1912 (with the year given as 1913). Kafka dedicated it to Brod, \"Für M.B.\", and added in the personal copy given to his friend \"So wie es hier schon gedruckt ist, für meinen liebsten Max—Franz K.\" (\"As it is already printed here, for my dearest Max\").",
"title": "Works"
},
{
"paragraph_id": 53,
"text": "Kafka's novella Die Verwandlung (The Metamorphosis) was first printed in the October 1915 issue of Die Weißen Blätter, a monthly edition of expressionist literature, edited by René Schickele. Another story collection, Ein Landarzt (A Country Doctor), was published by Kurt Wolff in 1919, dedicated to Kafka's father. Kafka prepared a final collection of four stories for print, Ein Hungerkünstler (A Hunger Artist), which appeared in 1924 after his death, in Verlag Die Schmiede. On 20 April 1924, the Berliner Börsen-Courier published Kafka's essay on Adalbert Stifter.",
"title": "Works"
},
{
"paragraph_id": 54,
"text": "Kafka left his work, both published and unpublished, to his friend and literary executor Max Brod with explicit instructions that it should be destroyed on Kafka's death; Kafka wrote: \"Dearest Max, my last request: Everything I leave behind me ... in the way of diaries, manuscripts, letters (my own and others'), sketches, and so on, [is] to be burned unread.\" Brod ignored this request and published the novels and collected works between 1925 and 1935. Brod defended his action by claiming that he had told Kafka, \"I shall not carry out your wishes\", and that \"Franz should have appointed another executor if he had been absolutely determined that his instructions should stand\".",
"title": "Works"
},
{
"paragraph_id": 55,
"text": "Brod took many of Kafka's papers, which remain unpublished, with him in suitcases to Palestine when he fled there in 1939. Kafka's last lover, Dora Diamant (later, Dymant-Lask), also ignored his wishes, secretly keeping 20 notebooks and 35 letters. These were confiscated by the Gestapo in 1933, but scholars continue to search for them.",
"title": "Works"
},
{
"paragraph_id": 56,
"text": "As Brod published the bulk of the writings in his possession, Kafka's work began to attract wider attention and critical acclaim. Brod found it difficult to arrange Kafka's notebooks in chronological order. One problem was that Kafka often began writing in different parts of the book; sometimes in the middle, sometimes working backwards from the end. Brod finished many of Kafka's incomplete works for publication. For example, Kafka left Der Process with unnumbered and incomplete chapters and Das Schloss with incomplete sentences and ambiguous content; Brod rearranged chapters, copy-edited the text, and changed the punctuation. Der Process appeared in 1925 in Verlag Die Schmiede. Kurt Wolff published two other novels, Das Schloss in 1926 and Amerika in 1927. In 1931, Brod edited a collection of prose and unpublished stories as Beim Bau der Chinesischen Mauer (The Great Wall of China), including the story of the same name. The book appeared in the Gustav Kiepenheuer Verlag. Brod's sets are usually called the \"Definitive Editions\".",
"title": "Works"
},
{
"paragraph_id": 57,
"text": "In 1961 Malcolm Pasley acquired for the Oxford Bodleian Library most of Kafka's original handwritten works. The text for Der Process was later purchased through auction and is stored at the German Literary Archives in Marbach am Neckar, Germany. Subsequently, Pasley headed a team (including Gerhard Neumann, Jost Schillemeit and Jürgen Born) which reconstructed the German novels; S. Fischer Verlag republished them. Pasley was the editor for Das Schloss, published in 1982, and Der Process (The Trial), published in 1990. Jost Schillemeit was the editor of Der Verschollene (Amerika) published in 1983. These are called the \"Critical Editions\" or the \"Fischer Editions\".",
"title": "Works"
},
{
"paragraph_id": 58,
"text": "In 2023, the first unexpurgated edition of Kafka's diaries was published in English, \"more than three decades after this complete text appeared in German. The sole previous English edition, with Brod’s edits, was issued in the late 1940s\".",
"title": "Works"
},
{
"paragraph_id": 59,
"text": "When Brod died in 1968, he left Kafka's unpublished papers, which are believed to number in the thousands, to his secretary Esther Hoffe. She released or sold some, but left most to her daughters, Eva and Ruth, who also refused to release the papers. A court battle began in 2008 between the sisters and the National Library of Israel, which claimed these works became the property of the nation of Israel when Brod emigrated to British Palestine in 1939. Esther Hoffe sold the original manuscript of Der Process for US$2 million in 1988 to the German Literary Archive Museum of Modern Literature in Marbach am Neckar. A ruling by a Tel Aviv family court in 2010 held that the papers must be released and a few were, including a previously unknown story, but the legal battle continued. The Hoffes claim the papers are their personal property, while the National Library of Israel argues they are \"cultural assets belonging to the Jewish people\". The National Library also suggests that Brod bequeathed the papers to them in his will. The Tel Aviv Family Court ruled in October 2012, six months after Ruth's death, that the papers were the property of the National Library. The Israeli Supreme Court upheld the decision in December 2016.",
"title": "Works"
},
{
"paragraph_id": 60,
"text": "The poet W. H. Auden called Kafka \"the Dante of the twentieth century\"; the novelist Vladimir Nabokov placed him among the greatest writers of the 20th century. Gabriel García Márquez noted the reading of Kafka's The Metamorphosis showed him \"that it was possible to write in a different way\". A prominent theme of Kafka's work, first established in the short story \"Das Urteil\", is father–son conflict: the guilt induced in the son is resolved through suffering and atonement. Other prominent themes and archetypes include alienation, physical and psychological brutality, characters on a terrifying quest, and mystical transformation.",
"title": "Critical response"
},
{
"paragraph_id": 61,
"text": "Kafka's style has been compared to that of Kleist as early as 1916, in a review of \"Die Verwandlung\" and \"Der Heizer\" by Oscar Walzel in Berliner Beiträge. The nature of Kafka's prose allows for varied interpretations and critics have placed his writing into a variety of literary schools. Marxists, for example, have sharply disagreed over how to interpret Kafka's works. Some accused him of distorting reality whereas others claimed he was critiquing capitalism. The hopelessness and absurdity common to his works are seen as emblematic of existentialism. Some of Kafka's books are influenced by the expressionist movement, though the majority of his literary output was associated with the experimental modernist genre. Kafka also touches on the theme of human conflict with bureaucracy. William Burrows claims that such work is centred on the concepts of struggle, pain, solitude, and the need for relationships. Others, such as Thomas Mann, see Kafka's work as allegorical: a quest, metaphysical in nature, for God.",
"title": "Critical response"
},
{
"paragraph_id": 62,
"text": "According to Gilles Deleuze and Félix Guattari, the themes of alienation and persecution, although present in Kafka's work, have been overemphasised by critics. They argue that Kafka's work is more deliberate and subversive—and more joyful—than may first appear. They point out that reading Kafka while focusing on the futility of his characters' struggles reveals Kafka's humour; he is not necessarily commenting on his own problems, but rather pointing out how people tend to invent problems. In his work, Kafka often creates malevolent, absurd worlds. Kafka read drafts of his works to his friends, typically concentrating on his humorous prose. The writer Milan Kundera suggests that Kafka's surrealist humour may have been an inversion of Dostoyevsky's presentation of characters who are punished for a crime. In Kafka's work, a character is punished although a crime has not been committed. Kundera believes that Kafka's inspirations for his characteristic situations came both from growing up in a patriarchal family and from living in a totalitarian state.",
"title": "Critical response"
},
{
"paragraph_id": 63,
"text": "Attempts have been made to identify the influence of Kafka's legal background and the role of law in his fiction. Most interpretations identify aspects of law and legality as important in his work, in which the legal system is often oppressive. The law in Kafka's works, rather than being representative of any particular legal or political entity, is usually interpreted to represent a collection of anonymous, incomprehensible forces. These are hidden from the individual but control the lives of the people, who are innocent victims of systems beyond their control. Critics who support this absurdist interpretation cite instances where Kafka describes himself in conflict with an absurd universe, such as the following entry from his diary:",
"title": "Critical response"
},
{
"paragraph_id": 64,
"text": "Enclosed in my own four walls, I found myself as an immigrant imprisoned in a foreign country;... I saw my family as strange aliens whose foreign customs, rites, and very language defied comprehension;... though I did not want it, they forced me to participate in their bizarre rituals;... I could not resist.",
"title": "Critical response"
},
{
"paragraph_id": 65,
"text": "However, James Hawes argues many of Kafka's descriptions of the legal proceedings in Der Process—metaphysical, absurd, bewildering and nightmarish as they might appear—are based on accurate and informed descriptions of German and Austrian criminal proceedings of the time, which were inquisitorial rather than adversarial. Although he worked in insurance, as a trained lawyer Kafka was \"keenly aware of the legal debates of his day\". In an early 21st-century publication that uses Kafka's office writings as its point of departure, Pothik Ghosh states that with Kafka, law \"has no meaning outside its fact of being a pure force of domination and determination\".",
"title": "Critical response"
},
{
"paragraph_id": 66,
"text": "The first instance of Kafka being translated into English was in 1925, when William A. Drake published \"A Report for an Academy\" in The New York Herald Tribune. Eugene Jolas translated Kafka's \"The Judgment\" for the modernist journal transition in 1928. In 1930, Edwin and Willa Muir translated the first German edition of Das Schloss. This was published as The Castle by Secker & Warburg in England and Alfred A. Knopf in the United States. A 1941 edition, including a homage by Thomas Mann, spurred a surge in Kafka's popularity in the United States during the late 1940s. The Muirs translated all shorter works that Kafka had seen fit to print; they were published by Schocken Books in 1948 as The Penal Colony: Stories and Short Pieces, including additionally The First Long Train Journey, written by Kafka and Brod, Kafka's \"A Novel about Youth\", a review of Felix Sternheim's Die Geschichte des jungen Oswald, his essay on Kleist's \"Anecdotes\", his review of the literary magazine Hyperion, and an epilogue by Brod.",
"title": "Critical response"
},
{
"paragraph_id": 67,
"text": "Later editions, notably those of 1954 (Dearest Father: Stories and Other Writings), included text, translated by Eithne Wilkins and Ernst Kaiser, that had been deleted by earlier publishers. Known as \"Definitive Editions\", they include translations of The Trial, Definitive, The Castle, Definitive, and other writings. These translations are generally accepted to have a number of biases and are considered to be dated in interpretation. Parables and Paradoxes, published in 1961 by Schocken Books in a bilingual edition edited by Nahum N. Glatzer, presented selected writings drawn from notebooks, diaries, letters, short fictional works, and the novel Der Process.",
"title": "Critical response"
},
{
"paragraph_id": 68,
"text": "New translations were completed and published based on the recompiled German text of Pasley and Schillemeit—The Castle, Critical by Mark Harman (Schocken Books, 1998), The Trial, Critical by Breon Mitchell (Schocken Books, 1998), and The Man Who Disappeared (Amerika) by Michael Hofmann (Penguin Books, 1996) and Amerika: The Missing Person by Mark Harman (Schocken Books, 2008).",
"title": "Critical response"
},
{
"paragraph_id": 69,
"text": "Kafka often made extensive use of a characteristic particular to German, which permits long sentences that sometimes can span an entire page. Kafka's sentences then deliver an unexpected impact just before the full stop—this being the finalizing meaning and focus. This is due to the construction of subordinate clauses in German, which require that the verb be at the end of the sentence. Such constructions are difficult to duplicate in English, so it is up to the translator to provide the reader with the same (or at least equivalent) effect as the original text. German's more flexible word order and syntactical differences provide for multiple ways in which the same German writing can be translated into English. An example is the first sentence of Kafka's The Metamorphosis, which is crucial to the setting and understanding of the entire story: \"Als Gregor Samsa eines Morgens aus unruhigen Träumen erwachte, fand er sich in seinem Bett zu einem ungeheueren Ungeziefer verwandelt.\" (literally: \"As Gregor Samsa one morning from troubled dreams awoke, he found himself in his bed into a monstrous vermin transformed.\")",
"title": "Critical response"
},
{
"paragraph_id": 70,
"text": "The sentence above also exemplifies an instance of another difficult problem facing translators: dealing with the author's intentional use of ambiguous idioms and words that have several meanings, which results in phrasing that is difficult to translate precisely. English translators often render the word Ungeziefer as 'insect'; in Middle High German, however, Ungeziefer literally means 'an animal unclean for sacrifice'; in today's German, it means 'vermin'. It is sometimes used colloquially to mean 'bug'—a very general term, unlike the scientific 'insect'. Kafka had no intention of labeling Gregor, the protagonist of the story, as any specific thing but instead wanted to convey Gregor's disgust at his transformation. Another example of this can be found in the final sentence of \"Das Urteil\" (\"The Judgement\"), with Kafka's use of the German noun Verkehr. Literally, Verkehr means 'intercourse' and, as in English, can have either a sexual or a non-sexual meaning. The word is additionally used to mean 'transport' or 'traffic'; therefore, the sentence can also be translated as: \"At that moment an unending stream of traffic crossed over the bridge.\" The double meaning of Verkehr is given added weight by Kafka's confession to Brod that when he wrote that final line he was thinking of \"a violent ejaculation.\"",
"title": "Critical response"
},
{
"paragraph_id": 71,
"text": "Unlike many famous writers, Kafka is rarely quoted by others. Instead, he is noted more for his visions and perspective. Kafka had a strong influence on Gabriel García Márquez, Milan Kundera and the novel The Palace of Dreams by Ismail Kadare. Shimon Sandbank, a professor, literary critic, and writer, also identifies Kafka as having influenced Jorge Luis Borges, Albert Camus, Eugène Ionesco, J. M. Coetzee and Jean-Paul Sartre. A Financial Times literary critic credits Kafka with influencing José Saramago, and Al Silverman, a writer and editor, states that J. D. Salinger loved to read Kafka's works. The Romanian writer Mircea Cărtărescu said \"Kafka is the author I love the most and who means, for me, the gate to literature\"; he also described Kafka as \"the saint of literature\". Kafka has been cited as an influence on the Swedish writer Stig Dagerman, and the Japanese writer Haruki Murakami, who paid homage to Kafka in his novel Kafka on the Shore with the namesake protagonist.",
"title": "Legacy"
},
{
"paragraph_id": 72,
"text": "In 1999 a committee of 99 authors, scholars, and literary critics ranked Der Process and Das Schloss the second and ninth most significant German-language novels of the 20th century. Harold Bloom said \"when he is most himself, Kafka gives us a continuous inventiveness and originality that rivals Dante and truly challenges Proust and Joyce as that of the dominant Western author of our century\". Sandbank argues that despite Kafka's pervasiveness, his enigmatic style has yet to be emulated. Neil Christian Pages, a professor of German Studies and Comparative Literature at Binghamton University who specialises in Kafka's works, says Kafka's influence transcends literature and literary scholarship; it impacts visual arts, music, and popular culture. Harry Steinhauer, a professor of German and Jewish literature, says that Kafka \"has made a more powerful impact on literate society than any other writer of the twentieth century\". Brod said that the 20th century will one day be known as the \"century of Kafka\".",
"title": "Legacy"
},
{
"paragraph_id": 73,
"text": "Michel-André Bossy writes that Kafka created a rigidly inflexible and sterile bureaucratic universe. Kafka wrote in an aloof manner full of legal and scientific terms. Yet his serious universe also had insightful humour, all highlighting the \"irrationality at the roots of a supposedly rational world\". His characters are trapped, confused, full of guilt, frustrated, and lacking understanding of their surreal world. Much post-Kafka fiction, especially science fiction, follows the themes and precepts of Kafka's universe. This can be seen in the works of authors such as George Orwell and Ray Bradbury.",
"title": "Legacy"
},
{
"paragraph_id": 74,
"text": "The following are examples of works across a range of dramatic, literary, and musical genres that demonstrate the extent of Kafka's cultural influence:",
"title": "Legacy"
},
{
"paragraph_id": 75,
"text": "The term \"Kafkaesque\" is used to describe concepts and situations reminiscent of Kafka's work, particularly Der Process (The Trial) and Die Verwandlung (The Metamorphosis). Examples include instances in which bureaucracies overpower people, often in a surreal, nightmarish milieu that evokes feelings of senselessness, disorientation, and helplessness. Characters in a Kafkaesque setting often lack a clear course of action to escape a labyrinthine situation. Kafkaesque elements often appear in existential works, but the term has transcended the literary realm to apply to real-life occurrences and situations that are incomprehensibly complex, bizarre, or illogical.",
"title": "Legacy"
},
{
"paragraph_id": 76,
"text": "Numerous films and television works have been described as Kafkaesque, and the style is particularly prominent in dystopian science fiction. Works in this genre that have been thus described include Patrick Bokanowski's film The Angel (1982), Terry Gilliam's film Brazil (1985), and Alex Proyas' science fiction film noir, Dark City (1998). Films from other genres which have been similarly described include Roman Polanski's The Tenant (1976) and the Coen brothers' Barton Fink (1991). The television series The Prisoner and The Twilight Zone are also frequently described as Kafkaesque.",
"title": "Legacy"
},
{
"paragraph_id": 77,
"text": "However, with common usage, the term has become so ubiquitous that Kafka scholars note it is often misused. More accurately then, according to author Ben Marcus, paraphrased in \"What it Means to be Kafkaesque\" by Joe Fassler in The Atlantic, \"Kafka's quintessential qualities are affecting use of language, a setting that straddles fantasy and reality, and a sense of striving even in the face of bleakness—hopelessly and full of hope.\"",
"title": "Legacy"
},
{
"paragraph_id": 78,
"text": "3412 Kafka is an asteroid from the inner regions of the asteroid belt, approximately 6 kilometers in diameter. It was discovered on 10 January 1983 by American astronomers Randolph Kirk and Donald Rudy at Palomar Observatory in California, United States, and named after Kafka by them.",
"title": "Legacy"
},
{
"paragraph_id": 79,
"text": "Apache Kafka, an open-source stream processing platform originally released in January 2011, is named after Kafka.",
"title": "Legacy"
},
{
"paragraph_id": 80,
"text": "The Franz Kafka Museum in Prague is dedicated to Kafka and his work. A major component of the museum is an exhibit, The City of K. Franz Kafka and Prague, which was first shown in Barcelona in 1999, moved to the Jewish Museum in New York City, and finally established in Prague in Malá Strana (Lesser Town), along the Moldau, in 2005. The Franz Kafka Museum calls its display of original photos and documents Město K. Franz Kafka a Praha (\"City K. Kafka and Prague\") and aims to immerse the visitor into the world in which Kafka lived and about which he wrote.",
"title": "Legacy"
},
{
"paragraph_id": 81,
"text": "The Franz Kafka Prize, established in 2001, is an annual literary award of the Franz Kafka Society and the City of Prague. It recognizes the merits of literature as \"humanistic character and contribution to cultural, national, language and religious tolerance, its existential, timeless character, its generally human validity, and its ability to hand over a testimony about our times\". The selection committee and recipients come from all over the world, but are limited to living authors who have had at least one work published in Czech. The recipient receives $10,000, a diploma, and a bronze statuette at a presentation in Prague's Old Town Hall, on the Czech State Holiday in late October.",
"title": "Legacy"
},
{
"paragraph_id": 82,
"text": "San Diego State University operates the Kafka Project, which began in 1998 as the official international search for Kafka's last writings.",
"title": "Legacy"
},
{
"paragraph_id": 83,
"text": "Kafka Dome is an off-axis oceanic core complex in the central Atlantic named after Kafka.",
"title": "Legacy"
},
{
"paragraph_id": 84,
"text": "Journals",
"title": "Further reading"
},
{
"paragraph_id": 85,
"text": "",
"title": "External links"
}
]
| Franz Kafka was a German-speaking Bohemian novelist and short-story writer based in Prague, who is widely regarded as one of the major figures of 20th-century literature. His work fuses elements of realism and the fantastic. It typically features isolated protagonists facing bizarre or surrealistic predicaments and incomprehensible socio-bureaucratic powers. It has been interpreted as exploring themes of alienation, existential anxiety, guilt, and absurdity. His best known works include the novella The Metamorphosis and novels The Trial and The Castle. The term Kafkaesque has entered English to describe absurd situations like those depicted in his writing. Kafka was born into a middle-class German-speaking Czech Jewish family in Prague, the capital of the Kingdom of Bohemia, then part of the Austro-Hungarian Empire. He trained as a lawyer, and after completing his legal education was employed full-time by an insurance company, forcing him to relegate writing to his spare time. Over the course of his life, Kafka wrote hundreds of letters to family and close friends, including his father, with whom he had a strained and formal relationship. He became engaged to several women but never married. He died in obscurity in 1924 at the age of 40 from tuberculosis. Kafka was a prolific writer, spending most of his free time writing, often late in the night. He burned an estimated 90 per cent of his total work due to his persistent struggles with self-doubt. Much of the remaining 10 per cent is lost or otherwise unpublished. Few of Kafka's works were published during his lifetime: the story collections Contemplation and A Country Doctor, and individual stories were published in literary magazines but received little public attention. In his will, Kafka instructed his close friend and literary executor Max Brod to destroy his unfinished works, including his novels The Trial, The Castle, and Amerika, but Brod ignored these instructions and had much of his work published. Kafka's writings became famous in German-speaking countries after World War II, influencing their literature, and its influence spread elsewhere in the world in the 1960s. It has also influenced artists, composers, and philosophers. | 2001-10-19T00:19:36Z | 2023-12-22T17:52:14Z | [
"Template:Curlie",
"Template:Snds",
"Template:Cite book",
"Template:ISFDB name",
"Template:Navboxes",
"Template:Webarchive",
"Template:Internet Archive author",
"Template:Librivox author",
"Template:Blockquote",
"Template:Further",
"Template:Verse translation",
"Template:Refbegin",
"Template:Refend",
"Template:Use dmy dates",
"Template:Infobox person",
"Template:Multiple image",
"Template:Wiktionary",
"Template:Sfn",
"Template:Lang",
"Template:Nbsp",
"Template:Nsmdns",
"Template:Cite journal",
"Template:Short description",
"Template:Pp-pc",
"Template:Efn",
"Template:Gutenberg author",
"Template:Britannica",
"Template:Cite magazine",
"Template:DNB portal",
"Template:Clear left",
"Template:Notelist",
"Template:Cite news",
"Template:Authority control",
"Template:Reflist",
"Template:Sister project links",
"Template:Wikisourcelang",
"Template:Cite web",
"Template:IMDb name",
"Template:Featured article",
"Template:Redirect",
"Template:ISBN"
]
| https://en.wikipedia.org/wiki/Franz_Kafka |
10,859 | Fields Medal | The Fields Medal is a prize awarded to two, three, or four mathematicians under 40 years of age at the International Congress of the International Mathematical Union (IMU), a meeting that takes place every four years. The name of the award honours the Canadian mathematician John Charles Fields.
The Fields Medal is regarded as one of the highest honors a mathematician can receive, and has been described as the Nobel Prize of Mathematics, although there are several major differences, including frequency of award, number of awards, age limits, monetary value, and award criteria. According to the annual Academic Excellence Survey by ARWU, the Fields Medal is consistently regarded as the top award in the field of mathematics worldwide, and in another reputation survey conducted by IREG in 2013–14, the Fields Medal came closely after the Abel Prize as the second most prestigious international award in mathematics.
The prize includes a monetary award which, since 2006, has been CA$15,000. Fields was instrumental in establishing the award, designing the medal himself, and funding the monetary component, though he died before it was established and his plan was overseen by John Lighton Synge.
The medal was first awarded in 1936 to Finnish mathematician Lars Ahlfors and American mathematician Jesse Douglas, and it has been awarded every four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions. In 2014, the Iranian mathematician Maryam Mirzakhani became the first female Fields Medalist. In total, 64 people have been awarded the Fields Medal.
The most recent group of Fields Medalists received their awards on 5 July 2022 in an online event which was live-streamed from Helsinki, Finland. It was originally meant to be held in Saint Petersburg, Russia, but was moved following the 2022 Russian invasion of Ukraine.
The Fields Medal has for a long time been regarded as the most prestigious award in the field of mathematics and is often described as the Nobel Prize of Mathematics. Unlike the Nobel Prize, the Fields Medal is only awarded every four years. The Fields Medal also has an age limit: a recipient must be under age 40 on 1 January of the year in which the medal is awarded. The under-40 rule is based on Fields's desire that "while it was in recognition of work already done, it was at the same time intended to be an encouragement for further achievement on the part of the recipients and a stimulus to renewed effort on the part of others." Moreover, an individual can only be awarded one Fields Medal; winners are ineligible to be awarded future medals.
First awarded in 1936, 64 people have won the medal as of 2022. With the exception of two PhD holders in physics (Edward Witten and Martin Hairer), only people with a PhD in mathematics have won the medal.
In certain years, the Fields medalists have been officially cited for particular mathematical achievements, while in other years such specificities have not been given. However, in every year that the medal has been awarded, noted mathematicians have lectured at the International Congress of Mathematicians on each medalist's body of work. In the following table, official citations are quoted when possible (namely for the years 1958, 1998, and every year since 2006). For the other years through 1986, summaries of the ICM lectures, as written by Donald Albers, Gerald L. Alexanderson, and Constance Reid, are quoted. In the remaining years (1990, 1994, and 2002), part of the text of the ICM lecture itself has been quoted.
The medal was first awarded in 1936 to the Finnish mathematician Lars Ahlfors and the American mathematician Jesse Douglas, and it has been awarded every four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions.
In 1954, Jean-Pierre Serre became the youngest winner of the Fields Medal, at 27. He retains this distinction.
In 1966, Alexander Grothendieck boycotted the ICM, held in Moscow, to protest Soviet military actions taking place in Eastern Europe. Léon Motchane, founder and director of the Institut des Hautes Études Scientifiques, attended and accepted Grothendieck's Fields Medal on his behalf.
In 1970, Sergei Novikov, because of restrictions placed on him by the Soviet government, was unable to travel to the congress in Nice to receive his medal.
In 1978, Grigory Margulis, because of restrictions placed on him by the Soviet government, was unable to travel to the congress in Helsinki to receive his medal. The award was accepted on his behalf by Jacques Tits, who said in his address: "I cannot but express my deep disappointment—no doubt shared by many people here—in the absence of Margulis from this ceremony. In view of the symbolic meaning of this city of Helsinki, I had indeed grounds to hope that I would have a chance at last to meet a mathematician whom I know only through his work and for whom I have the greatest respect and admiration."
In 1982, the congress was due to be held in Warsaw but had to be rescheduled to the next year, because of martial law introduced in Poland on 13 December 1981. The awards were announced at the ninth General Assembly of the IMU earlier in the year and awarded at the 1983 Warsaw congress.
In 1990, Edward Witten became the first physicist to win the award.
In 1998, at the ICM, Andrew Wiles was presented by the chair of the Fields Medal Committee, Yuri I. Manin, with the first-ever IMU silver plaque in recognition of his proof of Fermat's Last Theorem. Don Zagier referred to the plaque as a "quantized Fields Medal". Accounts of this award frequently note that at the time of the award Wiles was over the age limit for the Fields Medal. Although Wiles was slightly over the age limit in 1994, he was thought to be a favorite to win the medal; however, a gap (later resolved by Taylor and Wiles) in the proof was found in 1993.
In 2006, Grigori Perelman, who proved the Poincaré conjecture, refused his Fields Medal and did not attend the congress.
In 2014, Maryam Mirzakhani became the first Iranian as well as the first woman to win the Fields Medal, and Artur Avila became the first South American and Manjul Bhargava became the first person of Indian origin to do so.
In 2022, Maryna Viazovska became the first Ukrainian to win the Fields Medal, and June Huh became the first person of Korean origin to do so.
The medal was designed by Canadian sculptor R. Tait McKenzie. It is made of 14 kt gold, has a diameter of 63.5 mm, and weighs 169 g.
Translation of the Latin inscription on the medal's reverse: "Mathematicians gathered from the entire world have awarded [understood but not written: 'this prize'] for outstanding writings."
In the background, there is the representation of Archimedes' tomb, with the carving illustrating his theorem On the Sphere and Cylinder, behind an olive branch. (This is the mathematical result of which Archimedes was reportedly most proud: Given a sphere and a circumscribed cylinder of the same height and diameter, the ratio between their volumes is equal to 2⁄3.)
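The 2⁄3 ratio quoted above follows directly from the standard volume formulas; the short check below is an illustrative addition (not part of the medal's design description), with r denoting the common radius, so that the circumscribed cylinder's height is 2r:

\[
V_{\text{cylinder}} = \pi r^{2}\,(2r) = 2\pi r^{3}, \qquad
V_{\text{sphere}} = \tfrac{4}{3}\pi r^{3}, \qquad
\frac{V_{\text{sphere}}}{V_{\text{cylinder}}} = \frac{\tfrac{4}{3}\pi r^{3}}{2\pi r^{3}} = \frac{2}{3}.
\]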
The rim bears the name of the prizewinner.
The Fields Medal has had two female recipients, Maryam Mirzakhani from Iran in 2014, and Maryna Viazovska from Ukraine in 2022.
The Fields Medal gained some recognition in popular culture due to references in the 1997 film, Good Will Hunting. In the movie, Gerald Lambeau (Stellan Skarsgård) is an MIT professor who won the award prior to the events of the story. Throughout the film, references made to the award are meant to convey its prestige in the field. | [
{
"paragraph_id": 0,
"text": "The Fields Medal is a prize awarded to two, three, or four mathematicians under 40 years of age at the International Congress of the International Mathematical Union (IMU), a meeting that takes place every four years. The name of the award honours the Canadian mathematician John Charles Fields.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Fields Medal is regarded as one of the highest honors a mathematician can receive, and has been described as the Nobel Prize of Mathematics, although there are several major differences, including frequency of award, number of awards, age limits, monetary value, and award criteria. According to the annual Academic Excellence Survey by ARWU, the Fields Medal is consistently regarded as the top award in the field of mathematics worldwide, and in another reputation survey conducted by IREG in 2013–14, the Fields Medal came closely after the Abel Prize as the second most prestigious international award in mathematics.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The prize includes a monetary award which, since 2006, has been CA$15,000. Fields was instrumental in establishing the award, designing the medal himself, and funding the monetary component, though he died before it was established and his plan was overseen by John Lighton Synge.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The medal was first awarded in 1936 to Finnish mathematician Lars Ahlfors and American mathematician Jesse Douglas, and it has been awarded every four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions. In 2014, the Iranian mathematician Maryam Mirzakhani became the first female Fields Medalist. In total, 64 people have been awarded the Fields Medal.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The most recent group of Fields Medalists received their awards on 5 July 2022 in an online event which was live-streamed from Helsinki, Finland. It was originally meant to be held in Saint Petersburg, Russia, but was moved following the 2022 Russian invasion of Ukraine.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The Fields Medal has for a long time been regarded as the most prestigious award in the field of mathematics and is often described as the Nobel Prize of Mathematics. Unlike the Nobel Prize, the Fields Medal is only awarded every four years. The Fields Medal also has an age limit: a recipient must be under age 40 on 1 January of the year in which the medal is awarded. The under-40 rule is based on Fields's desire that \"while it was in recognition of work already done, it was at the same time intended to be an encouragement for further achievement on the part of the recipients and a stimulus to renewed effort on the part of others.\" Moreover, an individual can only be awarded one Fields Medal; winners are ineligible to be awarded future medals.",
"title": "Conditions of the award"
},
{
"paragraph_id": 6,
"text": "First awarded in 1936, 64 people have won the medal as of 2022. With the exception of two PhD holders in physics (Edward Witten and Martin Hairer), only people with a PhD in mathematics have won the medal.",
"title": "Conditions of the award"
},
{
"paragraph_id": 7,
"text": "In certain years, the Fields medalists have been officially cited for particular mathematical achievements, while in other years such specificities have not been given. However, in every year that the medal has been awarded, noted mathematicians have lectured at the International Congress of Mathematicians on each medalist's body of work. In the following table, official citations are quoted when possible (namely for the years 1958, 1998, and every year since 2006). For the other years through 1986, summaries of the ICM lectures, as written by Donald Albers, Gerald L. Alexanderson, and Constance Reid, are quoted. In the remaining years (1990, 1994, and 2002), part of the text of the ICM lecture itself has been quoted.",
"title": "List of Fields medalists"
},
{
"paragraph_id": 8,
"text": "The medal was first awarded in 1936 to the Finnish mathematician Lars Ahlfors and the American mathematician Jesse Douglas, and it has been awarded every four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions.",
"title": "Landmarks"
},
{
"paragraph_id": 9,
"text": "In 1954, Jean-Pierre Serre became the youngest winner of the Fields Medal, at 27. He retains this distinction.",
"title": "Landmarks"
},
{
"paragraph_id": 10,
"text": "In 1966, Alexander Grothendieck boycotted the ICM, held in Moscow, to protest Soviet military actions taking place in Eastern Europe. Léon Motchane, founder and director of the Institut des Hautes Études Scientifiques, attended and accepted Grothendieck's Fields Medal on his behalf.",
"title": "Landmarks"
},
{
"paragraph_id": 11,
"text": "In 1970, Sergei Novikov, because of restrictions placed on him by the Soviet government, was unable to travel to the congress in Nice to receive his medal.",
"title": "Landmarks"
},
{
"paragraph_id": 12,
"text": "In 1978, Grigory Margulis, because of restrictions placed on him by the Soviet government, was unable to travel to the congress in Helsinki to receive his medal. The award was accepted on his behalf by Jacques Tits, who said in his address: \"I cannot but express my deep disappointment—no doubt shared by many people here—in the absence of Margulis from this ceremony. In view of the symbolic meaning of this city of Helsinki, I had indeed grounds to hope that I would have a chance at last to meet a mathematician whom I know only through his work and for whom I have the greatest respect and admiration.\"",
"title": "Landmarks"
},
{
"paragraph_id": 13,
"text": "In 1982, the congress was due to be held in Warsaw but had to be rescheduled to the next year, because of martial law introduced in Poland on 13 December 1981. The awards were announced at the ninth General Assembly of the IMU earlier in the year and awarded at the 1983 Warsaw congress.",
"title": "Landmarks"
},
{
"paragraph_id": 14,
"text": "In 1990, Edward Witten became the first physicist to win the award.",
"title": "Landmarks"
},
{
"paragraph_id": 15,
"text": "In 1998, at the ICM, Andrew Wiles was presented by the chair of the Fields Medal Committee, Yuri I. Manin, with the first-ever IMU silver plaque in recognition of his proof of Fermat's Last Theorem. Don Zagier referred to the plaque as a \"quantized Fields Medal\". Accounts of this award frequently note that at the time of the award Wiles was over the age limit for the Fields Medal. Although Wiles was slightly over the age limit in 1994, he was thought to be a favorite to win the medal; however, a gap (later resolved by Taylor and Wiles) in the proof was found in 1993.",
"title": "Landmarks"
},
{
"paragraph_id": 16,
"text": "In 2006, Grigori Perelman, who proved the Poincaré conjecture, refused his Fields Medal and did not attend the congress.",
"title": "Landmarks"
},
{
"paragraph_id": 17,
"text": "In 2014, Maryam Mirzakhani became the first Iranian as well as the first woman to win the Fields Medal, and Artur Avila became the first South American and Manjul Bhargava became the first person of Indian origin to do so.",
"title": "Landmarks"
},
{
"paragraph_id": 18,
"text": "In 2022, Maryna Viazovska became the first Ukrainian to win the Fields Medal, and June Huh became the first person of Korean origin to do so.",
"title": "Landmarks"
},
{
"paragraph_id": 19,
"text": "The medal was designed by Canadian sculptor R. Tait McKenzie. It is made of 14KT gold, has a diameter of 63.5mm, and weighs 169g.",
"title": "Medal"
},
{
"paragraph_id": 20,
"text": "Translation: \"Mathematicians gathered from the entire world have awarded [understood but not written: 'this prize'] for outstanding writings.\"",
"title": "Medal"
},
{
"paragraph_id": 21,
"text": "In the background, there is the representation of Archimedes' tomb, with the carving illustrating his theorem On the Sphere and Cylinder, behind an olive branch. (This is the mathematical result of which Archimedes was reportedly most proud: Given a sphere and a circumscribed cylinder of the same height and diameter, the ratio between their volumes is equal to 2⁄3.)",
"title": "Medal"
},
{
"paragraph_id": 22,
"text": "The rim bears the name of the prizewinner.",
"title": "Medal"
},
{
"paragraph_id": 23,
"text": "The Fields Medal has had two female recipients, Maryam Mirzakhani from Iran in 2014, and Maryna Viazovska from Ukraine in 2022.",
"title": "Female recipients"
},
{
"paragraph_id": 24,
"text": "The Fields Medal gained some recognition in popular culture due to references in the 1997 film, Good Will Hunting. In the movie, Gerald Lambeau (Stellan Skarsgård) is an MIT professor who won the award prior to the events of the story. Throughout the film, references made to the award are meant to convey its prestige in the field.",
"title": "In popular culture"
}
]
| The Fields Medal is a prize awarded to two, three, or four mathematicians under 40 years of age at the International Congress of the International Mathematical Union (IMU), a meeting that takes place every four years. The name of the award honours the Canadian mathematician John Charles Fields. The Fields Medal is regarded as one of the highest honors a mathematician can receive, and has been described as the Nobel Prize of Mathematics, although there are several major differences, including frequency of award, number of awards, age limits, monetary value, and award criteria. According to the annual Academic Excellence Survey by ARWU, the Fields Medal is consistently regarded as the top award in the field of mathematics worldwide, and in another reputation survey conducted by IREG in 2013–14, the Fields Medal came closely after the Abel Prize as the second most prestigious international award in mathematics. The prize includes a monetary award which, since 2006, has been CA$15,000. Fields was instrumental in establishing the award, designing the medal himself, and funding the monetary component, though he died before it was established and his plan was overseen by John Lighton Synge. The medal was first awarded in 1936 to Finnish mathematician Lars Ahlfors and American mathematician Jesse Douglas, and it has been awarded every four years since 1950. Its purpose is to give recognition and support to younger mathematical researchers who have made major contributions. In 2014, the Iranian mathematician Maryam Mirzakhani became the first female Fields Medalist. In total, 64 people have been awarded the Fields Medal. The most recent group of Fields Medalists received their awards on 5 July 2022 in an online event which was live-streamed from Helsinki, Finland. It was originally meant to be held in Saint Petersburg, Russia, but was moved following the 2022 Russian invasion of Ukraine. | 2001-08-08T20:08:01Z | 2023-12-08T01:41:51Z | [
"Template:Div col",
"Template:Webarchive",
"Template:Commons category",
"Template:Lang",
"Template:Div col end",
"Template:Reflist",
"Template:Cite book",
"Template:Harvnb",
"Template:IMUPrizes",
"Template:International mathematical activities",
"Template:CA$",
"Template:Cite news",
"Template:Cite encyclopedia",
"Template:Official website",
"Template:Authority control",
"Template:Frac",
"Template:Sortname",
"Template:Use American English",
"Template:Refend",
"Template:Cite magazine",
"Template:Portal",
"Template:Cite web",
"Template:Fields medalists",
"Template:Notelist",
"Template:Citation needed",
"Template:Cite journal",
"Template:Distinguish",
"Template:Use dmy dates",
"Template:Infobox award",
"Template:Dubious",
"Template:Efn",
"Template:Refbegin",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Fields_Medal |
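The 2⁄3 ratio quoted in the medal description above is Archimedes' sphere-and-cylinder theorem, and it can be checked directly. As a minimal worked example (assuming a sphere of radius r, so that the circumscribed cylinder of equal height and diameter has radius r and height 2r):

\[
\frac{V_{\text{sphere}}}{V_{\text{cylinder}}} \;=\; \frac{\tfrac{4}{3}\pi r^{3}}{\pi r^{2}\cdot 2r} \;=\; \frac{4/3}{2} \;=\; \frac{2}{3}.
\]

The cylinder thus holds exactly 3⁄2 the volume of the inscribed sphere, which is the 2 : 3 relationship illustrated in the tomb carving described in the medal entry.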
10,861 | The Trial | The Trial (German: Der Process, previously Der Proceß, Der Prozeß and Der Prozess) is a novel written by Franz Kafka in 1914 and 1915 and published posthumously on 26 April 1925. One of his best-known works, it tells the story of Josef K., a man arrested and prosecuted by a remote, inaccessible authority, with the nature of his crime revealed neither to him nor to the reader. Heavily influenced by Dostoevsky's Crime and Punishment and The Brothers Karamazov, Kafka even went so far as to call Dostoevsky a blood relative. Like Kafka's two other novels, The Castle and Amerika, The Trial was never completed, although it does include a chapter which appears to bring the story to an intentionally abrupt ending.
After Kafka's death in 1924 his friend and literary executor Max Brod edited the text for publication by Verlag Die Schmiede. The original manuscript is held at the Museum of Modern Literature, Marbach am Neckar, Germany. The first English-language translation, by Willa and Edwin Muir, was published in 1937. In 1999, the book was listed in Le Monde's 100 Books of the Century and as No. 2 of the Best German Novels of the Twentieth Century.
Kafka drafted the opening sentence of The Trial in August 1914, and continued work on the novel throughout 1915. This was an unusually productive period for Kafka, despite the outbreak of World War I, which significantly increased the pressures of his day job as an insurance agent.
Having begun by writing the opening and concluding sections of the novel, Kafka worked on the intervening scenes in a haphazard manner, using several different notebooks simultaneously. His friend Max Brod, knowing Kafka's habit of destroying his own work, eventually took the manuscript into safekeeping. This manuscript consisted of 161 loose pages torn from notebooks, which Kafka had bundled together into chapters. The order of the chapters was not made clear to Brod, nor was he told which parts were complete and which unfinished. Following Kafka's death in 1924, Brod edited the work and assembled it into a novel to the best of his ability. Further editorial work has been done by later scholars, but Kafka's final vision for The Trial remains unknown.
On the morning of his thirtieth birthday, Josef K., the chief clerk of a bank, is unexpectedly arrested by two unidentified agents from an unspecified agency for an unspecified crime. Josef is not imprisoned, however, but left "free" and told to await instructions from the Committee of Affairs. Josef's landlady, Frau Grubach, tries to console Josef about the trial, but insinuates that the procedure may be related to an immoral relationship with his neighbor Fräulein Bürstner. Josef visits Bürstner to vent his worries, and then kisses her.
A few days later, Josef finds that Fräulein Montag, a lodger from another room, has moved in with Fräulein Bürstner. He suspects that this maneuver is meant to distance him from Bürstner. Josef is ordered to appear at the court's address the coming Sunday, without being told the exact time or room. After a period of exploration, Josef finds the court in the attic. Josef is severely reproached for his tardiness, and he arouses the assembly's hostility after a passionate plea about the absurdity of the trial and the emptiness of the accusation.
Josef later tries to confront the presiding judge over his case, but only finds an attendant's wife. The woman gives him information about the process and attempts to seduce him before a law student bursts into the room and takes the woman away, claiming her to be his mistress. The woman's husband then takes Josef on a tour of the court offices, which ends after Josef becomes extremely weak in the presence of other court officials and accused.
One evening, in a storage room at his own bank, Josef discovers the two agents who arrested him being whipped by a flogger for asking Josef for bribes and as a result of complaints Josef made at court. Josef tries to argue with the flogger, saying that the men need not be whipped, but the flogger cannot be swayed. The next day he returns to the storage room and is shocked to find everything as he had found it the day before, including the whipper and the two agents.
Josef is visited by his uncle, a traveling countryman. Worried by the rumors about his nephew, the uncle introduces Josef to Herr Huld, a sickly and bedridden lawyer tended to by Leni, a young nurse who shows an immediate attraction to Josef. During the conversation, Leni calls Josef away and takes him to the next room for a sexual encounter. Afterward, Josef meets his angry uncle outside, who claims that Josef's lack of respect for the process has hurt his case.
During subsequent visits to Huld, Josef realizes that the lawyer is a capricious character who will not be of much help. At the bank, one of Josef's clients recommends that he seek the advice of Titorelli, the court's official painter. Titorelli has no real influence within the court, but his deep experience of the process is painfully illuminating to Josef, and he can only suggest complex and unpleasant hypothetical options, as no definitive acquittal has ever been achieved. Josef finally decides to dismiss Huld and take control of matters himself. Upon arriving at Huld's office, Josef meets Rudi Block, a downtrodden fellow client who offers him some insight from a client's perspective. Block's case has continued for five years; he has gone from being a successful businessman to near bankruptcy and is virtually enslaved by his dependence on the lawyer and on Leni, with whom he appears to be sexually involved. The lawyer mocks Block in front of Josef for his dog-like subservience. This experience further poisons Josef's opinion of his lawyer.
Josef is put in charge of accompanying an important Italian client to the city's cathedral. While inside the cathedral, a priest calls Josef by name and tells him a fable (which was published earlier as "Before the Law") that is meant to explain his situation. The priest tells Josef that the parable is an ancient text of the court, and many generations of court officials have interpreted it differently. On the eve of Josef's thirty-first birthday, two men arrive at his apartment to execute him. They lead him to a small quarry outside the city, and kill him with a butcher's knife. Josef summarizes his situation with his last words: "Like a dog!"
In addition, a graphic novel adaptation by Chantal Montellier (illustrations) and David Zane Mairowitz (adaptation) appeared on April 15, 2008. | [
{
"paragraph_id": 0,
"text": "The Trial (German: Der Process, previously Der Proceß, Der Prozeß and Der Prozess) is a novel written by Franz Kafka in 1914 and 1915 and published posthumously on 26 April 1925. One of his best-known works, it tells the story of Josef K., a man arrested and prosecuted by a remote, inaccessible authority, with the nature of his crime revealed neither to him nor to the reader. Heavily influenced by Dostoevsky's Crime and Punishment and The Brothers Karamazov, Kafka even went so far as to call Dostoevsky a blood relative. Like Kafka's two other novels, The Castle and Amerika, The Trial was never completed, although it does include a chapter which appears to bring the story to an intentionally abrupt ending.",
"title": ""
},
{
"paragraph_id": 1,
"text": "After Kafka's death in 1924 his friend and literary executor Max Brod edited the text for publication by Verlag Die Schmiede. The original manuscript is held at the Museum of Modern Literature, Marbach am Neckar, Germany. The first English-language translation, by Willa and Edwin Muir, was published in 1937. In 1999, the book was listed in Le Monde's 100 Books of the Century and as No. 2 of the Best German Novels of the Twentieth Century.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Kafka drafted the opening sentence of The Trial in August 1914, and continued work on the novel throughout 1915. This was an unusually productive period for Kafka, despite the outbreak of World War I, which significantly increased the pressures of his day job as an insurance agent.",
"title": "Development"
},
{
"paragraph_id": 3,
"text": "Having begun by writing the opening and concluding sections of the novel, Kafka worked on the intervening scenes in a haphazard manner, using several different notebooks simultaneously. His friend Max Brod, knowing Kafka's habit of destroying his own work, eventually took the manuscript into safekeeping. This manuscript consisted of 161 loose pages torn from notebooks, which Kafka had bundled together into chapters. The order of the chapters was not made clear to Brod, nor was he told which parts were complete and which unfinished. Following Kafka's death in 1924, Brod edited the work and assembled it into a novel to the best of his ability. Further editorial work has been done by later scholars, but Kafka's final vision for The Trial remains unknown.",
"title": "Development"
},
{
"paragraph_id": 4,
"text": "On the morning of his thirtieth birthday, Josef K., the chief clerk of a bank, is unexpectedly arrested by two unidentified agents from an unspecified agency for an unspecified crime. Josef is not imprisoned, however, but left \"free\" and told to await instructions from the Committee of Affairs. Josef's landlady, Frau Grubach, tries to console Josef about the trial, but insinuates that the procedure may be related to an immoral relationship with his neighbor Fräulein Bürstner. Josef visits Bürstner to vent his worries, and then kisses her.",
"title": "Plot summary"
},
{
"paragraph_id": 5,
"text": "A few days later, Josef finds that Fräulein Montag, a lodger from another room, has moved in with Fräulein Bürstner. He suspects that this maneuver is meant to distance him from Bürstner. Josef is ordered to appear at the court's address the coming Sunday, without being told the exact time or room. After a period of exploration, Josef finds the court in the attic. Josef is severely reproached for his tardiness, and he arouses the assembly's hostility after a passionate plea about the absurdity of the trial and the emptiness of the accusation.",
"title": "Plot summary"
},
{
"paragraph_id": 6,
"text": "Josef later tries to confront the presiding judge over his case, but only finds an attendant's wife. The woman gives him information about the process and attempts to seduce him before a law student bursts into the room and takes the woman away, claiming her to be his mistress. The woman's husband then takes Josef on a tour of the court offices, which ends after Josef becomes extremely weak in the presence of other court officials and accused.",
"title": "Plot summary"
},
{
"paragraph_id": 7,
"text": "One evening, in a storage room at his own bank, Josef discovers the two agents who arrested him being whipped by a flogger for asking Josef for bribes and as a result of complaints Josef made at court. Josef tries to argue with the flogger, saying that the men need not be whipped, but the flogger cannot be swayed. The next day he returns to the storage room and is shocked to find everything as he had found it the day before, including the whipper and the two agents.",
"title": "Plot summary"
},
{
"paragraph_id": 8,
"text": "Josef is visited by his uncle, a traveling countryman. Worried by the rumors about his nephew, the uncle introduces Josef to Herr Huld, a sickly and bedridden lawyer tended to by Leni, a young nurse who shows an immediate attraction to Josef. During the conversation, Leni calls Josef away and takes him to the next room for a sexual encounter. Afterward, Josef meets his angry uncle outside, who claims that Josef's lack of respect for the process has hurt his case.",
"title": "Plot summary"
},
{
"paragraph_id": 9,
"text": "During subsequent visits to Huld, Josef realizes that he is a capricious character who will not be of much help. At the bank, one of Josef's clients recommends him to seek the advice of Titorelli, the court's official painter. Titorelli has no real influence within the court, but his deep experience of the process is painfully illuminating to Josef, and he can only suggest complex and unpleasant hypothetical options, as no definitive acquittal has ever been managed. Josef finally decides to dismiss Huld and take control of matters himself. Upon arriving at Huld's office, Josef meets a downtrodden individual, Rudi Block, a client who offers Josef some insight from a client's perspective. Block's case has continued for five years and he has gone from being a successful businessman to being almost bankrupt and is virtually enslaved by his dependence on the lawyer and Leni, with whom he appears to be sexually involved. The lawyer mocks Block in front of Josef for his dog-like subservience. This experience further poisons Josef's opinion of his lawyer.",
"title": "Plot summary"
},
{
"paragraph_id": 10,
"text": "Josef is put in charge of accompanying an important Italian client to the city's cathedral. While inside the cathedral, a priest calls Josef by name and tells him a fable (which was published earlier as \"Before the Law\") that is meant to explain his situation. The priest tells Josef that the parable is an ancient text of the court, and many generations of court officials have interpreted it differently. On the eve of Josef's thirty-first birthday, two men arrive at his apartment to execute him. They lead him to a small quarry outside the city, and kill him with a butcher's knife. Josef summarizes his situation with his last words: \"Like a dog!\"",
"title": "Plot summary"
},
{
"paragraph_id": 11,
"text": "",
"title": "Translations into English"
},
{
"paragraph_id": 12,
"text": "In addition, a graphic novel adaptation by Chantal Montellier (illustrations) and David Zane Mairowitz (adaptation) appeared on April 15, 2008.",
"title": "Translations into English"
}
]
| The Trial is a novel written by Franz Kafka in 1914 and 1915 and published posthumously on 26 April 1925. One of his best-known works, it tells the story of Josef K., a man arrested and prosecuted by a remote, inaccessible authority, with the nature of his crime revealed neither to him nor to the reader. Heavily influenced by Dostoevsky's Crime and Punishment and The Brothers Karamazov, Kafka even went so far as to call Dostoevsky a blood relative. Like Kafka's two other novels, The Castle and Amerika, The Trial was never completed, although it does include a chapter which appears to bring the story to an intentionally abrupt ending. After Kafka's death in 1924 his friend and literary executor Max Brod edited the text for publication by Verlag Die Schmiede. The original manuscript is held at the Museum of Modern Literature, Marbach am Neckar, Germany. The first English-language translation, by Willa and Edwin Muir, was published in 1937. In 1999, the book was listed in Le Monde's 100 Books of the Century and as No. 2 of the Best German Novels of the Twentieth Century. | 2001-03-08T19:54:10Z | 2023-11-06T00:45:26Z | [
"Template:Reflist",
"Template:Commons category",
"Template:Short description",
"Template:'s",
"Template:Interlanguage link",
"Template:Cite journal",
"Template:Cite web",
"Template:Authority control",
"Template:Lang-de",
"Template:Lang",
"Template:ISBN",
"Template:Wikisourcelang",
"Template:IMDb title",
"Template:For",
"Template:Use dmy dates",
"Template:Infobox book",
"Template:The Trial",
"Template:Franz Kafka",
"Template:Cite AV media",
"Template:Gutenberg",
"Template:Portal bar",
"Template:About",
"Template:Anchor",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/The_Trial |
10,862 | The Metamorphosis | Metamorphosis (German: Die Verwandlung) is a novella written by Franz Kafka and first published in 1915. One of Kafka's best-known works, Metamorphosis tells the story of salesman Gregor Samsa, who wakes one morning to find himself inexplicably transformed into a huge insect (German: ungeheueres Ungeziefer, lit. "monstrous vermin") and subsequently struggles to adjust to this new condition. The novella has been widely discussed among literary critics, who have offered varied interpretations. In popular culture and adaptations of the novella, the insect is commonly depicted as a cockroach.
With a length of about 70 printed pages over three chapters, it is the longest of the stories Kafka considered complete and published during his lifetime. The text was first published in 1915 in the October issue of the journal Die weißen Blätter under the editorship of René Schickele. The first edition in book form appeared in December 1915 in the series Der jüngste Tag, edited by Kurt Wolff.
Gregor Samsa wakes up one morning to find himself transformed into a "monstrous vermin". He initially considers the transformation to be temporary and slowly ponders the consequences of this metamorphosis. Stuck on his back and unable to get up and leave the bed, Gregor reflects on his job as a traveling salesman and cloth merchant, which he characterizes as being full of "temporary and constantly changing human relationships, which never come from the heart". He sees his employer as a despot and would quickly quit his job if he were not his family's sole breadwinner and working off his bankrupt father's debts. While trying to move, Gregor finds that his office manager, the chief clerk, has shown up to check on him, indignant about Gregor's unexcused absence. Gregor attempts to communicate with both the manager and his family, but all they can hear from behind the door is incomprehensible vocalizations. Gregor laboriously drags himself across the floor and opens the door. The clerk, upon seeing the transformed Gregor, flees the apartment. Gregor's family is horrified, and his father drives him back into his room, injuring his side by shoving him when he gets stuck in the doorway.
With Gregor's unexpected transformation, his family is deprived of financial stability. They keep Gregor locked in his room, and he begins to accept his new identity and adapt to his new body. His sister Grete is the only one willing to bring him food, which they find Gregor only likes if it is rotten. He spends much of his time crawling around on the floor, walls, and ceiling and, upon discovering Gregor's new pastime, Grete decides to remove his furniture to give him more space. She and her mother begin to empty the room of everything, except the sofa under which Gregor hides whenever anyone comes in, but he finds their actions deeply distressing, fearing that he might forget his past as a human, and desperately tries to save a particularly loved portrait on the wall of a woman clad in fur. His mother loses consciousness at the sight of him clinging to the image to protect it. When Grete rushes out of the room to get some aromatic spirits, Gregor follows her and is slightly hurt when she drops a medicine bottle and it breaks. Their father returns home and angrily hurls apples at Gregor, one of which becomes lodged in a sensitive spot in his back and severely wounds him.
Gregor suffers from his injuries for the rest of his life and takes very little food. His father, mother, and sister all get jobs and increasingly begin to neglect him, and his room begins to be used for storage. For a time, his family leaves Gregor's door open in the evenings so he can listen to them talk to each other, but this happens less frequently once they rent a room in the apartment to three male tenants, since they are not told about Gregor. One day the charwoman, who briefly looks in on Gregor each day when she arrives and before she leaves, neglects to close his door fully. Attracted by Grete's violin-playing in the living room, Gregor crawls out and is spotted by the unsuspecting tenants, who complain about the apartment's unhygienic conditions and say they are leaving, will not pay anything for the time they have already stayed, and may take legal action. Grete, who has tired of taking care of Gregor and realizes the burden his existence puts on each member of the family, tells her parents they must get rid of "it" or they will all be ruined. Gregor, understanding that he is no longer wanted, laboriously makes his way back to his room and dies of starvation before sunrise. His body is discovered by the charwoman, who alerts his family and then disposes of the corpse. The relieved and optimistic father, mother, and sister all take the day off work. They travel by tram into the countryside and make plans to move to a smaller apartment to save money. During the short trip, Mr. and Mrs. Samsa realize that, despite the hardships that have brought some paleness to her face, Grete has grown up into a pretty young lady with a good figure and they think about finding her a husband.
Gregor is the main character of the story. He works as a traveling salesman in order to provide money for his sister and parents. He wakes up one morning finding himself transformed into an insect. After the metamorphosis, Gregor becomes unable to work and is confined to his room for most of the remainder of the story. This prompts his family to begin working once again. Gregor is depicted as isolated from society and often both misunderstands the true intentions of others and is misunderstood.
The name "Gregor Samsa" appears to derive partly from literary works Kafka had read. A character in The Story of Young Renate Fuchs, by German novelist Jakob Wassermann (1873–1934), is named Gregor Samassa. The Viennese author Leopold von Sacher-Masoch, whose sexual imagination gave rise to the idea of masochism, is also an influence. Sacher-Masoch wrote Venus in Furs (1870), a novel whose hero assumes the name Gregor at one point. A "Venus in furs" recurs in The Metamorphosis in the picture that Gregor Samsa has hung on his bedroom wall.
Grete is Gregor's younger sister, and she becomes his caretaker after his metamorphosis. They initially have a close relationship, but this quickly fades. At first, she volunteers to feed him and clean his room, but she grows increasingly impatient with the burden and begins to leave his room in disarray out of spite. Her initial decision to take care of Gregor may have come from a desire to contribute and be useful to the family, since she becomes angry and upset when the mother cleans his room. It is made clear that Grete is disgusted by Gregor, as she always opens the window upon entering his room to keep from feeling nauseous and leaves without doing anything if Gregor is in plain sight. She plays the violin and dreams of going to the conservatory to study, a dream Gregor had intended to make happen; he had planned on making the announcement on Christmas Day. To help provide an income for the family after Gregor's transformation, she starts working as a salesgirl. Grete is also the first to suggest getting rid of Gregor, which causes Gregor to plan his own death. At the end of the story, Grete's parents realize that she has become beautiful and full-figured and decide to consider finding her a husband.
Mr Samsa is Gregor's father. After the metamorphosis, he is forced to return to work in order to support the family financially. His attitude towards his son is harsh. He regards the transformed Gregor with disgust and possibly even fear and attacks Gregor on several occasions. Even when Gregor was human, Mr Samsa regarded him mostly as a source of income for the family. Gregor's relationship with his father is modelled after Kafka's own relationship with his father. The theme of alienation becomes quite evident here.
Mrs Samsa is Gregor's mother. She is portrayed as a submissive wife. She suffers from asthma, which is a constant source of concern for Gregor. She is initially shocked at Gregor's transformation, but she still wants to enter his room. However, it proves too much for her and gives rise to a conflict between her maternal impulse and sympathy and her fear and revulsion at Gregor's new form.
The charwoman is an old widowed lady who is employed by the Samsa family after their previous maid begs to be dismissed on account of the fright she experiences owing to Gregor's new form. She is paid to take care of their household duties. Apart from Grete and her father, the charwoman is the only person who is in close contact with Gregor, and she is unafraid in her dealings with Gregor. She does not question his changed state; she seemingly accepts it as a normal part of his existence. She is the one who notices Gregor has died and disposes of his body.
Like most of Kafka's works, Metamorphosis tends to be given a religious (Max Brod) or psychological interpretation by most of its interpreters. It has been particularly common to read the story as an expression of Kafka's father complex, as was first done by Charles Neider in his The Frozen Sea: A Study of Franz Kafka (1948). Besides the psychological approach, interpretations focusing on sociological aspects, which see the Samsa family as a portrayal of general social circumstances, have also gained a large following.
Vladimir Nabokov rejected such interpretations, noting that they do not live up to Kafka's art. He instead chose an interpretation guided by the artistic detail, but categorically excluded any attempts at deciphering a symbolic or allegoric level of meaning. Arguing against the popular father-complex theory, he observed that it is the sister more than the father who should be considered the cruelest person in the story, since she is the one backstabbing Gregor. In Nabokov's view, the central narrative theme is the artist's struggle for existence in a society replete with philistines that destroys him step by step. Commenting on Kafka's style he writes "The transparency of his style underlines the dark richness of his fantasy world. Contrast and uniformity, style and the depicted, portrayal and fable are seamlessly intertwined".
In 1989, Nina Pelikan Straus wrote a feminist interpretation of Metamorphosis, noting that the story is not only about the metamorphosis of Gregor, but also about the metamorphosis of his family, and in particular, his younger sister Grete. Straus suggested that the social and psychoanalytic resonances of the text depend on Grete's role as woman, daughter, and sister, and that prior interpretations failed to recognize Grete's centrality to the story.
In 1999, Gerhard Rieck pointed out that Gregor and his sister, Grete, form a pair, which is typical of many of Kafka's texts: it is made up of one passive, rather austere, person and another active, more libidinal, person. The appearance of figures with such almost irreconcilable personalities who form couples in Kafka's works has been evident since he wrote his short story "Description of a Struggle" (e.g. the narrator/young man and his "acquaintance"). They also appear in "The Judgment" (Georg and his friend in Russia), in all three of his novels (e.g. Robinson and Delamarche in Amerika) as well as in his short stories "A Country Doctor" (the country doctor and the groom) and "A Hunger Artist" (the hunger artist and the panther). Rieck views these pairs as parts of one single person (hence the similarity between the names Gregor and Grete) and in the final analysis as the two determining components of the author's personality. Not only in Kafka's life but also in his oeuvre does Rieck see the description of a fight between these two parts.
Reiner Stach argued in 2004 that no elucidating comments were needed to illustrate the story and that it was convincing by itself, self-contained, even absolute. He believes that there is no doubt the story would have been admitted to the canon of world literature even if we had known nothing about its author.
According to Peter-André Alt (2005), the figure of the beetle becomes a drastic expression of Gregor Samsa's deprived existence. Reduced to carrying out his professional responsibilities, anxious to guarantee his advancement and vexed with the fear of making commercial mistakes, he is the creature of a functionalistic professional life.
In 2007, Ralf Sudau took the view that particular attention should be paid to the motifs of self-abnegation and disregard for reality. Gregor's earlier behavior was characterized by self-renunciation and his pride in being able to provide a secure and leisured existence for his family. When he finds himself in a situation where he himself is in need of attention and assistance and in danger of becoming a parasite, he doesn't want to admit this new role to himself and be disappointed by the treatment he receives from his family, which is becoming more and more careless and even hostile over time. According to Sudau, Gregor is self-denyingly hiding his nauseating appearance under the sofa and gradually famishing, thus pretty much complying with the more or less blatant wish of his family. His gradual emaciation and "self-reduction" shows signs of a fatal hunger strike (which on the part of Gregor is unconscious and unsuccessful, on the part of his family not understood or ignored). Sudau also lists the names of selected interpreters of The Metamorphosis (e.g. Beicken, Sokel, Sautermeister and Schwarz). According to them, the narrative is a metaphor for the suffering resulting from leprosy, an escape into the disease or a symptom onset, an image of an existence which is defaced by the career, or a revealing staging which cracks the veneer and superficiality of everyday circumstances and exposes its cruel essence. He further notes that Kafka's representational style is on one hand characterized by an idiosyncratic interpenetration of realism and fantasy, a worldly mind, rationality, and clarity of observation, and on the other hand by folly, outlandishness, and fallacy. He also points to the grotesque and tragicomical, silent film-like elements.
Fernando Bermejo-Rubio (2012) argued that the story is often viewed unjustly as inconclusive. He derives his interpretative approach from the fact that the descriptions of Gregor and his family environment in The Metamorphosis contradict each other. Diametrically opposed versions exist of Gregor's back, his voice, of whether he is ill or already undergoing the metamorphosis, whether he is dreaming or not, which treatment he deserves, of his moral point of view (false accusations made by Grete), and whether his family is blameless or not. Bermejo-Rubio emphasizes that Kafka ordered in 1915 that there should be no illustration of Gregor. He argues that it is exactly this absence of a visual narrator that is essential for Kafka's project, for he who depicts Gregor would stylize himself as an omniscient narrator. Another reason why Kafka opposed such an illustration is that the reader should not be biased in any way before reading. That the descriptions are not compatible with each other is indicative of the fact that the opening statement is not to be trusted. If the reader isn't hoodwinked by the first sentence and still thinks of Gregor as a human being, he will view the story as conclusive and realize that Gregor is a victim of his own degeneration.
Volker Drüke (2013) believes that the crucial metamorphosis in the story is that of Grete. She is the character the title is directed at. Gregor's metamorphosis is followed by him languishing and ultimately dying. Grete, by contrast, has matured as a result of the new family circumstances and assumed responsibility. In the end – after the brother's death – the parents also notice that their daughter, "who was getting more animated all the time, [...] had recently blossomed into a pretty and shapely girl", and want to look for a partner for her. From this standpoint Grete's transition, her metamorphosis from a girl into a woman, is the subtextual theme of the story.
Translators of the novel into English have given widely different texts, including of the opening sentence, which in the original is "Als Gregor Samsa eines Morgens aus unruhigen Träumen erwachte, fand er sich in seinem Bett zu einem ungeheuren Ungeziefer verwandelt". In their 1933 translation of the story, Willa Muir and Edwin Muir rendered it as "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect".
The phrase "ungeheuren Ungeziefer" in particular has been rendered in many different ways by translators. These include:
In Middle High German, Ungeziefer literally means "unclean animal not suitable for sacrifice" and is sometimes used colloquially to mean "bug", with the gist of "dirty, nasty bug". It can also be translated as "vermin". English translators of The Metamorphosis have often rendered it as "insect".
What kind of bug or vermin Kafka envisaged remains a debated mystery. Kafka had no intention of labeling Gregor as any specific thing, but instead was trying to convey Gregor's disgust at his transformation. In his letter to his publisher of 25 October 1915, in which he discusses his concern about the cover illustration for the first edition, Kafka does use the term Insekt, though, saying "The insect itself is not to be drawn. It is not even to be seen from a distance."
Vladimir Nabokov, who was a lepidopterist as well as a writer and literary critic, concluded from details in the text that Gregor was not a cockroach, but a beetle with wings under his shell, and capable of flight. Nabokov left a sketch annotated "just over three feet long" on the opening page of his English teaching copy. In his accompanying lecture notes, he discusses the type of insect Gregor has been transformed into. Noting that the cleaning lady addressed Gregor as "dung beetle" (Mistkäfer), e.g., 'Come here for a bit, old dung beetle!' or 'Hey, look at the old dung beetle!', Nabokov remarks that this was just her friendly way of addressing him and that Gregor "is not, technically, a dung beetle. He is merely a big beetle."
Online editions
Commentary
Related | [
{
"paragraph_id": 0,
"text": "Metamorphosis (German: Die Verwandlung) is a novella written by Franz Kafka and first published in 1915. One of Kafka's best-known works, Metamorphosis tells the story of salesman Gregor Samsa, who wakes one morning to find himself inexplicably transformed into a huge insect (German: ungeheueres Ungeziefer, lit. \"monstrous vermin\") and subsequently struggles to adjust to this new condition. The novella has been widely discussed among literary critics, who have offered varied interpretations. In popular culture and adaptations of the novella, the insect is commonly depicted as a cockroach.",
"title": ""
},
{
"paragraph_id": 1,
"text": "With a length of about 70 printed pages over three chapters, it is the longest of the stories Kafka considered complete and published during his lifetime. The text was first published in 1915 in the October issue of the journal Die weißen Blätter under the editorship of René Schickele. The first edition in book form appeared in December 1915 in the series Der jüngste Tag, edited by Kurt Wolff.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Gregor Samsa wakes up one morning to find himself transformed into a \"monstrous vermin\". He initially considers the transformation to be temporary and slowly ponders the consequences of this metamorphosis. Stuck on his back and unable to get up and leave the bed, Gregor reflects on his job as a traveling salesman and cloth merchant, which he characterizes as being full of \"temporary and constantly changing human relationships, which never come from the heart\". He sees his employer as a despot and would quickly quit his job if he were not his family's sole breadwinner and working off his bankrupt father's debts. While trying to move, Gregor finds that his office manager, the chief clerk, has shown up to check on him, indignant about Gregor's unexcused absence. Gregor attempts to communicate with both the manager and his family, but all they can hear from behind the door is incomprehensible vocalizations. Gregor laboriously drags himself across the floor and opens the door. The clerk, upon seeing the transformed Gregor, flees the apartment. Gregor's family is horrified, and his father drives him back into his room, injuring his side by shoving him when he gets stuck in the doorway.",
"title": "Plot"
},
{
"paragraph_id": 3,
"text": "With Gregor's unexpected transformation, his family is deprived of financial stability. They keep Gregor locked in his room, and he begins to accept his new identity and adapt to his new body. His sister Grete is the only one willing to bring him food, which they find Gregor only likes if it is rotten. He spends much of his time crawling around on the floor, walls, and ceiling and, upon discovering Gregor's new pastime, Grete decides to remove his furniture to give him more space. She and her mother begin to empty the room of everything, except the sofa under which Gregor hides whenever anyone comes in, but he finds their actions deeply distressing in fear that he might forget his past, while he still was a human, and desperately tries to save a particularly loved portrait on the wall of a woman clad in fur. His mother loses consciousness at the sight of him clinging to the image to protect it. When Grete rushes out of the room to get some aromatic spirits, Gregor follows her and is slightly hurt when she drops a medicine bottle and it breaks. Their father returns home and angrily hurls apples at Gregor, one of which becomes lodged in a sensitive spot in his back and severely wounds him.",
"title": "Plot"
},
{
"paragraph_id": 4,
"text": "Gregor suffers from his injuries for the rest of his life and takes very little food. His father, mother, and sister all get jobs and increasingly begin to neglect him, and his room begins to be used for storage. For a time, his family leaves Gregor's door open in the evenings so he can listen to them talk to each other, but this happens less frequently once they rent a room in the apartment to three male tenants, since they are not told about Gregor. One day the charwoman, who briefly looks in on Gregor each day when she arrives and before she leaves, neglects to close his door fully. Attracted by Grete's violin-playing in the living room, Gregor crawls out and is spotted by the unsuspecting tenants, who complain about the apartment's unhygienic conditions and say they are leaving, will not pay anything for the time they have already stayed, and may take legal action. Grete, who has tired of taking care of Gregor and realizes the burden his existence puts on each member of the family, tells her parents they must get rid of \"it\" or they will all be ruined. Gregor, understanding that he is no longer wanted, laboriously makes his way back to his room and dies of starvation before sunrise. His body is discovered by the charwoman, who alerts his family and then disposes of the corpse. The relieved and optimistic father, mother, and sister all take the day off work. They travel by tram into the countryside and make plans to move to a smaller apartment to save money. During the short trip, Mr. and Mrs. Samsa realize that, despite the hardships that have brought some paleness to her face, Grete has grown up into a pretty young lady with a good figure and they think about finding her a husband.",
"title": "Plot"
},
{
"paragraph_id": 5,
"text": "Gregor is the main character of the story. He works as a traveling salesman in order to provide money for his sister and parents. He wakes up one morning finding himself transformed into an insect. After the metamorphosis, Gregor becomes unable to work and is confined to his room for most of the remainder of the story. This prompts his family to begin working once again. Gregor is depicted as isolated from society and often both misunderstands the true intentions of others and is misunderstood.",
"title": "Characters"
},
{
"paragraph_id": 6,
"text": "The name \"Gregor Samsa\" appears to derive partly from literary works Kafka had read. A character in The Story of Young Renate Fuchs, by German novelist Jakob Wassermann (1873–1934), is named Gregor Samassa. The Viennese author Leopold von Sacher-Masoch, whose sexual imagination gave rise to the idea of masochism, is also an influence. Sacher-Masoch wrote Venus in Furs (1870), a novel whose hero assumes the name Gregor at one point. A \"Venus in furs\" recurs in The Metamorphosis in the picture that Gregor Samsa has hung on his bedroom wall.",
"title": "Characters"
},
{
"paragraph_id": 7,
"text": "Grete is Gregor's younger sister, and she becomes his caretaker after his metamorphosis. They initially have a close relationship, but this quickly fades. At first, she volunteers to feed him and clean his room, but she grows increasingly impatient with the burden and begins to leave his room in disarray out of spite. Her initial decision to take care of Gregor may have come from a desire to contribute and be useful to the family, since she becomes angry and upset when the mother cleans his room. It is made clear that Grete is disgusted by Gregor, as she always opens the window upon entering his room to keep from feeling nauseous and leaves without doing anything if Gregor is in plain sight. She plays the violin and dreams of going to the conservatory to study, a dream Gregor had intended to make happen; he had planned on making the announcement on Christmas Day. To help provide an income for the family after Gregor's transformation, she starts working as a salesgirl. Grete is also the first to suggest getting rid of Gregor, which causes Gregor to plan his own death. At the end of the story, Grete's parents realize that she has become beautiful and full-figured and decide to consider finding her a husband.",
"title": "Characters"
},
{
"paragraph_id": 8,
"text": "Mr Samsa is Gregor's father. After the metamorphosis, he is forced to return to work in order to support the family financially. His attitude towards his son is harsh. He regards the transformed Gregor with disgust and possibly even fear and attacks Gregor on several occasions. Even when Gregor was human, Mr Samsa regarded him mostly as a source of income for the family. Gregor's relationship with his father is modelled after Kafka's own relationship with his father. The theme of alienation becomes quite evident here.",
"title": "Characters"
},
{
"paragraph_id": 9,
"text": "Mrs Samsa is Gregor's mother. She is portrayed as a submissive wife. She suffers from asthma, which is a constant source of concern for Gregor. She is initially shocked at Gregor's transformation, but she still wants to enter his room. However, it proves too much for her and gives rise to a conflict between her maternal impulse and sympathy and her fear and revulsion at Gregor's new form.",
"title": "Characters"
},
{
"paragraph_id": 10,
"text": "The charwoman is an old widowed lady who is employed by the Samsa family after their previous maid begs to be dismissed on account of the fright she experiences owing to Gregor's new form. She is paid to take care of their household duties. Apart from Grete and her father, the charwoman is the only person who is in close contact with Gregor, and she is unafraid in her dealings with Gregor. She does not question his changed state; she seemingly accepts it as a normal part of his existence. She is the one who notices Gregor has died and disposes of his body.",
"title": "Characters"
},
{
"paragraph_id": 11,
"text": "Like most of Kafka's works, Metamorphosis tends to be given a religious (Max Brod) or psychological interpretation by most of its interpreters. It has been particularly common to read the story as an expression of Kafka's father complex, as was first done by Charles Neider in his The Frozen Sea: A Study of Franz Kafka (1948). Besides the psychological approach, interpretations focusing on sociological aspects, which see the Samsa family as a portrayal of general social circumstances, have also gained a large following.",
"title": "Interpretation"
},
{
"paragraph_id": 12,
"text": "Vladimir Nabokov rejected such interpretations, noting that they do not live up to Kafka's art. He instead chose an interpretation guided by the artistic detail, but categorically excluded any attempts at deciphering a symbolic or allegoric level of meaning. Arguing against the popular father-complex theory, he observed that it is the sister more than the father who should be considered the cruelest person in the story, since she is the one backstabbing Gregor. In Nabokov's view, the central narrative theme is the artist's struggle for existence in a society replete with philistines that destroys him step by step. Commenting on Kafka's style he writes \"The transparency of his style underlines the dark richness of his fantasy world. Contrast and uniformity, style and the depicted, portrayal and fable are seamlessly intertwined\".",
"title": "Interpretation"
},
{
"paragraph_id": 13,
"text": "In 1989, Nina Pelikan Straus wrote a feminist interpretation of Metamorphosis, noting that the story is not only about the metamorphosis of Gregor, but also about the metamorphosis of his family, and in particular, his younger sister Grete. Straus suggested that the social and psychoanalytic resonances of the text depend on Grete's role as woman, daughter, and sister, and that prior interpretations failed to recognize Grete's centrality to the story.",
"title": "Interpretation"
},
{
"paragraph_id": 14,
"text": "In 1999, Gerhard Rieck pointed out that Gregor and his sister, Grete, form a pair, which is typical of many of Kafka's texts: it is made up of one passive, rather austere, person and another active, more libidinal, person. The appearance of figures with such almost irreconcilable personalities who form couples in Kafka's works has been evident since he wrote his short story \"Description of a Struggle\" (e.g. the narrator/young man and his \"acquaintance\"). They also appear in \"The Judgment\" (Georg and his friend in Russia), in all three of his novels (e.g. Robinson and Delamarche in Amerika) as well as in his short stories \"A Country Doctor\" (the country doctor and the groom) and \"A Hunger Artist\" (the hunger artist and the panther). Rieck views these pairs as parts of one single person (hence the similarity between the names Gregor and Grete) and in the final analysis as the two determining components of the author's personality. Not only in Kafka's life but also in his oeuvre does Rieck see the description of a fight between these two parts.",
"title": "Interpretation"
},
{
"paragraph_id": 15,
"text": "Reiner Stach argued in 2004 that no elucidating comments were needed to illustrate the story and that it was convincing by itself, self-contained, even absolute. He believes that there is no doubt the story would have been admitted to the canon of world literature even if we had known nothing about its author.",
"title": "Interpretation"
},
{
"paragraph_id": 16,
"text": "According to Peter-André Alt (2005), the figure of the beetle becomes a drastic expression of Gregor Samsa's deprived existence. Reduced to carrying out his professional responsibilities, anxious to guarantee his advancement and vexed with the fear of making commercial mistakes, he is the creature of a functionalistic professional life.",
"title": "Interpretation"
},
{
"paragraph_id": 17,
"text": "In 2007, Ralf Sudau took the view that particular attention should be paid to the motifs of self-abnegation and disregard for reality. Gregor's earlier behavior was characterized by self-renunciation and his pride in being able to provide a secure and leisured existence for his family. When he finds himself in a situation where he himself is in need of attention and assistance and in danger of becoming a parasite, he doesn't want to admit this new role to himself and be disappointed by the treatment he receives from his family, which is becoming more and more careless and even hostile over time. According to Sudau, Gregor is self-denyingly hiding his nauseating appearance under the sofa and gradually famishing, thus pretty much complying with the more or less blatant wish of his family. His gradual emaciation and \"self-reduction\" shows signs of a fatal hunger strike (which on the part of Gregor is unconscious and unsuccessful, on the part of his family not understood or ignored). Sudau also lists the names of selected interpreters of The Metamorphosis (e.g. Beicken, Sokel, Sautermeister and Schwarz). According to them, the narrative is a metaphor for the suffering resulting from leprosy, an escape into the disease or a symptom onset, an image of an existence which is defaced by the career, or a revealing staging which cracks the veneer and superficiality of everyday circumstances and exposes its cruel essence. He further notes that Kafka's representational style is on one hand characterized by an idiosyncratic interpenetration of realism and fantasy, a worldly mind, rationality, and clarity of observation, and on the other hand by folly, outlandishness, and fallacy. He also points to the grotesque and tragicomical, silent film-like elements.",
"title": "Interpretation"
},
{
"paragraph_id": 18,
"text": "Fernando Bermejo-Rubio (2012) argued that the story is often viewed unjustly as inconclusive. He derives his interpretative approach from the fact that the descriptions of Gregor and his family environment in The Metamorphosis contradict each other. Diametrically opposed versions exist of Gregor's back, his voice, of whether he is ill or already undergoing the metamorphosis, whether he is dreaming or not, which treatment he deserves, of his moral point of view (false accusations made by Grete), and whether his family is blameless or not. Bermejo-Rubio emphasizes that Kafka ordered in 1915 that there should be no illustration of Gregor. He argues that it is exactly this absence of a visual narrator that is essential for Kafka's project, for he who depicts Gregor would stylize himself as an omniscient narrator. Another reason why Kafka opposed such an illustration is that the reader should not be biased in any way before reading. That the descriptions are not compatible with each other is indicative of the fact that the opening statement is not to be trusted. If the reader isn't hoodwinked by the first sentence and still thinks of Gregor as a human being, he will view the story as conclusive and realize that Gregor is a victim of his own degeneration.",
"title": "Interpretation"
},
{
"paragraph_id": 19,
"text": "Volker Drüke (2013) believes that the crucial metamorphosis in the story is that of Grete. She is the character the title is directed at. Gregor's metamorphosis is followed by him languishing and ultimately dying. Grete, by contrast, has matured as a result of the new family circumstances and assumed responsibility. In the end – after the brother's death – the parents also notice that their daughter, \"who was getting more animated all the time, [...] had recently blossomed into a pretty and shapely girl\", and want to look for a partner for her. From this standpoint Grete's transition, her metamorphosis from a girl into a woman, is the subtextual theme of the story.",
"title": "Interpretation"
},
{
"paragraph_id": 20,
"text": "Translators of the novel into English have given widely different texts, including of the opening sentence, which in the original is \"Als Gregor Samsa eines Morgens aus unruhigen Träumen erwachte, fand er sich in seinem Bett zu einem ungeheuren Ungeziefer verwandelt\". In their 1933 translation of the story, Willa Muir and Edwin Muir rendered it as \"As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect\".",
"title": "Translation of the opening sentence"
},
{
"paragraph_id": 21,
"text": "The phrase \"ungeheuren Ungeziefer\" in particular has been rendered in many different ways by translators. These include:",
"title": "Translation of the opening sentence"
},
{
"paragraph_id": 22,
"text": "In Middle High German, Ungeziefer literally means \"unclean animal not suitable for sacrifice\" and is sometimes used colloquially to mean \"bug\", with the gist of \"dirty, nasty bug\". It can also be translated as \"vermin\". English translators of The Metamorphosis have often rendered it as \"insect\".",
"title": "Translation of the opening sentence"
},
{
"paragraph_id": 23,
"text": "What kind of bug or vermin Kafka envisaged remains a debated mystery. Kafka had no intention of labeling Gregor as any specific thing, but instead was trying to convey Gregor's disgust at his transformation. In his letter to his publisher of 25 October 1915, in which he discusses his concern about the cover illustration for the first edition, Kafka does use the term Insekt, though, saying \"The insect itself is not to be drawn. It is not even to be seen from a distance.\"",
"title": "Translation of the opening sentence"
},
{
"paragraph_id": 24,
"text": "Vladimir Nabokov, who was a lepidopterist as well as a writer and literary critic, concluded from details in the text that Gregor was not a cockroach, but a beetle with wings under his shell, and capable of flight. Nabokov left a sketch annotated \"just over three feet long\" on the opening page of his English teaching copy. In his accompanying lecture notes, he discusses the type of insect Gregor has been transformed into. Noting that the cleaning lady addressed Gregor as \"dung beetle\" (Mistkäfer), e.g., 'Come here for a bit, old dung beetle!' or 'Hey, look at the old dung beetle!'\", Nabokov remarks that this was just her way of friendly addressing and that Gregor \"is not, technically, a dung beetle. He is merely a big beetle.\"",
"title": "Translation of the opening sentence"
},
{
"paragraph_id": 25,
"text": "Online editions",
"title": "External links"
},
{
"paragraph_id": 26,
"text": "Commentary",
"title": "External links"
},
{
"paragraph_id": 27,
"text": "Related",
"title": "External links"
}
]
| Metamorphosis is a novella written by Franz Kafka and first published in 1915. One of Kafka's best-known works, Metamorphosis tells the story of salesman Gregor Samsa, who wakes one morning to find himself inexplicably transformed into a huge insect and subsequently struggles to adjust to this new condition. The novella has been widely discussed among literary critics, who have offered varied interpretations. In popular culture and adaptations of the novella, the insect is commonly depicted as a cockroach. With a length of about 70 printed pages over three chapters, it is the longest of the stories Kafka considered complete and published during his lifetime. The text was first published in 1915 in the October issue of the journal Die weißen Blätter under the editorship of René Schickele. The first edition in book form appeared in December 1915 in the series Der jüngste Tag, edited by Kurt Wolff. | 2001-12-12T09:00:52Z | 2023-11-28T20:11:41Z | [
"Template:Redirect",
"Template:Lit.",
"Template:Authority control",
"Template:Gutenberg",
"Template:Clarification needed",
"Template:Cite news",
"Template:Wikisourcelang",
"Template:Wikisource",
"Template:Modernism",
"Template:Lang-de",
"Template:Anchor",
"Template:Reflist",
"Template:In lang",
"Template:ISBN",
"Template:Doi",
"Template:Short description",
"Template:About",
"Template:Main",
"Template:Cite journal",
"Template:Infobox book",
"Template:Cite book",
"Template:Commons category",
"Template:Webarchive",
"Template:Cite web",
"Template:Librivox book",
"Template:Kafka",
"Template:Use dmy dates",
"Template:German literature",
"Template:The Metamorphosis"
]
| https://en.wikipedia.org/wiki/The_Metamorphosis |
10,865 | FSF | FSF may refer to: | [
{
"paragraph_id": 0,
"text": "FSF may refer to:",
"title": ""
}
]
| FSF may refer to: | 2002-02-25T15:43:11Z | 2023-09-29T16:49:11Z | [
"Template:TOC right",
"Template:Lang",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/FSF |
10,868 | Francisco Goya | Francisco José de Goya y Lucientes (/ˈɡɔɪə/; Spanish: [fɾanˈθisko xoˈse ðe ˈɣoʝa i luˈθjentes]; 30 March 1746 – 16 April 1828) was a Spanish romantic painter and printmaker. He is considered the most important Spanish artist of the late 18th and early 19th centuries. His paintings, drawings, and engravings reflected contemporary historical upheavals and influenced important 19th- and 20th-century painters. Goya is often referred to as the last of the Old Masters and the first of the moderns.
Goya was born to a middle-class family in 1746, in Fuendetodos in Aragon. He studied painting from age 14 under José Luzán y Martinez and moved to Madrid to study with Anton Raphael Mengs. He married Josefa Bayeu in 1773. Goya became a court painter to the Spanish Crown in 1786 and this early portion of his career is marked by portraits of the Spanish aristocracy and royalty, and Rococo-style tapestry cartoons designed for the royal palace.
Although Goya's letters and writings survive, little is known about his thoughts. He had a severe and undiagnosed illness in 1793 that left him deaf, after which his work became progressively darker and pessimistic. His later easel and mural paintings, prints and drawings appear to reflect a bleak outlook on personal, social and political levels, and contrast with his social climbing. He was appointed Director of the Royal Academy in 1795, the year Manuel Godoy made an unfavorable treaty with France. In 1799, Goya became Primer Pintor de Cámara (Prime Court Painter), the highest rank for a Spanish court painter. In the late 1790s, commissioned by Godoy, he completed his La maja desnuda, a remarkably daring nude for the time and clearly indebted to Diego Velázquez. In 1800–01, he painted Charles IV of Spain and His Family, also influenced by Velázquez.
In 1807, Napoleon led the French army into the Peninsular War against Spain. Goya remained in Madrid during the war, which seems to have affected him deeply. Although he did not speak his thoughts in public, they can be inferred from his Disasters of War series of prints (although published 35 years after his death) and his 1814 paintings The Second of May 1808 and The Third of May 1808. Other works from his mid-period include the Caprichos and Los Disparates etching series, and a wide variety of paintings concerned with insanity, mental asylums, witches, fantastical creatures and religious and political corruption, all of which suggest that he feared for both his country's fate and his own mental and physical health.
His late period culminates with the Black Paintings of 1819–1823, applied in oil on the plaster walls of his house, the Quinta del Sordo (House of the Deaf Man), where, disillusioned by political and social developments in Spain, he lived in near isolation. Goya eventually abandoned Spain in 1824 to retire to the French city of Bordeaux, accompanied by his much younger maid and companion, Leocadia Weiss, who may have been his lover. There he completed his La Tauromaquia series and a number of other works. Following a stroke that left him paralyzed on his right side, Goya died and was buried on 16 April 1828 aged 82.
Francisco de Goya was born in Fuendetodos, Aragón, Spain, on 30 March 1746 to José Benito de Goya y Franque and Gracia de Lucientes y Salvador. The family had moved that year from the city of Zaragoza, but there is no record why; likely José was commissioned to work there. They were lower middle-class. José was the son of a notary and of Basque origin, his ancestors being from Zerain, earning his living as a gilder, specialising in religious and decorative craftwork. He oversaw the gilding and most of the ornamentation during the rebuilding of the Basilica of Our Lady of the Pillar (Santa Maria del Pilar), the principal cathedral of Zaragoza. Francisco was their fourth child, following his sister Rita (b. 1737), brother Tomás (b. 1739) (who was to follow in his father's trade) and second sister Jacinta (b. 1743). There were two younger sons, Mariano (b. 1750) and Camilo (b. 1753).
His mother's family had pretensions of nobility and the house, a modest brick cottage, was owned by her family and, perhaps fancifully, bore their crest. About 1749 José and Gracia bought a home in Zaragoza and were able to return to live in the city. Although there are no surviving records, it is thought that Goya may have attended the Escuelas Pías de San Antón, which offered free schooling. His education seems to have been adequate but not enlightening; he had reading, writing and numeracy, and some knowledge of the classics. According to Robert Hughes the artist "seems to have taken no more interest than a carpenter in philosophical or theological matters, and his views on painting ... were very down to earth: Goya was no theoretician." At school he formed a close and lifelong friendship with fellow pupil Martín Zapater; the 131 letters Goya wrote to him from 1775 until Zapater's death in 1803 give valuable insight into Goya's early years at the court in Madrid.
At age 14 Goya studied under the painter José Luzán, copying stamps for four years until he decided to work on his own, as he later wrote, to "paint from my invention". He moved to Madrid to study with Anton Raphael Mengs, a popular painter with Spanish royalty. He clashed with his master, and his examinations were unsatisfactory. Goya submitted entries for the Real Academia de Bellas Artes de San Fernando in 1763 and 1766 but was denied entrance to the academy.
Rome was then the cultural capital of Europe and held all the prototypes of classical antiquity, while Spain lacked a coherent artistic direction, with all of its significant visual achievements in the past. Having failed to earn a scholarship, Goya relocated at his own expense to Rome in the old tradition of European artists stretching back at least to Albrecht Dürer. He was an unknown at the time and so the records are scant and uncertain. Early biographers have him travelling to Rome with a gang of bullfighters, where he worked as a street acrobat, or for a Russian diplomat, or fell in love with a beautiful young nun whom he plotted to abduct from her convent. It is possible that Goya completed two surviving mythological paintings during the visit, a Sacrifice to Vesta and a Sacrifice to Pan, both dated 1771.
In 1771 he won second prize in a painting competition organized by the City of Parma. That year he returned to Zaragoza and painted elements of the cupolas of the Basilica of the Pillar (including Adoration of the Name of God), a cycle of frescoes for the monastic church of the Charterhouse of Aula Dei, and the frescoes of the Sobradiel Palace. He studied with the Aragonese artist Francisco Bayeu y Subías and his painting began to show signs of the delicate tonalities for which he became famous. He befriended Francisco Bayeu and married his sister Josefa (he nicknamed her "Pepa") on 25 July 1773. Their first child, Antonio Juan Ramon Carlos, was born on 29 August 1774. Of their seven children only one, a son named Javier, survived into adulthood.
Francisco Bayeu (Josefa Bayeu's brother), a member of the Real Academia de Bellas Artes de San Fernando from 1765 and director of the tapestry works from 1777, helped Goya earn a commission for a series of tapestry cartoons for the Royal Tapestry Factory. Over five years he designed some 42 patterns, many of which were used to decorate and insulate the stone walls of El Escorial and the Palacio Real del Pardo, the residences of the Spanish monarchs. While designing tapestries was neither prestigious nor well paid, his cartoons are mostly popularist in a rococo style, and Goya used them to bring himself to wider attention.
The cartoons were not his only royal commissions, and were accompanied by a series of engravings, mostly copies after old masters such as Marcantonio Raimondi and Velázquez. Goya had a complicated relationship to the latter artist; while many of his contemporaries saw folly in Goya's attempts to copy and emulate him, he had access to a wide range of the long-dead painter's works that had been contained in the royal collection. Nonetheless, etching was a medium that the young artist was to master, a medium that was to reveal both the true depths of his imagination and his political beliefs. His c. 1779 etching of The Garrotted Man ("El agarrotado") was the largest work he had produced to date, and an obvious foreboding of his later "Disasters of War" series.
Goya was beset by illness, and his condition was used against him by his rivals, who looked jealously upon any artist seen to be rising in stature. Some of the larger cartoons, such as The Wedding, were more than 8 by 10 feet, and had proved a drain on his physical strength. Ever resourceful, Goya turned this misfortune around, claiming that his illness had allowed him the insight to produce works that were more personal and informal. However, he found the format limiting, as it did not allow him to capture complex color shifts or texture, and was unsuited to the impasto and glazing techniques he was by then applying to his painted works. The tapestries read as comments on human types, fashion and fads.
Other works from the period include a canvas for the altar of the Church of San Francisco El Grande in Madrid, which led to his appointment as a member of the Royal Academy of Fine Art.
In 1783, the Count of Floridablanca, favorite of King Charles III, commissioned Goya to paint his portrait. He became friends with the King's half-brother Luis, and spent two summers working on portraits of both the Infante and his family. During the 1780s, his circle of patrons grew to include the Duke and Duchess of Osuna, the King and other notable people of the kingdom whom he painted. In 1786, Goya was given a salaried position as painter to Charles III.
Goya was appointed court painter to Charles IV in 1789. The following year he became First Court Painter, with a salary of 50,000 reales and an allowance of 500 ducats for a coach. He painted portraits of the king and the queen, and the Spanish Prime Minister Manuel de Godoy and many other nobles. These portraits are notable for their disinclination to flatter; his Charles IV of Spain and His Family is an especially brutal assessment of a royal family. Modern interpreters view the portrait as satirical; it is thought to reveal the corruption behind the rule of Charles IV. Under his reign his wife Louisa was thought to have had the real power, and thus Goya placed her at the center of the group portrait. From the back left of the painting one can see the artist himself looking out at the viewer, and the painting behind the family depicts Lot and his daughters, thus once again echoing the underlying message of corruption and decay.
Goya earned commissions from the highest ranks of the Spanish nobility, including Pedro Téllez-Girón, 9th Duke of Osuna and his wife María Josefa Pimentel, 12th Countess-Duchess of Benavente, José Álvarez de Toledo, Duke of Alba and his wife María del Pilar de Silva, and María Ana de Pontejos y Sandoval, Marchioness of Pontejos. In 1801 he painted Godoy in a commission to commemorate the victory in the brief War of the Oranges against Portugal. The two were friends, even if Goya's 1801 portrait is usually seen as satire. Yet even after Godoy's fall from grace the politician referred to the artist in warm terms. Godoy saw himself as instrumental in the publication of the Caprichos and is widely believed to have commissioned La maja desnuda.
La Maja Desnuda (La maja desnuda) has been described as "the first totally profane life-size female nude in Western art" without pretense to allegorical or mythological meaning. The identity of the Majas is uncertain. The most popularly cited models are the Duchess of Alba, with whom Goya was sometimes thought to have had an affair, and Pepita Tudó, mistress of Manuel de Godoy. Neither theory has been verified, and it remains as likely that the paintings represent an idealized composite. The paintings were never publicly exhibited during Goya's lifetime and were owned by Godoy. In 1808 all Godoy's property was seized by Ferdinand VII after his fall from power and exile, and in 1813 the Inquisition confiscated both works as 'obscene', returning them in 1836 to the Academy of Fine Arts of San Fernando. In 1798 he painted luminous and airy scenes for the pendentives and cupola of the Real Ermita (Chapel) of San Antonio de la Florida in Madrid. His depiction of a miracle of Saint Anthony of Padua is devoid of the customary angels and instead treats the miracle as if it were a theatrical event performed by ordinary people.
At some time between late 1792 and early 1793 an undiagnosed illness left Goya deaf. He became withdrawn and introspective while the direction and tone of his work changed. He began the series of aquatinted etchings, published in 1799 as the Caprichos—completed in parallel with the more official commissions of portraits and religious paintings. In 1799 Goya published 80 Caprichos prints depicting what he described as "the innumerable foibles and follies to be found in any civilized society, and from the common prejudices and deceitful practices which custom, ignorance, or self-interest have made usual". The visions in these prints are partly explained by the caption "The sleep of reason produces monsters". Yet these are not solely bleak; they demonstrate the artist's sharp satirical wit, as in Capricho number 52, What a Tailor Can Do!
While convalescing between 1793 and 1794, Goya completed a set of eleven small pictures painted on tin that mark a significant change in the tone and subject matter of his art, and draw from the dark and dramatic realms of fantasy nightmare. Yard with Lunatics is a vision of loneliness, fear and social alienation. The condemnation of brutality towards prisoners (whether criminal or insane) is a subject that Goya assayed in later works that focused on the degradation of the human figure. It was one of the first of Goya's mid-1790s cabinet paintings, in which his earlier search for ideal beauty gave way to an examination of the relationship between naturalism and fantasy that would preoccupy him for the rest of his career. He was undergoing a nervous breakdown and entering prolonged physical illness, and admitted that the series was created to reflect his own self-doubt, anxiety and fear that he was losing his mind. Goya wrote that the works served "to occupy my imagination, tormented as it is by contemplation of my sufferings." The series, he said, consisted of pictures which "normally find no place in commissioned works."
Goya's physical and mental breakdown seems to have happened a few weeks after the French declaration of war on Spain. A contemporary reported, "The noises in his head and deafness aren't improving, yet his vision is much better and he is back in control of his balance." These symptoms may indicate a prolonged viral encephalitis, or possibly a series of miniature strokes resulting from high blood pressure and which affected the hearing and balance centers of the brain. Symptoms of tinnitus, episodes of imbalance and progressive deafness are typical of Ménière's disease. It is possible that Goya had cumulative lead poisoning, as he used massive amounts of lead white—which he ground himself—in his paintings, both as a canvas primer and as a primary color.
Other postmortem diagnostic assessments point toward paranoid dementia, possibly due to brain trauma, as evidenced by marked changes in his work after his recovery, culminating in the "black" paintings. Art historians have noted Goya's singular ability to express his personal demons as horrific and fantastic imagery that speaks universally, and allows his audience to find its own catharsis in the images.
The French army invaded Spain in 1808, leading to the Peninsular War of 1808–1814. The extent of Goya's involvement with the court of the "intruder king", Joseph I, the brother of Napoleon Bonaparte, is not known; he painted works for French patrons and sympathisers, but kept neutral during the fighting. After the restoration of the Spanish King Ferdinand VII in 1814, Goya denied any involvement with the French. By the time of his wife Josefa's death in 1812, he was painting The Second of May 1808 and The Third of May 1808, and preparing the series of etchings later known as The Disasters of War (Los desastres de la guerra). Ferdinand VII returned to Spain in 1814 but relations with Goya were not cordial. The artist completed portraits of the king for a variety of ministries, but not for the king himself.
Although Goya did not make his intention known when creating The Disasters of War, art historians view them as a visual protest against the violence of the 1808 Dos de Mayo Uprising, the subsequent Peninsular War and the move against liberalism in the aftermath of the restoration of the Bourbon monarchy in 1814. The scenes are singularly disturbing, sometimes macabre in their depiction of battlefield horror, and represent an outraged conscience in the face of death and destruction. They were not published until 1863, 35 years after his death. It is likely that only then was it considered politically safe to distribute a sequence of artworks criticising both the French and restored Bourbons.
The first 47 plates in the series focus on incidents from the war and show the consequences of the conflict on individual soldiers and civilians. The middle series (plates 48 to 64) record the effects of the famine that hit Madrid in 1811–12, before the city was liberated from the French. The final 17 reflect the bitter disappointment of liberals when the restored Bourbon monarchy, encouraged by the Catholic hierarchy, rejected the Spanish Constitution of 1812 and opposed both state and religious reform. Since their first publication, Goya's scenes of atrocities, starvation, degradation and humiliation have been described as the "prodigious flowering of rage".
His works from 1814 to 1819 are mostly commissioned portraits, but also include the altarpiece of Santa Justa and Santa Rufina for the Cathedral of Seville, the print series of La Tauromaquia depicting scenes from bullfighting, and probably the etchings of Los Disparates.
Records of Goya's later life are relatively scant, and ever politically aware, he suppressed a number of his works from this period, working instead in private. He was tormented by a dread of old age and fear of madness. Goya had been a successful and royally placed artist, but withdrew from public life during his final years. From the late 1810s he lived in near-solitude outside Madrid in a farmhouse converted into a studio. The house had become known as "La Quinta del Sordo" (The House of the Deaf Man), after the nearest farmhouse that had coincidentally also belonged to a deaf man.
Art historians assume Goya felt alienated from the social and political trends that followed the 1814 restoration of the Bourbon monarchy, and that he viewed these developments as reactionary means of social control. In his unpublished art he seems to have railed against what he saw as a tactical retreat into Medievalism. It is thought that he had hoped for political and religious reform, but like many liberals became disillusioned when the restored Bourbon monarchy and Catholic hierarchy rejected the Spanish Constitution of 1812.
At the age of 75, alone and in mental and physical despair, he completed the work of his 14 Black Paintings, all of which were executed in oil directly onto the plaster walls of his house. Goya did not intend for the paintings to be exhibited, did not write of them, and likely never spoke of them. Around 1874, 50 years after his death, they were taken down and transferred to a canvas support by owner Baron Frédéric Émile d'Erlanger. Many of the works were significantly altered during the restoration, and in the words of Arthur Lubow what remain are "at best a crude facsimile of what Goya painted." The effects of time on the murals, coupled with the inevitable damage caused by the delicate operation of mounting the crumbling plaster on canvas, meant that most of the murals suffered extensive damage and loss of paint. Today, they are on permanent display at the Museo del Prado, Madrid.
Leocadia Weiss (née Zorrilla, 1790–1856), the artist's maid, younger by 35 years, and a distant relative, lived with and cared for Goya after Bayeu's death. She stayed with him in his Quinta del Sordo villa until 1824 with her daughter Rosario. Leocadia was probably similar in features to Goya's first wife Josefa Bayeu, to the point that one of his well-known portraits bears the cautious title of Josefa Bayeu (or Leocadia Weiss).
Not much is known about her beyond her fiery temperament. She was likely related to the Goicoechea family, a wealthy dynasty into which the artist's son, Javier, had married. It is known that Leocadia had an unhappy marriage with a jeweler, Isidore Weiss, but had been separated from him since 1811, after he had accused her of "illicit conduct". She had two children before that time, and bore a third, Rosario, in 1814 when she was 26. Isidore was not the father, and it has often been speculated—although with little firm evidence—that the child belonged to Goya. There has been much speculation that Goya and Weiss were romantically linked; however, it is more likely the affection between them was sentimental.
Goya died on 16 April 1828. Leocadia was left nothing in Goya's will; mistresses were often omitted in such circumstances, but it is also likely that he did not want to dwell on his mortality by thinking about or revising his will. She wrote to a number of Goya's friends to complain of her exclusion, but most of them were by then old men or had died, and did not reply. Largely destitute, she moved into rented accommodation, later passing on her copy of the Caprichos for free.
Goya's body was later re-interred in the Real Ermita de San Antonio de la Florida in Madrid. Goya's skull was missing, a detail the Spanish consul immediately communicated to his superiors in Madrid, who wired back, "Send Goya, with or without head." | [
{
"paragraph_id": 0,
"text": "Francisco José de Goya y Lucientes (/ˈɡɔɪə/; Spanish: [fɾanˈθisko xoˈse ðe ˈɣoʝa i luˈθjentes]; 30 March 1746 – 16 April 1828) was a Spanish romantic painter and printmaker. He is considered the most important Spanish artist of the late 18th and early 19th centuries. His paintings, drawings, and engravings reflected contemporary historical upheavals and influenced important 19th- and 20th-century painters. Goya is often referred to as the last of the Old Masters and the first of the moderns.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Goya was born to a middle-class family in 1746, in Fuendetodos in Aragon. He studied painting from age 14 under José Luzán y Martinez and moved to Madrid to study with Anton Raphael Mengs. He married Josefa Bayeu in 1773. Goya became a court painter to the Spanish Crown in 1786 and this early portion of his career is marked by portraits of the Spanish aristocracy and royalty, and Rococo-style tapestry cartoons designed for the royal palace.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Although Goya's letters and writings survive, little is known about his thoughts. He had a severe and undiagnosed illness in 1793 that left him deaf, after which his work became progressively darker and pessimistic. His later easel and mural paintings, prints and drawings appear to reflect a bleak outlook on personal, social and political levels, and contrast with his social climbing. He was appointed Director of the Royal Academy in 1795, the year Manuel Godoy made an unfavorable treaty with France. In 1799, Goya became Primer Pintor de Cámara (Prime Court Painter), the highest rank for a Spanish court painter. In the late 1790s, commissioned by Godoy, he completed his La maja desnuda, a remarkably daring nude for the time and clearly indebted to Diego Velázquez. In 1800–01, he painted Charles IV of Spain and His Family, also influenced by Velázquez.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In 1807, Napoleon led the French army into the Peninsular War against Spain. Goya remained in Madrid during the war, which seems to have affected him deeply. Although he did not speak his thoughts in public, they can be inferred from his Disasters of War series of prints (although published 35 years after his death) and his 1814 paintings The Second of May 1808 and The Third of May 1808. Other works from his mid-period include the Caprichos and Los Disparates etching series, and a wide variety of paintings concerned with insanity, mental asylums, witches, fantastical creatures and religious and political corruption, all of which suggest that he feared for both his country's fate and his own mental and physical health.",
"title": ""
},
{
"paragraph_id": 4,
"text": "His late period culminates with the Black Paintings of 1819–1823, applied on oil on the plaster walls of his house the Quinta del Sordo (House of the Deaf Man) where, disillusioned by political and social developments in Spain, he lived in near isolation. Goya eventually abandoned Spain in 1824 to retire to the French city of Bordeaux, accompanied by his much younger maid and companion, Leocadia Weiss, who may have been his lover. There he completed his La Tauromaquia series and a number of other works. Following a stroke that left him paralyzed on his right side, Goya died and was buried on 16 April 1828 aged 82.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Francisco de Goya was born in Fuendetodos, Aragón, Spain, on 30 March 1746 to José Benito de Goya y Franque and Gracia de Lucientes y Salvador. The family had moved that year from the city of Zaragoza, but there is no record why; likely José was commissioned to work there. They were lower middle-class. José was the son of a notary and of Basque origin, his ancestors being from Zerain, earning his living as a gilder, specialising in religious and decorative craftwork. He oversaw the gilding and most of the ornamentation during the rebuilding of the Basilica of Our Lady of the Pillar (Santa Maria del Pilar), the principal cathedral of Zaragoza. Francisco was their fourth child, following his sister Rita (b. 1737), brother Tomás (b. 1739) (who was to follow in his father's trade) and second sister Jacinta (b. 1743). There were two younger sons, Mariano (b. 1750) and Camilo (b. 1753).",
"title": "Early years (1746–1771)"
},
{
"paragraph_id": 6,
"text": "His mother's family had pretensions of nobility and the house, a modest brick cottage, was owned by her family and, perhaps fancifully, bore their crest. About 1749 José and Gracia bought a home in Zaragoza and were able to return to live in the city. Although there are no surviving records, it is thought that Goya may have attended the Escuelas Pías de San Antón, which offered free schooling. His education seems to have been adequate but not enlightening; he had reading, writing and numeracy, and some knowledge of the classics. According to Robert Hughes the artist \"seems to have taken no more interest than a carpenter in philosophical or theological matters, and his views on painting ... were very down to earth: Goya was no theoretician.\" At school he formed a close and lifelong friendship with fellow pupil Martín Zapater; the 131 letters Goya wrote to him from 1775 until Zapater's death in 1803 give valuable insight into Goya's early years at the court in Madrid.",
"title": "Early years (1746–1771)"
},
{
"paragraph_id": 7,
"text": "At age 14 Goya studied under the painter José Luzán, where he copied stamps for 4 years until he decided to work on his own, as he wrote later on \"paint from my invention\". He moved to Madrid to study with Anton Raphael Mengs, a popular painter with Spanish royalty. He clashed with his master, and his examinations were unsatisfactory. Goya submitted entries for the Real Academia de Bellas Artes de San Fernando in 1763 and 1766 but was denied entrance into the academia.",
"title": "Visit to Italy"
},
{
"paragraph_id": 8,
"text": "Rome was then the cultural capital of Europe and held all the prototypes of classical antiquity, while Spain lacked a coherent artistic direction, with all of its significant visual achievements in the past. Having failed to earn a scholarship, Goya relocated at his own expense to Rome in the old tradition of European artists stretching back at least to Albrecht Dürer. He was an unknown at the time and so the records are scant and uncertain. Early biographers have him travelling to Rome with a gang of bullfighters, where he worked as a street acrobat, or for a Russian diplomat, or fell in love with a beautiful young nun whom he plotted to abduct from her convent. It is possible that Goya completed two surviving mythological paintings during the visit, a Sacrifice to Vesta and a Sacrifice to Pan, both dated 1771.",
"title": "Visit to Italy"
},
{
"paragraph_id": 9,
"text": "In 1771 he won second prize in a painting competition organized by the City of Parma. That year he returned to Zaragoza and painted elements of the cupolas of the Basilica of the Pillar (including Adoration of the Name of God), a cycle of frescoes for the monastic church of the Charterhouse of Aula Dei, and the frescoes of the Sobradiel Palace. He studied with the Aragonese artist Francisco Bayeu y Subías and his painting began to show signs of the delicate tonalities for which he became famous. He befriended Francisco Bayeu and married his sister Josefa (he nicknamed her \"Pepa\") on 25 July 1773. Their first child, Antonio Juan Ramon Carlos, was born on 29 August 1774. Of their seven children only one, a son named Javier, survived into adulthood.",
"title": "Visit to Italy"
},
{
"paragraph_id": 10,
"text": "Francisco Bayeu (Josefa Bayeu's brother), 1765 membership of the Real Academia de Bellas Artes de San Fernando, and directorship of the tapestry works from 1777 helped Goya earn a commission for a series of tapestry cartoons for the Royal Tapestry Factory. Over five years he designed some 42 patterns, many of which were used to decorate and insulate the stone walls of El Escorial and the Palacio Real del Pardo, the residences of the Spanish monarchs. While designing tapestries was neither prestigious nor well paid, his cartoons are mostly popularist in a rococo style, and Goya used them to bring himself to wider attention.",
"title": "Madrid (1775–1789)"
},
{
"paragraph_id": 11,
"text": "The cartoons were not his only royal commissions, and were accompanied by a series of engravings, mostly copies after old masters such as Marcantonio Raimondi and Velázquez. Goya had a complicated relationship to the latter artist; while many of his contemporaries saw folly in Goya's attempts to copy and emulate him, he had access to a wide range of the long-dead painter's works that had been contained in the royal collection. Nonetheless, etching was a medium that the young artist was to master, a medium that was to reveal both the true depths of his imagination and his political beliefs. His c. 1779 etching of The Garrotted Man (\"El agarrotado\") was the largest work he had produced to date, and an obvious foreboding of his later \"Disasters of War\" series.",
"title": "Madrid (1775–1789)"
},
{
"paragraph_id": 12,
"text": "Goya was beset by illness, and his condition was used against him by his rivals, who looked jealously upon any artist seen to be rising in stature. Some of the larger cartoons, such as The Wedding, were more than 8 by 10 feet, and had proved a drain on his physical strength. Ever resourceful, Goya turned this misfortune around, claiming that his illness had allowed him the insight to produce works that were more personal and informal. However, he found the format limiting, as it did not allow him to capture complex color shifts or texture, and was unsuited to the impasto and glazing techniques he was by then applying to his painted works. The tapestries seem as comments on human types, fashion and fads.",
"title": "Madrid (1775–1789)"
},
{
"paragraph_id": 13,
"text": "Other works from the period include a canvas for the altar of the Church of San Francisco El Grande in Madrid, which led to his appointment as a member of the Royal Academy of Fine Art.",
"title": "Madrid (1775–1789)"
},
{
"paragraph_id": 14,
"text": "In 1783, the Count of Floridablanca, favorite of King Charles III, commissioned Goya to paint his portrait. He became friends with the King's half-brother Luis, and spent two summers working on portraits of both the Infante and his family. During the 1780s, his circle of patrons grew to include the Duke and Duchess of Osuna, the King and other notable people of the kingdom whom he painted. In 1786, Goya was given a salaried position as painter to Charles III.",
"title": "Court painter"
},
{
"paragraph_id": 15,
"text": "Goya was appointed court painter to Charles IV in 1789. The following year he became First Court Painter, with a salary of 50,000 reales and an allowance of 500 ducats for a coach. He painted portraits of the king and the queen, and the Spanish Prime Minister Manuel de Godoy and many other nobles. These portraits are notable for their disinclination to flatter; his Charles IV of Spain and His Family is an especially brutal assessment of a royal family. Modern interpreters view the portrait as satirical; it is thought to reveal the corruption behind the rule of Charles IV. Under his reign his wife Louisa was thought to have had the real power, and thus Goya placed her at the center of the group portrait. From the back left of the painting one can see the artist himself looking out at the viewer, and the painting behind the family depicts Lot and his daughters, thus once again echoing the underlying message of corruption and decay.",
"title": "Court painter"
},
{
"paragraph_id": 16,
"text": "Goya earned commissions from the highest ranks of the Spanish nobility, including Pedro Téllez-Girón, 9th Duke of Osuna and his wife María Josefa Pimentel, 12th Countess-Duchess of Benavente, José Álvarez de Toledo, Duke of Alba and his wife María del Pilar de Silva, and María Ana de Pontejos y Sandoval, Marchioness of Pontejos. In 1801 he painted Godoy in a commission to commemorate the victory in the brief War of the Oranges against Portugal. The two were friends, even if Goya's 1801 portrait is usually seen as satire. Yet even after Godoy's fall from grace the politician referred to the artist in warm terms. Godoy saw himself as instrumental in the publication of the Caprichos and is widely believed to have commissioned La maja desnuda.",
"title": "Court painter"
},
{
"paragraph_id": 17,
"text": "La Maja Desnuda (La maja desnuda) has been described as \"the first totally profane life-size female nude in Western art\" without pretense to allegorical or mythological meaning. The identity of the Majas is uncertain. The most popularly cited models are the Duchess of Alba, with whom Goya was sometimes thought to have had an affair, and Pepita Tudó, mistress of Manuel de Godoy. Neither theory has been verified, and it remains as likely that the paintings represent an idealized composite. The paintings were never publicly exhibited during Goya's lifetime and were owned by Godoy. In 1808 all Godoy's property was seized by Ferdinand VII after his fall from power and exile, and in 1813 the Inquisition confiscated both works as 'obscene', returning them in 1836 to the Academy of Fine Arts of San Fernando. In 1798 he painted luminous and airy scenes for the pendentives and cupola of the Real Ermita (Chapel) of San Antonio de la Florida in Madrid. His depiction of a miracle of Saint Anthony of Padua is devoid of the customary angels and instead treats the miracle as if it were a theatrical event performed by ordinary people.",
"title": "Middle period (1793–1799)"
},
{
"paragraph_id": 18,
"text": "At some time between late 1792 and early 1793 an undiagnosed illness left Goya deaf. He became withdrawn and introspective while the direction and tone of his work changed. He began the series of aquatinted etchings, published in 1799 as the Caprichos—completed in parallel with the more official commissions of portraits and religious paintings. In 1799 Goya published 80 Caprichos prints depicting what he described as \"the innumerable foibles and follies to be found in any civilized society, and from the common prejudices and deceitful practices which custom, ignorance, or self-interest have made usual\". The visions in these prints are partly explained by the caption \"The sleep of reason produces monsters\". Yet these are not solely bleak; they demonstrate the artist's sharp satirical wit, as in Capricho number 52, What a Tailor Can Do!",
"title": "Middle period (1793–1799)"
},
{
"paragraph_id": 19,
"text": "While convalescing between 1793 and 1794, Goya completed a set of eleven small pictures painted on tin that mark a significant change in the tone and subject matter of his art, and draw from the dark and dramatic realms of fantasy nightmare. Yard with Lunatics is a vision of loneliness, fear and social alienation. The condemnation of brutality towards prisoners (whether criminal or insane) is a subject that Goya assayed in later works that focused on the degradation of the human figure. It was one of the first of Goya's mid-1790s cabinet paintings, in which his earlier search for ideal beauty gave way to an examination of the relationship between naturalism and fantasy that would preoccupy him for the rest of his career. He was undergoing a nervous breakdown and entering prolonged physical illness, and admitted that the series was created to reflect his own self-doubt, anxiety and fear that he was losing his mind. Goya wrote that the works served \"to occupy my imagination, tormented as it is by contemplation of my sufferings.\" The series, he said, consisted of pictures which \"normally find no place in commissioned works.\"",
"title": "Middle period (1793–1799)"
},
{
"paragraph_id": 20,
"text": "Goya's physical and mental breakdown seems to have happened a few weeks after the French declaration of war on Spain. A contemporary reported, \"The noises in his head and deafness aren't improving, yet his vision is much better and he is back in control of his balance.\" These symptoms may indicate a prolonged viral encephalitis, or possibly a series of miniature strokes resulting from high blood pressure and which affected the hearing and balance centers of the brain. Symptoms of tinnitus, episodes of imbalance and progressive deafness are typical of Ménière's disease. It is possible that Goya had cumulative lead poisoning, as he used massive amounts of lead white—which he ground himself—in his paintings, both as a canvas primer and as a primary color.",
"title": "Middle period (1793–1799)"
},
{
"paragraph_id": 21,
"text": "Other postmortem diagnostic assessments point toward paranoid dementia, possibly due to brain trauma, as evidenced by marked changes in his work after his recovery, culminating in the \"black\" paintings. Art historians have noted Goya's singular ability to express his personal demons as horrific and fantastic imagery that speaks universally, and allows his audience to find its own catharsis in the images.",
"title": "Middle period (1793–1799)"
},
{
"paragraph_id": 22,
"text": "The French army invaded Spain in 1808, leading to the Peninsular War of 1808–1814. The extent of Goya's involvement with the court of the \"intruder king\", Joseph I, the brother of Napoleon Bonaparte, is not known; he painted works for French patrons and sympathisers, but kept neutral during the fighting. After the restoration of the Spanish King Ferdinand VII in 1814, Goya denied any involvement with the French. By the time of his wife Josefa's death in 1812, he was painting The Second of May 1808 and The Third of May 1808, and preparing the series of etchings later known as The Disasters of War (Los desastres de la guerra). Ferdinand VII returned to Spain in 1814 but relations with Goya were not cordial. The artist completed portraits of the king for a variety of ministries, but not for the king himself.",
"title": "Peninsular War (1808–1814)"
},
{
"paragraph_id": 23,
"text": "Although Goya did not make his intention known when creating The Disasters of War, art historians view them as a visual protest against the violence of the 1808 Dos de Mayo Uprising, the subsequent Peninsular War and the move against liberalism in the aftermath of the restoration of the Bourbon monarchy in 1814. The scenes are singularly disturbing, sometimes macabre in their depiction of battlefield horror, and represent an outraged conscience in the face of death and destruction. They were not published until 1863, 35 years after his death. It is likely that only then was it considered politically safe to distribute a sequence of artworks criticising both the French and restored Bourbons.",
"title": "Peninsular War (1808–1814)"
},
{
"paragraph_id": 24,
"text": "The first 47 plates in the series focus on incidents from the war and show the consequences of the conflict on individual soldiers and civilians. The middle series (plates 48 to 64) record the effects of the famine that hit Madrid in 1811–12, before the city was liberated from the French. The final 17 reflect the bitter disappointment of liberals when the restored Bourbon monarchy, encouraged by the Catholic hierarchy, rejected the Spanish Constitution of 1812 and opposed both state and religious reform. Since their first publication, Goya's scenes of atrocities, starvation, degradation and humiliation have been described as the \"prodigious flowering of rage\".",
"title": "Peninsular War (1808–1814)"
},
{
"paragraph_id": 25,
"text": "His works from 1814 to 1819 are mostly commissioned portraits, but also include the altarpiece of Santa Justa and Santa Rufina for the Cathedral of Seville, the print series of La Tauromaquia depicting scenes from bullfighting, and probably the etchings of Los Disparates.",
"title": "Peninsular War (1808–1814)"
},
{
"paragraph_id": 26,
"text": "Records of Goya's later life are relatively scant, and ever politically aware, he suppressed a number of his works from this period, working instead in private. He was tormented by a dread of old age and fear of madness. Goya had been a successful and royally placed artist, but withdrew from public life during his final years. From the late 1810s he lived in near-solitude outside Madrid in a farmhouse converted into a studio. The house had become known as \"La Quinta del Sordo\" (The House of the Deaf Man), after the nearest farmhouse that had coincidentally also belonged to a deaf man.",
"title": "Quinta del Sordo and Black Paintings (1819–1822)"
},
{
"paragraph_id": 27,
"text": "Art historians assume Goya felt alienated from the social and political trends that followed the 1814 restoration of the Bourbon monarchy, and that he viewed these developments as reactionary means of social control. In his unpublished art he seems to have railed against what he saw as a tactical retreat into Medievalism. It is thought that he had hoped for political and religious reform, but like many liberals became disillusioned when the restored Bourbon monarchy and Catholic hierarchy rejected the Spanish Constitution of 1812.",
"title": "Quinta del Sordo and Black Paintings (1819–1822)"
},
{
"paragraph_id": 28,
"text": "At the age of 75, alone and in mental and physical despair, he completed the work of his 14 Black Paintings, all of which were executed in oil directly onto the plaster walls of his house. Goya did not intend for the paintings to be exhibited, did not write of them, and likely never spoke of them. Around 1874, 50 years after his death, they were taken down and transferred to a canvas support by owner Baron Frédéric Émile d'Erlanger. Many of the works were significantly altered during the restoration, and in the words of Arthur Lubow what remain are \"at best a crude facsimile of what Goya painted.\" The effects of time on the murals, coupled with the inevitable damage caused by the delicate operation of mounting the crumbling plaster on canvas, meant that most of the murals suffered extensive damage and loss of paint. Today, they are on permanent display at the Museo del Prado, Madrid.",
"title": "Quinta del Sordo and Black Paintings (1819–1822)"
},
{
"paragraph_id": 29,
"text": "Leocadia Weiss (née Zorrilla, 1790–1856), the artist's maid, younger by 35 years, and a distant relative, lived with and cared for Goya after Bayeu's death. She stayed with him in his Quinta del Sordo villa until 1824 with her daughter Rosario. Leocadia was probably similar in features to Goya's first wife Josefa Bayeu, to the point that one of his well-known portraits bears the cautious title of Josefa Bayeu (or Leocadia Weiss).",
"title": "Bordeaux (October 1824 – 1828)"
},
{
"paragraph_id": 30,
"text": "Not much is known about her beyond her fiery temperament. She was likely related to the Goicoechea family, a wealthy dynasty into which the artist's son, Javier, had married. It is known that Leocadia had an unhappy marriage with a jeweler, Isidore Weiss, but was separated from him since 1811, after he had accused her of \"illicit conduct\". She had two children before that time, and bore a third, Rosario, in 1814 when she was 26. Isidore was not the father, and it has often been speculated—although with little firm evidence—that the child belonged to Goya. There has been much speculation that Goya and Weiss were romantically linked; however, it is more likely the affection between them was sentimental.",
"title": "Bordeaux (October 1824 – 1828)"
},
{
"paragraph_id": 31,
"text": "Goya died on 16 April 1828. Leocadia was left nothing in Goya's will; mistresses were often omitted in such circumstances, but it is also likely that he did not want to dwell on his mortality by thinking about or revising his will. She wrote to a number of Goya's friends to complain of her exclusion but many of her friends were Goya's also and by then were old men or had died, and did not reply. Largely destitute, she moved into rented accommodation, later passing on her copy of the Caprichos for free.",
"title": "Bordeaux (October 1824 – 1828)"
},
{
"paragraph_id": 32,
"text": "Goya's body was later re-interred in the Real Ermita de San Antonio de la Florida in Madrid. Goya's skull was missing, a detail the Spanish consul immediately communicated to his superiors in Madrid, who wired back, \"Send Goya, with or without head.\"",
"title": "Bordeaux (October 1824 – 1828)"
}
]
| Francisco José de Goya y Lucientes was a Spanish romantic painter and printmaker. He is considered the most important Spanish artist of the late 18th and early 19th centuries. His paintings, drawings, and engravings reflected contemporary historical upheavals and influenced important 19th- and 20th-century painters. Goya is often referred to as the last of the Old Masters and the first of the moderns. Goya was born to a middle-class family in 1746, in Fuendetodos in Aragon. He studied painting from age 14 under José Luzán y Martinez and moved to Madrid to study with Anton Raphael Mengs. He married Josefa Bayeu in 1773. Goya became a court painter to the Spanish Crown in 1786 and this early portion of his career is marked by portraits of the Spanish aristocracy and royalty, and Rococo-style tapestry cartoons designed for the royal palace. Although Goya's letters and writings survive, little is known about his thoughts. He had a severe and undiagnosed illness in 1793 that left him deaf, after which his work became progressively darker and pessimistic. His later easel and mural paintings, prints and drawings appear to reflect a bleak outlook on personal, social and political levels, and contrast with his social climbing. He was appointed Director of the Royal Academy in 1795, the year Manuel Godoy made an unfavorable treaty with France. In 1799, Goya became Primer Pintor de Cámara, the highest rank for a Spanish court painter. In the late 1790s, commissioned by Godoy, he completed his La maja desnuda, a remarkably daring nude for the time and clearly indebted to Diego Velázquez. In 1800–01, he painted Charles IV of Spain and His Family, also influenced by Velázquez. In 1807, Napoleon led the French army into the Peninsular War against Spain. Goya remained in Madrid during the war, which seems to have affected him deeply. Although he did not speak his thoughts in public, they can be inferred from his Disasters of War series of prints and his 1814 paintings The Second of May 1808 and The Third of May 1808. Other works from his mid-period include the Caprichos and Los Disparates etching series, and a wide variety of paintings concerned with insanity, mental asylums, witches, fantastical creatures and religious and political corruption, all of which suggest that he feared for both his country's fate and his own mental and physical health. His late period culminates with the Black Paintings of 1819–1823, applied on oil on the plaster walls of his house the Quinta del Sordo where, disillusioned by political and social developments in Spain, he lived in near isolation. Goya eventually abandoned Spain in 1824 to retire to the French city of Bordeaux, accompanied by his much younger maid and companion, Leocadia Weiss, who may have been his lover. There he completed his La Tauromaquia series and a number of other works. Following a stroke that left him paralyzed on his right side, Goya died and was buried on 16 April 1828 aged 82. | 2001-06-29T09:32:43Z | 2023-12-14T18:24:57Z | [
"Template:Pp-move-indef",
"Template:Goya",
"Template:Short description",
"Template:Efn-ua",
"Template:Notelist-ua",
"Template:Reflist",
"Template:Cite news",
"Template:C.",
"Template:Cite encyclopedia",
"Template:Citation needed",
"Template:Cvt",
"Template:Commons and category",
"Template:Wikiquote",
"Template:Multiple image",
"Template:ISBN",
"Template:Romanticism",
"Template:Authority control (arts)",
"Template:Circa",
"Template:Redirect",
"Template:Family name hatnote",
"Template:Infobox artist",
"Template:IPAc-en",
"Template:Cite web",
"Template:Webarchive",
"Template:Cite book",
"Template:Use dmy dates",
"Template:Cite EB1911",
"Template:See also",
"Template:Lang",
"Template:Cite journal",
"Template:IPAc-es",
"Template:Which",
"Template:Small",
"Template:Spnd"
]
| https://en.wikipedia.org/wiki/Francisco_Goya |
10,869 | Frequentist probability | Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in many trials (the long-run probability). Probabilities can be found (in principle) by a repeatable objective process (and are thus ideally devoid of opinion). The continued use of frequentist methods in scientific inference, however, has been called into question.
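Stated compactly (a standard formulation added here for clarity; the notation is not drawn from the source text): if an event A occurs n_A times in n repetitions of a well-defined random experiment, the frequentist probability of A is

P(A) = \lim_{n \to \infty} \frac{n_A}{n},

provided this limit exists.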
The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. In the classical interpretation, probability was defined in terms of the principle of indifference, based on the natural symmetry of a problem, so, e.g. the probabilities of dice games arise from the natural symmetric 6-sidedness of the cube. This classical interpretation stumbled at any statistical problem that has no natural symmetry for reasoning.
In the frequentist interpretation, probabilities are discussed only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space to be considered. For any given event, only one of two possibilities may hold: it occurs or it does not. The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event. This is the core conception of probability in the frequentist interpretation.
A claim of the frequentist approach is that, as the number of trials increases, the change in the relative frequency will diminish. Hence, one can view a probability as the limiting value of the corresponding relative frequencies.
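The diminishing fluctuation of the relative frequency can be illustrated with a short simulation (an illustrative sketch rather than anything from the source; the fair-coin experiment, the seed and the sample sizes are assumptions chosen purely for demonstration):

    import random

    random.seed(1)  # fixed seed so the illustration is reproducible

    def relative_frequency(num_trials):
        """Relative frequency of 'heads' in num_trials simulated fair-coin flips."""
        heads = sum(random.random() < 0.5 for _ in range(num_trials))
        return heads / num_trials

    # As the number of trials grows, the relative frequency settles near 0.5,
    # the value the frequentist interpretation identifies with P(heads).
    for n in (10, 100, 1000, 10000, 100000):
        print(n, relative_frequency(n))

Each successive relative frequency fluctuates less around 0.5, which is the long-run behaviour the frequentist interpretation takes as defining the probability.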
The frequentist interpretation is a philosophical approach to the definition and use of probabilities; it is one of several such approaches. It does not claim to capture all connotations of the concept 'probable' in colloquial speech of natural languages.
As an interpretation, it is not in conflict with the mathematical axiomatization of probability theory; rather, it provides guidance for how to apply mathematical probability theory to real-world situations. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. Whether this guidance is useful, or is apt to misinterpretation, has been a source of controversy, particularly when the frequency interpretation of probability is mistakenly assumed to be the only possible basis for frequentist inference. So, for example, a list of misinterpretations of the meaning of p-values accompanies the article on p-values, and controversies are detailed in the article on statistical hypothesis testing. The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions about the 'statistical significance' of a result.
As William Feller noted:
There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines "out of infinitely many worlds one is selected at random..." Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.
Feller's comment was criticism of Pierre-Simon Laplace, who published a solution to the sunrise problem using an alternative probability interpretation. Despite Laplace's explicit and immediate disclaimer in the source, based on expertise in astronomy as well as probability, two centuries of criticism have followed.
The frequentist view may have been foreshadowed by Aristotle, in Rhetoric, when he wrote:
the probable is that which for the most part happens
Poisson clearly distinguished between objective and subjective probabilities in 1837. Soon thereafter a flurry of nearly simultaneous publications by Mill, Ellis ("On the Foundations of the Theory of Probabilities" and "Remarks on the Fundamental Principles of the Theory of Probabilities"), Cournot (Exposition de la théorie des chances et des probabilités) and Fries introduced the frequentist view. Venn provided a thorough exposition (The Logic of Chance: An Essay on the Foundations and Province of the Theory of Probability (published editions in 1866, 1876, 1888)) two decades later. These were further supported by the publications of Boole and Bertrand. By the end of the 19th century the frequentist interpretation was well established and perhaps dominant in the sciences. The following generation established the tools of classical inferential statistics (significance testing, hypothesis testing and confidence intervals) all based on frequentist probability.
Alternatively, Jacob Bernoulli (AKA James or Jacques) understood the concept of frequentist probability and published a critical proof (the weak law of large numbers) posthumously in 1713. He is also credited with some appreciation for subjective probability (prior to and without Bayes theorem). Gauss and Laplace used frequentist (and other) probability in derivations of the least squares method a century later, a generation before Poisson. Laplace considered the probabilities of testimonies, tables of mortality, judgments of tribunals, etc. which are unlikely candidates for classical probability. In this view, Poisson's contribution was his sharp criticism of the alternative "inverse" (subjective, Bayesian) probability interpretation. Any criticism by Gauss and Laplace was muted and implicit. (Their later derivations did not use inverse probability.)
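A standard modern statement of the weak law of large numbers that Bernoulli proved reads as follows, where \hat{p}_n denotes the relative frequency of successes after n independent trials, each with success probability p (notation introduced here only for illustration): for every \varepsilon > 0,

$$\lim_{n \to \infty} \Pr\bigl(\lvert \hat{p}_n - p \rvert > \varepsilon\bigr) = 0,$$

which underwrites the frequentist reading of long-run relative frequencies as estimates of the underlying probability.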
Major contributors to "classical" statistics in the early 20th century included Fisher, Neyman and Pearson. Fisher contributed to most of statistics and made significance testing the core of experimental science, although he was critical of the frequentist concept of "repeated sampling from the same population" (Rubin, 2020); Neyman formulated confidence intervals and contributed heavily to sampling theory; Neyman and Pearson paired in the creation of hypothesis testing. All valued objectivity, so the best interpretation of probability available to them was frequentist. All were suspicious of "inverse probability" (the available alternative) with prior probabilities chosen by using the principle of indifference. Fisher said, "...the theory of inverse probability is founded upon an error, [referring to Bayes theorem] and must be wholly rejected." (from his Statistical Methods for Research Workers). While Neyman was a pure frequentist, Fisher's views of probability were unique; both had nuanced views of probability. Von Mises offered a combination of mathematical and philosophical support for frequentism in the era.
According to the Oxford English Dictionary, the term 'frequentist' was first used by M. G. Kendall in 1949, to contrast with Bayesians, whom he called "non-frequentists". He observed
"The Frequency Theory of Probability" was used a generation earlier as a chapter title in Keynes (1921).
The historical sequence: probability concepts were introduced and much of probability mathematics derived (prior to the 20th century), classical statistical inference methods were developed, the mathematical foundations of probability were solidified and current terminology was introduced (all in the 20th century). The primary historical sources in probability and statistics did not use the current terminology of classical, subjective (Bayesian) and frequentist probability.
Probability theory is a branch of mathematics. While its roots reach centuries into the past, it reached maturity with the axioms of Andrey Kolmogorov in 1933. The theory focuses on the valid operations on probability values rather than on the initial assignment of values; the mathematics is largely independent of any interpretation of probability.
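For reference, a standard formulation of Kolmogorov's axioms is brief (the symbols \Omega for the sample space and A, A_1, A_2, \ldots for events are used here only for illustration): P(A) \ge 0 for every event A; P(\Omega) = 1; and, for pairwise disjoint events,

$$P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) \;=\; \sum_{i=1}^{\infty} P(A_i).$$

Any assignment of numbers satisfying these axioms is mathematically admissible, whichever interpretation motivates it.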
Applications and interpretations of probability are considered by philosophy, the sciences and statistics. All are interested in the extraction of knowledge from observations—inductive reasoning. There are a variety of competing interpretations; all have problems. The frequentist interpretation does resolve difficulties with the classical interpretation, such as any problem where the natural symmetry of outcomes is not known. It does not address other issues, such as the Dutch book. | [
{
"paragraph_id": 0,
"text": "Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in many trials (the long-run probability). Probabilities can be found (in principle) by a repeatable objective process (and are thus ideally devoid of opinion). The continued use of frequentist methods in scientific inference, however, has been called into question.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. In the classical interpretation, probability was defined in terms of the principle of indifference, based on the natural symmetry of a problem, so, e.g. the probabilities of dice games arise from the natural symmetric 6-sidedness of the cube. This classical interpretation stumbled at any statistical problem that has no natural symmetry for reasoning.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the frequentist interpretation, probabilities are discussed only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space to be considered. For any given event, only one of two possibilities may hold: it occurs or it does not. The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event. This is the core conception of probability in the frequentist interpretation.",
"title": "Definition"
},
{
"paragraph_id": 3,
"text": "A claim of the frequentist approach is that, as the number of trials increases, the change in the relative frequency will diminish. Hence, one can view a probability as the limiting value of the corresponding relative frequencies.",
"title": "Definition"
},
{
"paragraph_id": 4,
"text": "The frequentist interpretation is a philosophical approach to the definition and use of probabilities; it is one of several such approaches. It does not claim to capture all connotations of the concept 'probable' in colloquial speech of natural languages.",
"title": "Scope"
},
{
"paragraph_id": 5,
"text": "As an interpretation, it is not in conflict with the mathematical axiomatization of probability theory; rather, it provides guidance for how to apply mathematical probability theory to real-world situations. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. As to whether this guidance is useful, or is apt to mis-interpretation, has been a source of controversy. Particularly when the frequency interpretation of probability is mistakenly assumed to be the only possible basis for frequentist inference. So, for example, a list of mis-interpretations of the meaning of p-values accompanies the article on p-values; controversies are detailed in the article on statistical hypothesis testing. The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions about the 'statistical significance' of a result.",
"title": "Scope"
},
{
"paragraph_id": 6,
"text": "As William Feller noted:",
"title": "Scope"
},
{
"paragraph_id": 7,
"text": "There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines \"out of infinitely many worlds one is selected at random...\" Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.",
"title": "Scope"
},
{
"paragraph_id": 8,
"text": "Feller's comment was criticism of Pierre-Simon Laplace, who published a solution to the sunrise problem using an alternative probability interpretation. Despite Laplace's explicit and immediate disclaimer in the source, based on expertise in astronomy as well as probability, two centuries of criticism have followed.",
"title": "Scope"
},
{
"paragraph_id": 9,
"text": "The frequentist view may have been foreshadowed by Aristotle, in Rhetoric, when he wrote:",
"title": "History"
},
{
"paragraph_id": 10,
"text": "the probable is that which for the most part happens",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Poisson clearly distinguished between objective and subjective probabilities in 1837. Soon thereafter a flurry of nearly simultaneous publications by Mill, Ellis (\"On the Foundations of the Theory of Probabilities\" and \"Remarks on the Fundamental Principles of the Theory of Probabilities\"), Cournot (Exposition de la théorie des chances et des probabilités) and Fries introduced the frequentist view. Venn provided a thorough exposition (The Logic of Chance: An Essay on the Foundations and Province of the Theory of Probability (published editions in 1866, 1876, 1888)) two decades later. These were further supported by the publications of Boole and Bertrand. By the end of the 19th century the frequentist interpretation was well established and perhaps dominant in the sciences. The following generation established the tools of classical inferential statistics (significance testing, hypothesis testing and confidence intervals) all based on frequentist probability.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Alternatively, Jacob Bernoulli (AKA James or Jacques) understood the concept of frequentist probability and published a critical proof (the weak law of large numbers) posthumously in 1713. He is also credited with some appreciation for subjective probability (prior to and without Bayes theorem). Gauss and Laplace used frequentist (and other) probability in derivations of the least squares method a century later, a generation before Poisson. Laplace considered the probabilities of testimonies, tables of mortality, judgments of tribunals, etc. which are unlikely candidates for classical probability. In this view, Poisson's contribution was his sharp criticism of the alternative \"inverse\" (subjective, Bayesian) probability interpretation. Any criticism by Gauss and Laplace was muted and implicit. (Their later derivations did not use inverse probability.)",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Major contributors to \"classical\" statistics in the early 20th century included Fisher, Neyman and Pearson. Fisher contributed to most of statistics and made significance testing the core of experimental science, although he was critical of the frequentist concept of \"repeated sampling from the same population\" (Rubin, 2020); Neyman formulated confidence intervals and contributed heavily to sampling theory; Neyman and Pearson paired in the creation of hypothesis testing. All valued objectivity, so the best interpretation of probability available to them was frequentist. All were suspicious of \"inverse probability\" (the available alternative) with prior probabilities chosen by using the principle of indifference. Fisher said, \"...the theory of inverse probability is founded upon an error, [referring to Bayes theorem] and must be wholly rejected.\" (from his Statistical Methods for Research Workers). While Neyman was a pure frequentist, Fisher's views of probability were unique; Both had nuanced view of probability. von Mises offered a combination of mathematical and philosophical support for frequentism in the era.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "According to the Oxford English Dictionary, the term 'frequentist' was first used by M. G. Kendall in 1949, to contrast with Bayesians, whom he called \"non-frequentists\". He observed",
"title": "Etymology"
},
{
"paragraph_id": 15,
"text": "\"The Frequency Theory of Probability\" was used a generation earlier as a chapter title in Keynes (1921).",
"title": "Etymology"
},
{
"paragraph_id": 16,
"text": "The historical sequence: probability concepts were introduced and much of probability mathematics derived (prior to the 20th century), classical statistical inference methods were developed, the mathematical foundations of probability were solidified and current terminology was introduced (all in the 20th century). The primary historical sources in probability and statistics did not use the current terminology of classical, subjective (Bayesian) and frequentist probability.",
"title": "Etymology"
},
{
"paragraph_id": 17,
"text": "Probability theory is a branch of mathematics. While its roots reach centuries into the past, it reached maturity with the axioms of Andrey Kolmogorov in 1933. The theory focuses on the valid operations on probability values rather than on the initial assignment of values; the mathematics is largely independent of any interpretation of probability.",
"title": "Alternative views"
},
{
"paragraph_id": 18,
"text": "Applications and interpretations of probability are considered by philosophy, the sciences and statistics. All are interested in the extraction of knowledge from observations—inductive reasoning. There are a variety of competing interpretations; All have problems. The frequentist interpretation does resolve difficulties with the classical interpretation, such as any problem where the natural symmetry of outcomes is not known. It does not address other issues, such as the dutch book.",
"title": "Alternative views"
}
]
| Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in many trials. Probabilities can be found by a repeatable objective process. The continued use of frequentist methods in scientific inference, however, has been called into question. The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. In the classical interpretation, probability was defined in terms of the principle of indifference, based on the natural symmetry of a problem, so, e.g. the probabilities of dice games arise from the natural symmetric 6-sidedness of the cube. This classical interpretation stumbled at any statistical problem that has no natural symmetry for reasoning. | 2002-02-25T15:51:15Z | 2023-12-23T16:57:07Z | [
"Template:Citation",
"Template:Statistics",
"Template:Redirect",
"Template:Use dmy dates",
"Template:Citation needed",
"Template:Blockquote",
"Template:Main",
"Template:Cite journal",
"Template:Quote",
"Template:Cite web",
"Template:Short description",
"Template:Reflist",
"Template:Cite book",
"Template:ISBN"
]
| https://en.wikipedia.org/wiki/Frequentist_probability |
10,870 | List of French-language poets | List of poets who have written in the French language: | [
{
"paragraph_id": 0,
"text": "List of poets who have written in the French language:",
"title": ""
}
]
| List of poets who have written in the French language: | 2001-11-07T22:40:29Z | 2023-11-07T02:27:31Z | [
"Template:Cite book",
"Template:Cite DCB",
"Template:Lists of poets",
"Template:Short description",
"Template:Use dmy dates",
"Template:French literature",
"Template:Dynamic list",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/List_of_French-language_poets |
10,871 | FM-2030 | FM-2030 (born Fereidoun M. Esfandiary; Persian: فریدون اسفندیاری; October 15, 1930 – July 8, 2000) was a Belgian-born Iranian-American author, teacher, transhumanist philosopher, futurist, consultant, and Olympic athlete.
He became notable as a transhumanist with the book Are You a Transhuman?: Monitoring and Stimulating Your Personal Rate of Growth in a Rapidly Changing World, published in 1989. In addition, he wrote a number of works of fiction under his original name F. M. Esfandiary.
FM-2030 was born Fereydoon M. Esfandiary on October 15, 1930, in Belgium to Iranian diplomat Abdol-Hossein “A. H.” Sadigh Esfandiary (1894–1986), who served from 1920 to 1960. He travelled widely as a child, having lived in 17 countries including Iran, India, and Afghanistan, by age 11. He represented Iran as a basketball player and wrestler at the 1948 Olympic Games in London. He attended primary school in Iran and England and completed his secondary education at Colleges Des Freres, a Jesuit school in Jerusalem. By the time he was 18, aside from his native Persian, he learned to speak 4 languages: Arabic, Hebrew, French and English. He then started his college education at the University of California, Berkeley, but later transferred to the University of California, Los Angeles, where he graduated in 1952. Afterwards, he served on the United Nations Conciliation Commission for Palestine from 1952 to 1954.
In 1970, after publishing his book Optimism One, F. M. Esfandiary started going by FM-2030 for two main reasons: firstly, to reflect the hope and belief that he would live to celebrate his 100th birthday in 2030; secondly, and more importantly, to break free of the widespread practice of naming conventions that he saw as rooted in a collectivist mentality, and existing only as a relic of humankind's tribalistic past. He formalized his name change in 1988. He viewed traditional names as almost always stamping a label of collective identity – varying from gender to nationality – on the individual, thereby existing as prima facie elements of thought processes in the human cultural fabric, that tended to degenerate into stereotyping, factionalism, and discrimination. In his own words, "Conventional names define a person's past: ancestry, ethnicity, nationality, religion. I am not who I was ten years ago and certainly not who I will be in twenty years. [...] The name 2030 reflects my conviction that the years around 2030 will be a magical time. In 2030 we will be ageless and everyone will have an excellent chance to live forever. 2030 is a dream and a goal." As a staunch anti-nationalist, he believed "There are no illegal immigrants, only irrelevant borders."
In 1973, he published a political manifesto UpWingers: A Futurist Manifesto in which he portrays both the ideological left and right as outdated, and in their place proposes a schema of UpWingers (those who look to the sky and the future) and DownWingers (those who look to the earth and the past). FM-2030 identified with the former. He argued that the nuclear family structure and the idea of a city would disappear, being replaced by modular social communities he called mobilia, powered by communitarianism, which would persist and then disappear.
FM-2030 believed that synthetic body parts would one day make life expectancy irrelevant; shortly before his death from pancreatic cancer, he described the pancreas as "a stupid, dumb, wretched organ."
In terms of civilization, he stated: "No civilization of the past was great. They were all primitive and persecutory, founded on mass subjugation and mass murder." In terms of identity, he stated, "The young modern is not losing his identity. He is gladly disencumbering himself of it." He believed that eventually, nations would disappear and that identities would shift from cultural to personal. In a 1972 op-ed in The New York Times, he wrote that the leadership in the Arab–Israeli conflict had failed, and that the warring sides were "acting like adolescents, refuse to resolve their wasteful 25-year-old brawl" and believed that the world was "irreversibly evolving beyond the concept of national homeland."
FM-2030 was a lifelong vegetarian and said he would not eat anything that had a mother. He famously refused to answer any questions about his nationality, age and upbringing, claiming that such questions were irrelevant and that he was a “global person”. FM-2030 once said, "I am a 21st century person who was accidentally launched in the 20th. I have a deep nostalgia for the future." As he spent much of his childhood in India, he was noted to have spoken English with a slight Indian accent. He taught at The New School, University of California, Los Angeles, and Florida International University. He worked as a corporate consultant for Lockheed and J. C. Penney. He was also an atheist. FM-2030 was, in his own words, a follower of "upwing" politics (i.e. neither right-wing nor left-wing but something else), and by which he meant that he endorsed universal progress. He had been in a non-exclusive "friendship" (his preferred term for relationship) with Flora Schnall, a lawyer and fellow Harvard Law Class of 1959 graduate, from the 1960s until his death. FM-2030 and Schnall attended the same class as Ruth Bader Ginsburg. He resided in Westwood, Los Angeles as well as Miami.
FM-2030 died on July 8, 2000, from pancreatic cancer at a friend's apartment in Manhattan. He was placed in cryonic suspension at the Alcor Life Extension Foundation in Scottsdale, Arizona, where his body remains today. He did not yet have remote standby arrangements, so no Alcor team member was present at his death, but FM-2030 was the first person to be vitrified, rather than simply frozen as previous cryonics patients had been. FM-2030 was survived by four sisters and one brother. | [
{
"paragraph_id": 0,
"text": "FM-2030 (born Fereidoun M. Esfandiary; Persian: فریدون اسفندیاری; October 15, 1930 – July 8, 2000) was a Belgian-born Iranian-American author, teacher, transhumanist philosopher, futurist, consultant, and Olympic athlete.",
"title": ""
},
{
"paragraph_id": 1,
"text": "He became notable as a transhumanist with the book Are You a Transhuman?: Monitoring and Stimulating Your Personal Rate of Growth in a Rapidly Changing World, published in 1989. In addition, he wrote a number of works of fiction under his original name F. M. Esfandiary.",
"title": ""
},
{
"paragraph_id": 2,
"text": "FM-2030 was born Fereydoon M. Esfandiary on October 15, 1930, in Belgium to Iranian diplomat Abdol-Hossein “A. H.” Sadigh Esfandiary (1894–1986), who served from 1920 to 1960. He travelled widely as a child, having lived in 17 countries including Iran, India, and Afghanistan, by age 11. He represented Iran as a basketball player and wrestler at the 1948 Olympic Games in London. He attended primary school in Iran and England and completed his secondary education at Colleges Des Freres, a Jesuit school in Jerusalem. By the time he was 18, aside from his native Persian, he learned to speak 4 languages: Arabic, Hebrew, French and English. He then started his college education at the University of California, Berkeley, but later transferred to the University of California, Los Angeles, where he graduated in 1952. Afterwards, he served on the United Nations Conciliation Commission for Palestine from 1952 to 1954.",
"title": "Early life and education"
},
{
"paragraph_id": 3,
"text": "In 1970, after publishing his book Optimism One, F. M. Esfandiary started going by FM-2030 for two main reasons: firstly, to reflect the hope and belief that he would live to celebrate his 100th birthday in 2030; secondly, and more importantly, to break free of the widespread practice of naming conventions that he saw as rooted in a collectivist mentality, and existing only as a relic of humankind's tribalistic past. He formalized his name change in 1988. He viewed traditional names as almost always stamping a label of collective identity – varying from gender to nationality – on the individual, thereby existing as prima facie elements of thought processes in the human cultural fabric, that tended to degenerate into stereotyping, factionalism, and discrimination. In his own words, \"Conventional names define a person's past: ancestry, ethnicity, nationality, religion. I am not who I was ten years ago and certainly not who I will be in twenty years. [...] The name 2030 reflects my conviction that the years around 2030 will be a magical time. In 2030 we will be ageless and everyone will have an excellent chance to live forever. 2030 is a dream and a goal.\" As a staunch anti-nationalist, he believed \"There are no illegal immigrants, only irrelevant borders.\"",
"title": "Name change and views"
},
{
"paragraph_id": 4,
"text": "In 1973, he published a political manifesto UpWingers: A Futurist Manifesto in which he portrays both the ideological left and right as outdated, and in their place proposes a schema of UpWingers (those who look to the sky and the future) and DownWingers (those who look to the earth and the past). FM-2030 identified with the former. He argued that the nuclear family structure and the idea of a city would disappear, being replaced by modular social communities he called mobilia, powered by communitarianism, which would persist and then disappear.",
"title": "Name change and views"
},
{
"paragraph_id": 5,
"text": "FM-2030 believed that synthetic body parts would one day make life expectancy irrelevant; shortly before his death from pancreatic cancer, he described the pancreas as \"a stupid, dumb, wretched organ.\"",
"title": "Name change and views"
},
{
"paragraph_id": 6,
"text": "In terms of civilization, he stated: \"No civilization of the past was great. They were all primitive and persecutory, founded on mass subjugation and mass murder.\" In terms of identity, he stated \"The young modern is not losing his identity. He is gladly disencumbering himself of it.\" He believed that eventually, nations would disappear and that identities would shift from cultural to personal. In a 1972 op-Ed in The New York Times, he wrote that the leadership in the Arab–Israeli conflict had failed, and that the warring sides were \"acting like adolescents, refuse to resolve their wasteful 25-year-old brawl\" and believed that the world was \"irreversibly evolving beyond the concept of national homeland.\"",
"title": "Name change and views"
},
{
"paragraph_id": 7,
"text": "FM-2030 was a lifelong vegetarian and said he would not eat anything that had a mother. He famously refused to answer any questions about his nationality, age and upbringing, claiming that such questions were irrelevant and that he was a “global person”. FM-2030 once said, \"I am a 21st century person who was accidentally launched in the 20th. I have a deep nostalgia for the future.\" As he spent much of his childhood in India, he was noted to have spoken English with a slight Indian accent. He taught at The New School, University of California, Los Angeles, and Florida International University. He worked as a corporate consultant for Lockheed and J. C. Penney. He was also an atheist. FM-2030 was, in his own words, a follower of \"upwing\" politics (i.e. neither right-wing nor left-wing but something else), and by which he meant that he endorsed universal progress. He had been in a non-exclusive \"friendship\" (his preferred term for relationship) with Flora Schnall, a lawyer and fellow Harvard Law Class of 1959 graduate, from the 1960s until his death. FM-2030 and Schnall attended the same class as Ruth Bader Ginsburg. He resided in Westwood, Los Angeles as well as Miami.",
"title": "Personal life"
},
{
"paragraph_id": 8,
"text": "FM-2030 died on July 8, 2000, from pancreatic cancer at a friend's apartment in Manhattan. He was placed in cryonic suspension at the Alcor Life Extension Foundation in Scottsdale, Arizona, where his body remains today. He did not yet have remote standby arrangements, so no Alcor team member was present at his death, but FM-2030 was the first person to be vitrified, rather than simply frozen as previous cryonics patients had been. FM-2030 was survived by four sisters and one brother.",
"title": "Death"
}
]
| FM-2030 was a Belgian-born Iranian-American author, teacher, transhumanist philosopher, futurist, consultant, and Olympic athlete. He became notable as a transhumanist with the book Are You a Transhuman?: Monitoring and Stimulating Your Personal Rate of Growth in a Rapidly Changing World, published in 1989. In addition, he wrote a number of works of fiction under his original name F. M. Esfandiary. | 2001-07-04T07:42:34Z | 2023-11-10T00:36:30Z | [
"Template:Contains special characters",
"Template:Cite web",
"Template:Wikiquote",
"Template:Transhumanism footer",
"Template:Use mdy dates",
"Template:Infobox writer",
"Template:ISBN",
"Template:Reflist",
"Template:Cite news",
"Template:Iran Men Basketball Squad 1948 Summer Olympics",
"Template:Authority control",
"Template:Short description",
"Template:Lang-fa"
]
| https://en.wikipedia.org/wiki/FM-2030 |
10,874 | West Flemish | West Flemish (West-Vlams or West-Vloams or Vlaemsch (in French-Flanders), Dutch: West-Vlaams, French: flamand occidental) is a collection of Dutch dialects spoken in western Belgium and the neighbouring areas of France and the Netherlands.
West Flemish is spoken by about a million people in the Belgian province of West Flanders, and a further 50,000 in the neighbouring Dutch coastal district of Zeelandic Flanders (200,000 if including the closely related dialects of Zeelandic) and 10-20,000 in the northern part of the French department of Nord. Some of the main cities where West Flemish is widely spoken are Bruges, Dunkirk, Kortrijk, Ostend, Roeselare and Ypres.
West Flemish is listed as a "vulnerable" language in UNESCO's online Red Book of Endangered Languages.
West Flemish has a phonology that differs significantly from that of Standard Dutch, being similar to Afrikaans in the case of long E, O and A. Also, where Standard Dutch has sch, in some parts of West Flanders, West Flemish, like Afrikaans, has sk. However, the best-known trait is the replacement of the Standard Dutch (pre-)velar fricatives g and ch (/x, ɣ/) with a glottal h [h, ɦ]. The following differences are listed by their Dutch spelling, as some different letters have merged their sounds in Standard Dutch but remained separate sounds in West Flemish. Pronunciations can also differ slightly from region to region.
The absence of /x/ and /ɣ/ in West Flemish makes pronouncing them very difficult for native speakers. That often causes hypercorrection of the /h/ sounds to a /x/ or /ɣ/.
Standard Dutch also has many words with an -en (/ən/) suffix (mostly plural forms of verbs and nouns). While Standard Dutch and most dialects do not pronounce the final n, West Flemish typically drops the e and pronounces the n inside the base word. For base words already ending with n, the final n sound is often lengthened to clarify the suffix. That makes many words become similar to those of English: beaten, listen etc.
The short o ([ɔ]) can also be pronounced as a short u ([ɐ]), a phenomenon also occurring in Russian and some other Slavic languages, called akanye. That happens spontaneously to some words, but other words keep their original short o sounds. Similarly, the short a ([ɑ]) can turn into a short o ([ɔ]) in some words spontaneously.
The diphthong ui (/œy/) does not exist in West Flemish and is replaced by a long u ([y]) or a long ie ([i]). Like for the ui, the long o ([o]) can be replaced by an [ø] (eu) for some words but a [uo] for others. That often causes similarities to ranchers English.
Here are some examples showing the sound shifts that are part of the vocabulary:
* This is just one example, as a lot of words are not the same. The actual word used for kom is menne.
Plural forms in Standard Dutch most often add -en, but West Flemish usually uses -s, like the Low Saxon dialects and even more prominently in English in which -en has become very rare. Under the influence of Standard Dutch, -s is being used by fewer people, and younger speakers tend to use -en.
The verbs zijn ("to be") and hebben ("to have") are also conjugated differently.
West Flemish often has a double subject.
Standard Dutch has an indefinite article that does not depend on gender, unlike in West Flemish. However, a gender-independent article is increasingly used. Like in English, n is pronounced only if the next word begins with a vowel sound.
Another feature of West Flemish is the conjugation of ja and nee ("yes" and "no") to the subject of the sentence. That is somewhat related to the double subject, but even when the rest of the sentence is not pronounced, ja and nee are generally used with the first part of the double subject. There is also an extra word, toet ([tut]), which negates the previous sentence but gives a positive answer. It is an abbreviation of " 't en doe 't" ("it does it").
Ja and nee can also be strengthened by adding mo- or ba-. Both mean "but" and are derived from the Dutch word maar, and they can even be used together (mobajoat).
{
"paragraph_id": 0,
"text": "West Flemish (West-Vlams or West-Vloams or Vlaemsch (in French-Flanders), Dutch: West-Vlaams, French: flamand occidental) is a collection of Dutch dialects spoken in western Belgium and the neighbouring areas of France and the Netherlands.",
"title": ""
},
{
"paragraph_id": 1,
"text": "West Flemish is spoken by about a million people in the Belgian province of West Flanders, and a further 50,000 in the neighbouring Dutch coastal district of Zeelandic Flanders (200,000 if including the closely related dialects of Zeelandic) and 10-20,000 in the northern part of the French department of Nord. Some of the main cities where West Flemish is widely spoken are Bruges, Dunkirk, Kortrijk, Ostend, Roeselare and Ypres.",
"title": ""
},
{
"paragraph_id": 2,
"text": "West Flemish is listed as a \"vulnerable\" language in UNESCO's online Red Book of Endangered Languages.",
"title": ""
},
{
"paragraph_id": 3,
"text": "West Flemish has a phonology that differs significantly from that of Standard Dutch, being similar to Afrikaans in the case of long E, O and A. Also where Standard Dutch has sch, in some parts of West Flanders, West-Flemish, like Afrikaans, has sk. However, the best known traits are the replacement of Standard Dutch (pre-)velar fricatives g and ch in Dutch (/x, ɣ/) with glottal h [h, ɦ],. The following differences are listed by their Dutch spelling, as some different letters have merged their sounds in Standard Dutch but remained separate sounds in West Flemish. Pronunciations can also differ slightly from region to region.",
"title": "Phonology"
},
{
"paragraph_id": 4,
"text": "The absence of /x/ and /ɣ/ in West Flemish makes pronouncing them very difficult for native speakers. That often causes hypercorrection of the /h/ sounds to a /x/ or /ɣ/.",
"title": "Phonology"
},
{
"paragraph_id": 5,
"text": "Standard Dutch also has many words with an -en (/ən/) suffix (mostly plural forms of verbs and nouns). While Standard Dutch and most dialects do not pronounce the final n, West Flemish typically drops the e and pronounces the n inside the base word. For base words already ending with n, the final n sound is often lengthened to clarify the suffix. That makes many words become similar to those of English: beaten, listen etc.",
"title": "Phonology"
},
{
"paragraph_id": 6,
"text": "The short o ([ɔ]) can also be pronounced as a short u ([ɐ]), a phenomenon also occurring in Russian and some other Slavic languages, called akanye. That happens spontaneously to some words, but other words keep their original short o sounds. Similarly, the short a ([ɑ]) can turn into a short o ([ɔ]) in some words spontaneously.",
"title": "Phonology"
},
{
"paragraph_id": 7,
"text": "The diphthong ui (/œy/) does not exist in West Flemish and is replaced by a long u ([y]) or a long ie ([i]). Like for the ui, the long o ([o]) can be replaced by an [ø] (eu) for some words but a [uo] for others. That often causes similarities to ranchers English.",
"title": "Phonology"
},
{
"paragraph_id": 8,
"text": "Here are some examples showing the sound shifts that are part of the vocabulary:",
"title": "Phonology"
},
{
"paragraph_id": 9,
"text": "* This is as an example as a lot of words are not the same. The actual word used for kom is menne.",
"title": "Phonology"
},
{
"paragraph_id": 10,
"text": "Plural forms in Standard Dutch most often add -en, but West Flemish usually uses -s, like the Low Saxon dialects and even more prominently in English in which -en has become very rare. Under the influence of Standard Dutch, -s is being used by fewer people, and younger speakers tend to use -en.",
"title": "Grammar"
},
{
"paragraph_id": 11,
"text": "The verbs zijn (\"to be\") and hebben (\"to have\") are also conjugated differently.",
"title": "Grammar"
},
{
"paragraph_id": 12,
"text": "West Flemish often has a double subject.",
"title": "Grammar"
},
{
"paragraph_id": 13,
"text": "Standard Dutch has an indefinite article that does not depend on gender, unlike in West Flemish. However, a gender-independent article is increasingly used. Like in English, n is pronounced only if the next word begins with a vowel sound.",
"title": "Grammar"
},
{
"paragraph_id": 14,
"text": "Another feature of West Flemish is the conjugation of ja and nee (\"yes\" and \"no\") to the subject of the sentence. That is somewhat related to the double subject, but even when the rest of the sentence is not pronounced, ja and nee are generally used with the first part of the double subject. There is also an extra word, toet ([tut]), negates the previous sentence but gives a positive answer. It is an abbreviation of \" 't en doe 't\" - it does it.",
"title": "Grammar"
},
{
"paragraph_id": 15,
"text": "Ja and nee can also all be strengthened by adding mo- or ba-. Both mean \"but\" and are derived from Dutch but or maar) and can be even used together (mobajoat).",
"title": "Grammar"
}
]
| West Flemish is a collection of Dutch dialects spoken in western Belgium and the neighbouring areas of France and the Netherlands. West Flemish is spoken by about a million people in the Belgian province of West Flanders, and a further 50,000 in the neighbouring Dutch coastal district of Zeelandic Flanders and 10-20,000 in the northern part of the French department of Nord. Some of the main cities where West Flemish is widely spoken are Bruges, Dunkirk, Kortrijk, Ostend, Roeselare and Ypres. West Flemish is listed as a "vulnerable" language in UNESCO's online Red Book of Endangered Languages. | 2001-10-15T15:24:49Z | 2023-11-24T09:18:40Z | [
"Template:Authority control",
"Template:Lang-fr",
"Template:Dutch dialects",
"Template:Reflist",
"Template:Citation",
"Template:InterWiki",
"Template:Cleanup reorganize",
"Template:Lang",
"Template:Commons category",
"Template:Languages of Belgium",
"Template:Germanic languages",
"Template:Clarification needed",
"Template:Cite web",
"Template:Languages of the Benelux",
"Template:Short description",
"Template:Expand Dutch",
"Template:Infobox language",
"Template:Interlanguage link",
"Template:IPA",
"Template:Lang-nl",
"Template:Clear left",
"Template:Refbegin",
"Template:Refend"
]
| https://en.wikipedia.org/wiki/West_Flemish |
10,875 | Fritz Leiber | Fritz Reuter Leiber Jr. (/ˈlaɪbər/ LEYE-bər; December 24, 1910 – September 5, 1992) was an American writer of fantasy, horror, and science fiction. He was also a poet, actor in theater and films, playwright, and chess expert. With writers such as Robert E. Howard and Michael Moorcock, Leiber is one of the fathers of sword and sorcery and coined the term.
Fritz Leiber was born December 24, 1910, in Chicago, Illinois, to the actors Fritz Leiber and Virginia Bronson Leiber. For a time, he seemed inclined to follow in his parents' footsteps; the theater and actors feature in his fiction. He spent 1928 touring with his parents' Shakespeare company (Fritz Leiber & Co.) before entering the University of Chicago, where he was elected to Phi Beta Kappa and received an undergraduate Ph.B. degree in psychology and physiology or biology with honors in 1932. From 1932 to 1933, he worked as a lay reader and studied as a candidate for the ministry, without taking a degree, at the General Theological Seminary in Chelsea, Manhattan, an affiliate of the Episcopal Church.
After pursuing graduate studies in philosophy at the University of Chicago from 1933 to 1934 and again not taking a degree, he remained in Chicago while touring under the stage name of "Francis Lathrop" intermittently with his parents' company and pursuing a literary career. Six short stories later included in the 2010 collection Strange Wonders: A Collection of Rare Fritz Leiber Works carry 1934 and 1935 dates. He also appeared alongside his father in uncredited parts in George Cukor's Camille (1936), James Whale's The Great Garrick (1937), and William Dieterle's The Hunchback of Notre Dame (1939).
In 1936, he initiated a brief, intense correspondence with H. P. Lovecraft, who "encouraged and influenced [Leiber's] literary development" before Lovecraft died in March 1937. Leiber introduced Fafhrd and the Gray Mouser in "Two Sought Adventure", his first professionally published short story in the August 1939 edition of Unknown, edited by John W. Campbell.
Leiber married Jonquil Stephens on January 16, 1936. Their only child, philosopher and science fiction writer Justin Leiber, was born in 1938. From 1937 to 1941, Fritz Leiber was employed by Consolidated Book Publishing as a staff writer for the Standard American Encyclopedia. In 1941, the family moved to California, where Leiber served as a speech and drama instructor at Occidental College during the 1941–1942 academic year.
Unable to conceal his disdain for academic politics as the United States entered World War II, he decided that the struggle against fascism mattered more than his long-held pacifist convictions. He accepted a position with Douglas Aircraft in quality inspection, primarily working on the C-47 Skytrain. Throughout the war, he continued to regularly publish fiction.
Thereafter, the family returned to Chicago, where Leiber served as associate editor of Science Digest from 1945 to 1956. During this decade (forestalled by a fallow interregnum from 1954 to 1956), his output (including the 1947 Arkham House anthology Night's Black Agents) was characterized by Poul Anderson as "a lot of the best science fiction and fantasy in the business". In 1958, the Leibers returned to Los Angeles. By then, he could afford to relinquish his journalistic career and support his family as a full-time fiction writer.
Jonquil's death in 1969 precipitated Leiber's permanent relocation to San Francisco and exacerbated his longstanding alcoholism after twelve years of fellowship in Alcoholics Anonymous. He gradually regained sobriety, an effort impeded by comorbid barbiturate abuse, over the next two decades. Perhaps as a result of his substance abuse, Leiber seems to have suffered periods of penury in the 1970s; Harlan Ellison wrote of his anger at finding that the much-awarded Leiber had to write his novels on a manual typewriter propped up over the sink in his apartment. Marc Laidlaw wrote that, when visiting Leiber as a fan in 1976, he "was shocked to find him occupying one small room of a seedy San Francisco residence hotel, its squalor relieved mainly by walls of books". Other reports suggest that Leiber preferred to live simply in the city, spending his money on dining, movies, and travel. In the last years of his life, royalty checks from TSR, Inc. (the makers of Dungeons & Dragons, who had licensed the mythos of the Fafhrd and Gray Mouser series) were enough in themselves to ensure that he lived comfortably. In 1977, he returned to his original form with a fantasy novel set in modern-day San Francisco, Our Lady of Darkness, which is about a writer of weird tales who must deal with the death of his wife and his recovery from alcoholism.
In 1992, the last year of his life, Leiber married his second wife, Margo Skinner, a journalist and poet with whom he had been friends for years. Leiber died a few weeks after a physical collapse while traveling from a science fiction convention in London, Ontario, with Skinner. His cause of death was a stroke.
He wrote a 100-page-plus memoir, Not Much Disorder and Not So Early Sex, which can be found in The Ghost Light (1984).
Leiber's own literary criticism, including several essays on Lovecraft, was collected in the volume Fafhrd and Me (1990).
As the child of two Shakespearean actors, Leiber was fascinated with the stage, describing itinerant Shakespearean companies in stories like "No Great Magic" and "Four Ghosts in Hamlet", and creating an actor/producer protagonist for his novel A Specter is Haunting Texas.
Although his Change War novel, The Big Time, is about a war between two factions, the "Snakes" and the "Spiders", changing and rechanging history throughout the universe, all the action takes place in a small bubble of isolated space-time the size of a theatrical stage, and with only a handful of characters. Judith Merril (in the July 1969 issue of The Magazine of Fantasy & Science Fiction) remarks on Leiber's acting skills when the writer won a science fiction convention costume ball. Leiber's costume consisted of a cardboard military collar over turned-up jacket lapels, cardboard insignia, an armband, and a spider pencilled large in black on his forehead, thus turning him into an officer of the Spiders, one of the combatants in his Change War stories. "The only other component," Merril writes, "was the Leiber instinct for theatre."
The similarity of the names of the father and the son caused some filmographies to incorrectly attribute to Fritz Jr. roles which were in fact played by his father, Fritz Leiber Sr., who was the evil Inquisitor in the Errol Flynn adventure film The Sea Hawk (1940) and had played in many other movies from 1917 to the late 1950s. It is the elder Leiber, not the younger, who appears in the Vincent Price vehicle The Web (1947) and in Charlie Chaplin's Monsieur Verdoux (1947).
The younger Leiber can be seen briefly as Valentin in the 1936 film version of Camille starring Greta Garbo, probably his most widely-seen film performance. In the cult horror film Equinox (1970) directed by Dennis Muren and Jack Woods, Leiber has a cameo appearance as a geologist, Dr. Watermann. In the edited second version of the movie, Leiber has no spoken dialogue but appears in a few scenes. The original version of the movie has a longer appearance by Leiber recounting the ancient book and a brief speaking role; all were cut from the re-release.
He also appears as Chavez in the 1979 Schick Sunn Classics documentary The Bermuda Triangle, based on the book by Charles Berlitz.
Leiber was heavily influenced by H. P. Lovecraft, Robert Graves, John Webster, and Shakespeare in the first two decades of his career. Beginning in the late 1950s, he was increasingly influenced by the works of Carl Jung, particularly by the concepts of the anima and the shadow. In the mid-1960s, he began incorporating elements of Joseph Campbell's The Hero with a Thousand Faces. These concepts are often mentioned in his stories, especially the anima, which becomes a method of exploring his fascination with, but estrangement from, the female.
Leiber liked cats, which are featured in many of his stories. Tigerishka, for example, is a cat-like alien who is sexually attractive to the human protagonist yet repelled by human customs in the novel The Wanderer. Leiber's "Gummitch" stories feature a kitten with an I.Q. of 160, just waiting for his ritual cup of coffee so that he can become human, too.
His first stories in the 1930s and 40s were inspired by Lovecraft's Cthulhu Mythos. A notable critic and historian of the wider Mythos, S. T. Joshi, has singled out Leiber's "The Sunken Land" (Unknown Worlds, February 1942) as the most accomplished of the early stories based on Lovecraft's Mythos. Leiber also later wrote several essays on Lovecraft the man, such as "A Literary Copernicus" (1949), the publication of which formed a key moment in the emergence of a serious critical appreciation of Lovecraft's life and work.
Leiber's first professional sale was "Two Sought Adventure" (Unknown, August 1939), which introduced his most famous characters, Fafhrd and the Gray Mouser. In 1943, his first two novels were serialized in Unknown (the supernatural horror-oriented Conjure Wife, inspired by his experiences on the faculty of Occidental College) and Astounding Science Fiction (Gather, Darkness).
1947 marked the publication of his first book, Night's Black Agents, a short story collection containing seven stories grouped as 'Modern Horrors', one as a 'Transition', and two grouped as 'Ancient Adventures': "The Sunken Land" and "Adept's Gambit", which are both stories of Fafhrd and the Gray Mouser.
The science fiction novel Gather, Darkness followed in 1950. It deals with a futuristic world that follows the Second Atomic Age which is ruled by scientists, until in the throes of a new Dark Age, the witches revolt.
In 1951, Leiber was Guest of Honor at the World Science Fiction Convention in New Orleans. Further novels followed during the 1950s, and in 1958 The Big Time won the Hugo Award for Best Novel.
Leiber continued to publish in the 1960s. His novel The Wanderer (1964) also won the Hugo for Best Novel. In the novel, an artificial planet nicknamed the Wanderer materializes from hyperspace within earth's orbit. The Wanderer's gravitational field captures the moon and shatters it into something like one of Saturn's rings. On Earth, the Wanderer's gravity well triggers earthquakes, tsunamis, and tidal phenomena. The multi-threaded plot follows the exploits of an ensemble cast as they struggle to survive the global disaster.
In the same period, Leiber published "Black Gondolier", a short story in which a protagonist uncovers a cosmic conspiracy in which oil from ancient fossils preys upon human beings and human civilizations. Leiber received the Hugo Award for Best Novella in 1970 and 1971 for "Ship of Shadows" (1969) and "Ill Met in Lankhmar" (1970). "Gonna Roll the Bones" (1967), his contribution to Harlan Ellison's Dangerous Visions anthology, won the Hugo Award for Best Novelette and the Nebula Award for Best Novelette in 1968.
Our Lady of Darkness (1977), originally serialized in short form in The Magazine of Fantasy & Science Fiction under the title "The Pale Brown Thing" (1977), featured cities as the breeding grounds for new types of elementals called paramentals, summonable by the dark art of megapolisomancy, with such activities centering on the Transamerica Pyramid. Its main characters include Franz Westen, Jaime Donaldus Byers, and the magician Thibault de Castries. Our Lady of Darkness won the World Fantasy Award—Novel.
Leiber also wrote the 1966 novelization of the Clair Huffaker screenplay of Tarzan and the Valley of Gold.
Many of Leiber's most acclaimed works are short stories, especially in the horror genre. Owing to such stories as "The Smoke Ghost", "The Girl With the Hungry Eyes", and "You're All Alone" (later expanded as The Sinful Ones), he is one of the forerunners of the modern urban horror story. Leiber also challenged the conventions of science fiction through reflexive narratives such as "A Bad Day For Sales" (first published in Galaxy Science Fiction, July 1953), in which the protagonist, Robie, "America’s only genuine mobile salesrobot", references the title character of Isaac Asimov's idealistic robot story, "Robbie". Questioning Isaac Asimov's Three Laws of Robotics, Leiber imagines the futility of automatons in a post-apocalyptic New York City. In his later years, Leiber returned to short story horror in such works as "Horrible Imaginings", "Black Has Its Charms" and the award-winning "The Button Moulder".
The short parallel worlds story "Catch That Zeppelin!" (1975) won the Hugo Award for Best Short Story and the Nebula Award for Best Short Story in 1976. It presents an alternate reality much better than our own, as opposed to the usual parallel universe story depicting a world worse than our own. "Belsen Express" (1975) won the World Fantasy Award—Short Fiction. Both stories reflect Leiber's uneasy fascination with Nazism, an uneasiness compounded by his mixed feelings about his German ancestry and his philosophical pacifism during World War II.
Leiber was named the second Gandalf Grand Master of Fantasy by participants in the 1975 World Science Fiction Convention (Worldcon), after the posthumous inaugural award to J. R. R. Tolkien. Next year he won the World Fantasy Award for Life Achievement. He was Guest of Honor at the 1979 Worldcon in Brighton, England (1979). The Science Fiction Writers of America made him its fifth SFWA Grand Master in 1981; the Horror Writers Association made him an inaugural winner of the Bram Stoker Award for Lifetime Achievement in 1988 (named in 1987); and the Science Fiction and Fantasy Hall of Fame inducted him in 2001, its sixth class of two deceased and two living writers.
Leiber was a founding member of the Swordsmen and Sorcerers' Guild of America (SAGA), a loose-knit group of Heroic fantasy authors founded in the 1960s and led by Lin Carter. Some works by SAGA members were published in Lin Carter's Flashing Swords! anthologies. Leiber himself is credited with inventing the term sword and sorcery for the particular subgenre of epic fantasy exemplified by his Fafhrd and Grey Mouser stories.
In an appreciation in the July 1969 "Special Fritz Leiber Issue" of The Magazine of Fantasy & Science Fiction, Judith Merril writes of Leiber's connection with his readers: "That this kind of personal response...is shared by thousands of other readers, has been made clear on several occasions." The November 1959 issue of Fantastic, for instance: Leiber had just come out of one of his recurrent dry spells, and editor Cele Lalli bought up all his new material until there was enough [five stories] to fill an issue; the magazine came out with a big black headline across its cover — Leiber Is Back!
His legacy has been consolidated by his most famous creations, the Fafhrd and the Gray Mouser stories, written over a span of 50 years. The first, "Two Sought Adventure", appeared in Unknown, August 1939. The stories are about an unlikely pair of heroes found in and around the city of Lankhmar. Fafhrd was based on Leiber himself and the Mouser on his friend Harry Otto Fischer, and the two characters were created in a series of letters exchanged by the two in the mid-1930s. These stories were among the progenitors of many of the tropes of sword and sorcery. They are also notable among sword and sorcery stories in that, over the course of the stories, his two heroes mature, take on more responsibilities, and eventually settle down into marriage.
Some Fafhrd and Mouser stories were recognized by annual genre awards: "Scylla's Daughter" (1961) was "Short Story" Hugo finalist, and "Ill Met in Lankhmar" (1970) won the "Best Novella" Hugo and Nebula Awards. Leiber's last major work, The Knight and Knave of Swords (1991), closed out the series while leaving room for possible sequels. In his last year, Leiber considered allowing other writers to continue the series, but his sudden death made this more difficult. One new Fafhrd and the Mouser novel, Swords Against the Shadowland, by Robin Wayne Bailey, appeared in 1998.
The stories influenced the shaping of sword and sorcery and other works. Joanna Russ' stories about thief-assassin Alyx (collected in 1976 in The Adventures of Alyx) were in part inspired by Fafhrd and the Gray Mouser, and Alyx made guest appearances in two of Leiber's stories. Numerous writers have paid homage to the stories. For instance, Terry Pratchett's city of Ankh-Morpork bears something more than a passing resemblance to Lankhmar (acknowledged by Pratchett by the placing of the swordsman-thief "The Weasel" and his giant barbarian comrade "Bravd" in the opening scenes of the first Discworld novel). More recently, playing off the visit of Fafhrd and the Grey Mouser to our world in Adept's Gambit (set in second century B.C. Tyre), Steven Saylor's short story "Ill Seen in Tyre" takes his Roma Sub Rosa series hero Gordianus to the city of Tyre a hundred years later, where the two visitors from Nehwon are remembered as local legends.
Fischer and Leiber contributed to the original design of the 1976 wargame Lankhmar from TSR.
Conjure Wife has been made into feature films four times under other titles.
"The Girl with the Hungry Eyes" was filmed under that title by Kastenbaum Films in 1995. (This film is not to be confused with the 1967 William Rotsler film The Girl with the Hungry Eyes which is entirely unrelated to Leiber's story).
Two Leiber stories were adapted for television for Rod Serling's Night Gallery: "The Girl with the Hungry Eyes" (1970), adapted by Robert M. Young and directed by John Badham, and "The Dead Man", adapted and directed by Douglas Heyes.
Geographically, Flanders is mainly flat, and has a small section of coast on the North Sea. It borders the French department of Nord to the south-west near the coast, the Dutch provinces of Zeeland, North Brabant and Limburg to the north and east, and the Walloon provinces of Hainaut, Walloon Brabant and Liège to the south. Despite accounting for only 45% of Belgium's territory, more than half the population lives there – 6,653,062 (or 57%) out of 11,431,406 Belgian inhabitants. Much of Flanders is agriculturally fertile and densely populated at 483/km (1,250/sq mi). The Brussels Region is an officially bilingual enclave within the Flemish Region. Flanders also has exclaves of its own: Voeren in the east is between Wallonia and the Netherlands and Baarle-Hertog in the north consists of 22 exclaves surrounded by the Netherlands. Not including Brussels, there are five present-day Flemish provinces: Antwerp, East Flanders, Flemish Brabant, Limburg and West Flanders. The official language is Dutch.
The area of today's Flanders has figured prominently in European history since the Middle Ages. The original County of Flanders stretched around AD 900 from the Strait of Dover to the Scheldt estuary and expanded from there. This county also still corresponds roughly with the modern-day Belgian provinces of West Flanders and East Flanders, along with neighbouring parts of France and the Netherlands. In this period, cities such as Ghent and Bruges of the historic County of Flanders, and later Antwerp of the Duchy of Brabant made it one of the richest and most urbanised parts of Europe, trading, and weaving the wool of neighbouring lands into cloth for both domestic use and export. As a consequence, a very sophisticated culture developed, with impressive achievements in the arts and architecture, rivaling those of northern Italy.
Belgium was one of the centres of the 19th-century Industrial Revolution, but this occurred mainly in French-speaking Wallonia. In the second half of the 20th century, and due to massive national investments in port infrastructure, Flanders' economy modernised rapidly, and today Flanders and Brussels are much wealthier than Wallonia, being among the wealthiest regions in Europe and the world. In accordance with late 20th century Belgian state reforms, Flanders was made into two political entities: the Flemish Region (Dutch: Vlaams Gewest) and the Flemish Community (Dutch: Vlaamse Gemeenschap). These entities were merged, although geographically the Flemish Community, which has a broader cultural mandate, covers Brussels, whereas the Flemish Region does not.
The term "Flanders" has several main modern meanings:
The name originally applied to the ancien régime territory called the County of Flanders, which existed from the 8th century (Latin Flandria) until its absorption by the French First Republic. Until the 1600s, this county also extended over parts of what are now France and the Netherlands.
Although the significance of the County of Flanders and its counts eroded over time, the designation came to be used, especially in international discussions, for a bigger territory. "Flanders" (and Latin "Belgium") were the first two common names used for the Burgundian Netherlands. With the breakaway of the northern Netherlands in the early modern period, the term Flanders continued to be associated with the whole southern part of the Low Countries—the Southern, Spanish or Austrian Netherlands, which were the successors of the Burgundian state and predecessors of modern Belgium.
The term "Flemish" came to be a term for the language Dutch, and during the 19th and 20th centuries, it became increasingly common to refer exclusively to the Dutch-speaking part of Belgium as "Flanders". Belgium divided itself into official French- and Dutch-speaking parts starting in the early '60s. Today Flanders extends over the northern part of Belgium, including not only the Dutch-speaking Belgian parts of the medieval Duchy of Brabant, which was united with Flanders since the Middle Ages, but also Belgian Limburg, which corresponds closely to the medieval County of Loon, and was never under Burgundian control.
The ambiguity between this wider cultural area and that of the county or province still remains in discussions about the region. In most present-day contexts, however, the term Flanders is taken to refer either to the political, social, cultural, and linguistic community (and the corresponding official institution, the Flemish Community) or to the geographical area, one of the three institutional regions in Belgium, namely the Flemish Region.
In the history of art and other fields, the adjectives Flemish and Netherlandish are commonly used to designate all the artistic production in this area before about 1580, after which it refers specifically to the southern Netherlands. For example, the term "Flemish Primitives", now outdated in English but used in French, Dutch and other languages, is a synonym for "Early Netherlandish painting", and it is not uncommon to see Mosan art categorized as Flemish art. In music the Franco-Flemish School is also known as the Dutch School.
Within this Dutch-speaking part of Belgium, French has never ceased to be spoken by some citizens, and Jewish groups have been speaking Yiddish in Antwerp for centuries. According to Belgian law, education in schools located in the Flemish Region must be mainly in Dutch, regardless of the pupils' nationality or linguistic background. In Brussels, teaching is also done in French.
When Julius Caesar conquered the area he described it as the less economically developed and more warlike part of Gallia Belgica. His informants told him that especially in the east, the tribes claimed ancestral connections and kinship with the "Germanic" peoples then east of the Rhine. Under the Roman empire the whole of Gallia Belgica became an administrative province. The future counties of Flanders and Brabant remained part of this province connected to what is now France, but in the east modern Limburg became part of the Rhine frontier province of Germania Inferior connected to what is now the Netherlands and Germany. Gallia Belgica and Germania Inferior were the two most northerly continental provinces of the Roman empire.
In the future county of Flanders, the main Belgic tribe in early Roman times was the Menapii, but also on the coast were the Marsacii and Morini. In the central part of modern Belgium were the Nervii, whose territory corresponded to medieval Brabant as well as French-speaking Hainaut. In the east was the large district of the Tungri which covered both French- and Dutch-speaking parts of eastern Belgium. The Tungri were understood to have links to Germanic tribes east of the Rhine. Another notable group were the Toxandrians who appear to have lived in the Kempen region, in the northern parts of both the Nervian and Tungrian districts, probably stretching into the modern Netherlands. The Roman administrative districts (civitates) of the Menapii, Nervii and Tungri therefore corresponded roughly with the medieval counties of Flanders, Brabant and Loon, and the modern Flemish provinces of East and West Flanders (Menapii), Brabant and Antwerp (the northern Nervii), and Belgian Limburg (part of the Tungri). Brabant appears to have been separated from the Tungri by a relatively unpopulated forest area, the Silva Carbonaria, forming a natural boundary between northeast and southwest Belgium.
Linguistically, the tribes in this area were under Celtic influence in the south, and Germanic influence in the east, but there is disagreement about what languages were spoken locally (apart from Vulgar Latin), and there may even have been an intermediate "Nordwestblock" language related to both. By the first century AD, Germanic languages appear to have become prevalent in the area of the Tungri.
As Roman influence waned, Frankish populations settled in the Tungrian area east of the Silva Carbonaria, and eventually pushed through it under Chlodio. They had kings in each Roman district (civitas). In the meantime, the Franks contributed to the Roman military. The first Merovingian king, Childeric I, was king of the Franks within the military of Gaul. He became leader of the administration of Belgica Secunda, which included the civitas of the Menapii (the future county of Flanders). From there, his son Clovis I managed to conquer both the Roman populations of northern France and the Frankish populations beyond the forest areas.
The County of Flanders was a feudal fief in West Francia. The first securely attested count in the comital family, Baldwin I of Flanders, is recorded in a document of 862, when he eloped with a daughter of his king, Charles the Bald. The region developed as a medieval economic power with a large degree of political autonomy. While its trading cities remained strong, it was weakened and divided when districts fell under direct French royal rule in the late 12th century. The remaining parts of Flanders came under the rule of the counts of neighbouring imperial Hainaut under Baldwin V of Hainaut in 1191.
During the late Middle Ages, Flanders's trading towns (notably Ghent, Bruges and Ypres) made it one of the richest and most urbanized parts of Europe, weaving the wool of neighbouring lands into cloth for both domestic use and export. As a consequence, a sophisticated culture developed, with impressive art and architecture, rivaling those of northern Italy. Ghent, Bruges, Ypres and the Franc of Bruges formed the Four Members, a form of parliament that exercised considerable power in Flanders.
Increasingly powerful from the 12th century, the territory's autonomous urban communes were instrumental in defeating a French attempt at annexation (1300–1302), culminating in victory over the French at the Battle of the Golden Spurs (11 July 1302), near Kortrijk. Two years later, however, the uprising was put down and Flanders remained indirectly part of the French Crown. Flemish prosperity waned in the following century, owing to widespread European population decline following the Black Death of 1348, the disruption of trade during the Anglo-French Hundred Years' War (1337–1453), and increased English cloth production. Flemish weavers had gone over to Worstead and North Walsham in Norfolk in the 12th century and established the woollen industry there.
The County of Flanders started to take control of the neighbouring Duchy of Brabant during the life of Louis II, Count of Flanders (1330–1384), who fought his sister-in-law Joanna, Duchess of Brabant, for control of it.
The entire area, straddling the ancient boundary of France and the Holy Roman Empire, passed in 1384 to Philip the Bold, the Duke of Burgundy, whose capital was in Brussels. The titles were eventually more clearly united under his grandson Philip the Good (1396–1467). This large duchy passed in 1477 to the Habsburg dynasty, and in 1556 to the kings of Spain. Western and southern districts of Flanders were confirmed as under French rule by successive treaties of 1659 (Artois), 1668 and 1678.
The County of Loon, approximately the modern Flemish province of Limburg, remained independent of France, forming a part of the Prince-Bishopric of Liège until the French Revolution, but surrounded by the Burgundians, and under their influence.
In 1500, Charles V was born in Ghent. He inherited the Seventeen Provinces (1506), Spain (1516) with its colonies and in 1519 was elected Holy Roman Emperor. Charles V issued the Pragmatic Sanction of 1549, which established the Low Countries as the Seventeen Provinces (or Spanish Netherlands in its broad sense) as an entity separate from the Holy Roman Empire and from France. In 1556 Charles V abdicated due to ill health (he suffered from crippling gout). Spain and the Seventeen Provinces went to his son, Philip II of Spain.
Over the first half of the 16th century Antwerp grew to become the second-largest European city north of the Alps by 1560. Antwerp was the richest city in Europe at this time. According to Luc-Normand Tellier "It is estimated that the port of Antwerp was earning the Spanish crown seven times more revenues than the Americas."
Meanwhile, Protestantism had reached the Low Countries. Among the wealthy traders of Antwerp, the Lutheran beliefs of the German Hanseatic traders found appeal, perhaps partly for economic reasons. The spread of Protestantism in this city was aided by the presence of an Augustinian cloister (founded 1514) in the St. Andries quarter. Luther, an Augustinian himself, had taught some of the monks, and his works were in print by 1518. The first Lutheran martyrs came from Antwerp. The Reformation resulted in consecutive but overlapping waves of reform: a Lutheran, followed by a militant Anabaptist, then a Mennonite, and finally a Calvinistic movement. These movements existed independently of each other.
Philip II, a devout Catholic and self-proclaimed protector of the Counter-Reformation, suppressed Calvinism in Flanders, Brabant and Holland (what is now approximately Belgian Limburg was part of the Prince-Bishopric of Liège and was Catholic de facto). In 1566, the wave of iconoclasm known as the Beeldenstorm was a prelude to religious war between Catholics and Protestants, especially the Anabaptists. The Beeldenstorm started in what is now French Flanders, with open-air sermons (Dutch: hagepreken) that spread through the Low Countries, first to Antwerp and Ghent, and from there further east and north.
Subsequently, Philip II of Spain sent the Duke of Alba to the Provinces to repress the revolt. Alba recaptured the southern part of the Provinces, which signed the Union of Atrecht, meaning that they would accept the Spanish government on condition of more freedom. The northern part of the provinces, however, signed the Union of Utrecht and established the Republic of the Seven United Netherlands in 1581. Spanish troops quickly started fighting the rebels, and the Spanish armies conquered the important trading cities of Bruges and Ghent. Antwerp, which was then the most important port in the world, also had to be conquered. But before the revolt was defeated, a war between Spain and England broke out, forcing Spanish troops to halt their advance. On 17 August 1585, Antwerp fell. This ended the Eighty Years' War for the (henceforth) Southern Netherlands. The United Provinces (the Northern Netherlands) fought on until 1648 – the Peace of Westphalia.
During the war with England, the rebels from the north, strengthened by refugees from the south, started a campaign to reclaim areas lost to Philip II's Spanish troops. They conquered a considerable part of Brabant (the later North Brabant of the Netherlands), and the south bank of the Scheldt estuary (Zeelandic Flanders), before being stopped by Spanish troops. The front at the end of this war stabilized and became the border between present-day Belgium and the Netherlands. The Dutch (as they later became known) had managed to reclaim enough of Spanish-controlled Flanders to close off the river Scheldt, effectively cutting Antwerp off from its trade routes.
The fall of Antwerp to the Spanish and the closing of the Scheldt caused considerable emigration. Many Calvinist merchants of Antwerp and other Flemish cities left Flanders and migrated north. Many of them settled in Amsterdam, which was a smaller port, important only in the Baltic trade. The Flemish exiles helped to rapidly transform Amsterdam into one of the world's most important ports. This is why the exodus is sometimes described as "creating a new Antwerp".
Flanders and Brabant went into a period of relative decline from the time of the Thirty Years' War. In the Northern Netherlands, the mass emigration from Flanders and Brabant became an important driving force behind the Dutch Golden Age.
Although the arts remained relatively impressive for another century with Peter Paul Rubens (1577–1640) and Anthony van Dyck, Flanders lost its former economic and intellectual power under Spanish, Austrian, and French rule. Heavy taxation and rigid imperial political control compounded the effects of industrial stagnation and Spanish-Dutch and Franco-Austrian conflict. The Southern Netherlands suffered severely under the War of the Spanish Succession, but under the reign of Empress Maria Theresa these lands again flourished economically. Influenced by the Enlightenment, the Austrian Emperor Joseph II was the first sovereign to have visited the Southern Netherlands since King Philip II of Spain left them in 1559.
In 1794, the French Republican Army started using Antwerp as the northernmost naval port of France. The following year, France officially annexed Flanders as the départements of Lys, Escaut, Deux-Nèthes, Meuse-Inférieure and Dyle. Obligatory (French) army service for all men aged 16–25 years was a main reason for the uprising against the French in 1798, known as the Boerenkrijg (Peasants' War), with the heaviest fighting in the Campine area.
After the defeat of Napoleon Bonaparte at the 1815 Battle of Waterloo in Brabant, the Congress of Vienna (1815) gave sovereignty over the Austrian Netherlands – Belgium minus the East Cantons and Luxembourg – to the United Netherlands (Dutch: Verenigde Nederlanden) under Prince William I of Orange Nassau, making him William I of the United Kingdom of the Netherlands. William I started rapid industrialisation of the southern parts of the Kingdom. But the political system failed to forge a true union between the north and south. Most of the southern bourgeoisie was Roman Catholic and French-speaking, while the north was mainly Protestant and Dutch-speaking.
In 1815, the Dutch Senate was reinstated (Dutch: Eerste Kamer der Staten-Generaal). The nobility, mainly coming from the south, became more and more estranged from their northern colleagues. Resentment grew between the Roman Catholics from the south and the Protestants from the north, and also between the powerful liberal bourgeoisie from the south and their more moderate colleagues from the north. On 25 August 1830 (after a performance of Daniel Auber's opera La Muette de Portici in Brussels) the Belgian Revolution broke out. On 4 October 1830, the Provisional Government (Dutch: Voorlopig Bewind) proclaimed independence, which was later confirmed by the National Congress, which issued a new liberal constitution and declared the new state a constitutional monarchy under the House of Saxe-Coburg. Flanders now became part of the Kingdom of Belgium, which was recognized by the major European powers on 20 January 1831. The secession was recognized by the United Kingdom of the Netherlands on 19 April 1839.
In 1830, the Belgian Revolution led to the splitting up of the two countries. Belgium was confirmed as an independent state by the Treaty of London of 1839, but deprived of the eastern half of Limburg (now Dutch Limburg) and the eastern half of Luxembourg (now the Grand-Duchy of Luxembourg). Sovereignty over Zeelandic Flanders, south of the Westerscheldt river delta, was left with the Kingdom of the Netherlands, which was allowed to levy a toll on all traffic to Antwerp harbour until 1863.
In 1873, Dutch became an official language in public secondary schools. In 1898, Dutch and French were declared equal languages in laws and Royal orders. In 1930, the first Flemish university was opened.
The first official translation of the Belgian constitution in Dutch was not published until 1967.
Flanders (and Belgium as a whole) saw some of the greatest loss of life on the Western Front of the First World War, in particular from the three battles of Ypres.
The war strengthened Flemish identity and consciousness. The occupying German authorities took several Flemish-friendly measures. The resulting suffering of the war is remembered by Flemish organizations during the yearly Yser pilgrimage in Diksmuide at the monument of the Yser Tower.
During the interwar period and World War II, several right-wing fascist and national-socialist parties emerged in Belgium. Since these parties were promised more rights for the Flemings by the German government during World War II, many of them collaborated with the Nazi regime. After the war, collaborators (or people who were Zwart, "Black", during the war) were prosecuted and punished, among them many Flemish nationalists whose main political goal had been the emancipation of Flanders. As a result, Flemish nationalism is still often associated with right-wing and sometimes fascist ideologies.
After World War II, the differences between Dutch-speaking and French-speaking Belgians became clear in a number of conflicts, such as the Royal Question (whether King Leopold III should return, which most Flemings supported but Walloons did not) and the use of Dutch at the Catholic University of Leuven. As a result, several state reforms took place in the second half of the 20th century, which transformed unitary Belgium into a federal state with communities, regions and language areas. This also resulted in the establishment of a Flemish Parliament and Government. During the 1970s, all major political parties split into separate Dutch-speaking and French-speaking parties.
Several Flemish parties still advocate for more Flemish autonomy, some even for Flemish independence (see Partition of Belgium), whereas the French-speakers would like to keep the current state as it is. Recent governments (such as Verhofstadt I Government) have transferred certain federal competences to the regional governments.
On 13 December 2006, a spoof news broadcast by the Belgian Francophone public broadcasting station RTBF announced that Flanders had decided to declare independence from Belgium.
The 2007 federal elections showed more support for Flemish autonomy, marking the start of the 2007–2011 Belgian political crisis. All the political parties that advocated a significant increase of Flemish autonomy gained votes as well as seats in the Belgian federal parliament. This was especially the case for Christian Democratic and Flemish (CD&V) and the New Flemish Alliance (N-VA), which had participated on a shared electoral list. The trend continued during the 2009 regional elections, in which CD&V and N-VA were the clear winners in Flanders, and N-VA even became the largest party in Flanders and Belgium during the 2010 federal elections, which were followed by the longest-ever government formation, after which the Di Rupo I Government was formed without N-VA. Eight parties agreed on a sixth state reform, which aims to resolve the disputes between Flemings and French-speakers. However, the 2012 provincial and municipal elections continued the trend of N-VA becoming the biggest party in Flanders.
However, sociological studies show no parallel between the rise of nationalist parties and popular support for their agenda. Instead, a recent study revealed a majority in favour of returning regional competences to the federal level.
Both the Flemish Community and the Flemish Region are constitutional institutions of the Kingdom of Belgium, exercising certain powers within their jurisdiction, granted following a series of state reforms. In practice, the Flemish Community and Region together form a single body, with its own parliament and government, as the Community legally absorbed the competences of the Region. The parliament is a directly elected legislative body composed of 124 representatives. The government consists of up to 11 members and is presided over by a Minister-President, currently Geert Bourgeois (New Flemish Alliance), leading a coalition of his party (N-VA) with Christen-Democratisch en Vlaams (CD&V) and Open Vlaamse Liberalen en Democraten (Open VLD).
The area of the Flemish Community comprises the Flemish Region together with the bilingual Brussels-Capital Region. Roughly, the Flemish Community exercises competences originally oriented towards the individuals of the Community's language: culture (including audiovisual media), education, and the use of the language. Extensions to personal matters less directly associated with language comprise sports, health policy (curative and preventive medicine), and assistance to individuals (protection of youth, social welfare, aid to families, immigrant assistance services, etc.).
The Flemish Region has a population of more than 6 million (excluding the Dutch-speaking community in the Brussels Region, which is not a part of the Flemish Region). Roughly, the Flemish Region is responsible for territorial issues in a broad sense, including economy, employment, agriculture, water policy, housing, public works, energy, transport, the environment, town and country planning, nature conservation, credit, and foreign trade. It supervises the provinces, municipalities, and intercommunal utility companies.
The proportion of Dutch-speaking Flemish people in the Capital Region is estimated at between 11% and 15% (official figures do not exist, as there is no language census and no official subnationality). According to a survey conducted by the University of Louvain (UCLouvain) in Louvain-la-Neuve and published in June 2006, 51% of respondents from Brussels claimed to be bilingual, even if they do not have Dutch as their first language. They are governed by the Brussels Region for economic affairs and by the Flemish Community for educational and cultural issues.
As mentioned above, Flemish institutions such as the Flemish Parliament and Government, represent the Flemish Community and the Flemish Region. The region and the community thus de facto share the same parliament and the same government. All these institutions are based in Brussels. Nevertheless, both types of subdivisions (the Community and the Region) still exist legally and the distinction between both is important for the people living in Brussels. Members of the Flemish Parliament who were elected in the Brussels Region cannot vote on affairs belonging to the competences of the Flemish Region.
The official language for all Flemish institutions is Dutch. French enjoys a limited official recognition in a dozen municipalities along the borders with French-speaking Wallonia, and a large recognition in the bilingual Brussels Region. French is widely known in Flanders, with 59% claiming to know French according to a survey conducted by UCLouvain in Louvain-la-Neuve and published in June 2006.
Historically, the political parties reflected the pillarisation (verzuiling) in Flemish society. The traditional political parties of the three pillars are Christian-Democratic and Flemish (CD&V), the Open Flemish Liberals and Democrats (Open Vld) and the Socialist Party – Differently (sp.a).
However, during the last half century, many new political parties were founded in Flanders. One of the first was the nationalist People's Union, from which the right-nationalist Flemish Block (now Flemish Interest) split off, and which later dissolved into the now-defunct Spirit or Social Liberal Party (moderate nationalism, rather to the left of the spectrum) on the one hand, and the New Flemish Alliance (N-VA, more conservative but independentist) on the other. Other parties are the leftist alternative/ecological Green party; the short-lived anarchistic libertarian party ROSSEM; and, more recently, the conservative-right liberal List Dedecker, founded by Jean-Marie Dedecker, and the socialist Workers' Party.
The Flemish Block/Flemish Interest in particular saw electoral success around the turn of the century, and the New Flemish Alliance has done so during the last few elections, even becoming the largest party in the 2010 federal elections.
For some inhabitants, Flanders is more than just a geographical area or the federal institutions (Flemish Community and Region). Supporters of the Flemish Movement even call it a nation and pursue Flemish independence, but most people (approximately 75%) living in Flanders say they are proud to be Belgian and opposed to the dissolution of Belgium. 20% are even very proud, while some 25% are not proud and 8% are not proud at all. Students especially claim to be proud of their nationality, with 90% of them saying so. Of the people older than 55, 31% claim to be proud of being a Belgian. Particular opposition to secession comes from women, people employed in services, the highest social classes and people from big families. Strongest of all in opposing the notion are housekeepers, both housewives and house husbands.
In 2012, the Flemish government drafted a "Charter for Flanders" (Handvest voor Vlaanderen) of which the first article says "Vlaanderen is een deelstaat van de federale Staat België en maakt deel uit van de Europese Unie." ("Flanders is a component state of the federal State of Belgium and is part of the European Union").
Flanders shares its borders with Wallonia in the south, Brussels being an enclave within the Flemish Region. The rest of the border is shared with the Netherlands (Zeelandic Flanders in Zeeland, North Brabant and Limburg) in the north and east, and with France (French Flanders in Hauts-de-France) and the North Sea in the west. Voeren is an exclave of Flanders between Wallonia and the Netherlands, while Baarle-Hertog in Flanders forms a complicated series of enclaves and exclaves with Baarle-Nassau in the Netherlands. Germany, although bordering Wallonia and close to Voeren in Limburg, does not share a border with Flanders. The German-speaking Community of Belgium, also close to Voeren, does not border Flanders either. (The commune of Plombières, majority French speaking, lies between them.)
Flanders is a highly urbanised area, lying completely within the Blue Banana. Antwerp, Ghent, Bruges and Leuven are the largest cities of the Flemish Region. Antwerp, the largest, has a population of more than 500,000; Ghent has about 250,000 inhabitants, followed by Bruges with 120,000 and Leuven with almost 100,000.
Brussels is a part of Flanders as far as community matters are concerned, but does not belong to the Flemish Region.
Flanders has two main geographical regions: the coastal Yser basin plain in the north-west and a central plain. The first consists mainly of sand dunes and clayey alluvial soils in the polders. Polders are areas of land close to or below sea level that have been reclaimed from the sea, from which they are protected by dikes or, a little further inland, by fields that have been drained with canals. The central plain begins with similar soils along the lowermost Scheldt basin: a smooth, slowly rising, fertile area irrigated by many waterways, which reaches an average height of about five metres (16 feet) above sea level, with wide river valleys upstream and, to the east, the Campine region with its sandy soils at altitudes of around thirty metres. Near the southern edges close to Wallonia the land is slightly rougher and richer in calcium, with low hills reaching up to 150 m (490 ft) and small valleys, while at the eastern border with the Netherlands, in the Meuse basin, there are marl caves (mergelgrotten). The exclave around Voeren, between the Dutch border and Wallonia's Liège Province, attains a maximum altitude of 288 m (945 ft) above sea level.
The present-day Flemish Region covers 13,625 km² (5,261 sq mi) and is divided into five provinces, 22 arrondissements and 308 cities or municipalities.
The province of Flemish Brabant is the most recently created, being formed in 1995 after the splitting of the province of Brabant on a linguistic basis.
Most municipalities are made up of several former municipalities, now called deelgemeenten. The largest municipality (both in terms of population and area) is Antwerp, with more than half a million inhabitants. Its nine deelgemeenten have a special status and are called districts, which have an elected council and a college. While any municipality with more than 100,000 inhabitants can establish districts, only Antwerp has done so thus far. The smallest municipality (also both in terms of population and area) is Herstappe (Limburg).
The Flemish Community covers both the Flemish Region and, together with the French Community, the Brussels-Capital Region. Brussels, an enclave within the province of Flemish Brabant, is not divided into any province nor is it part of any. It coincides with the Arrondissement of Brussels-Capital and includes 19 municipalities.
The Flemish Government has its own local institutions in the Brussels-Capital Region, being the Vlaamse Gemeenschapscommissie (VGC), and its municipal antennae (Gemeenschapscentra, community centres for the Flemish community in Brussels). These institutions are independent from the educational, cultural and social institutions that depend directly on the Flemish Government. They exert, among others, all those cultural competences that outside Brussels fall under the provinces.
The climate is maritime temperate, with significant precipitation in all seasons (Köppen climate classification: Cfb; the average temperature is 3 °C (37 °F) in January, and 21 °C (70 °F) in July; the average precipitation is 65 millimetres (2.6 inches) in January, and 78 millimetres (3.1 inches) in July).
Total gross regional product (GRP) of Flanders in 2021 was €296 billion (excluding Brussels). Per capita GDP at purchasing power parity was 20% above the EU average. Flemish productivity per capita is about 13% higher than that in Wallonia, and wages are about 7% higher than in Wallonia.
Flanders was one of the first continental European areas to undergo the Industrial Revolution, in the 19th century. Initially, the modernization relied heavily on food processing and textiles. However, by the 1840s the textile industry of Flanders was in severe crisis and there was famine in Flanders (1846–50). After World War II, Antwerp and Ghent experienced a fast expansion of the chemical and petroleum industries. Flanders also attracted a large majority of foreign investments in Belgium. The 1973 and 1979 oil crises sent the economy into a recession. The steel industry remained in relatively good shape. In the 1980s and 1990s, the economic centre of Belgium continued to shift further to Flanders and is now concentrated in the populous Flemish Diamond area. Nowadays, the Flemish economy is mainly service-oriented.
Belgium was a founding member of the European Coal and Steel Community in 1951, which evolved into the present-day European Union. In 1999, the euro, the single European currency, was introduced in Flanders. It replaced the Belgian franc in 2002.
The Flemish economy is strongly export-oriented, in particular of high value-added goods. The main imports are food products, machinery, rough diamonds, petroleum and petroleum products, chemicals, clothing and accessories, and textiles. The main exports are automobiles, food and food products, iron and steel, finished diamonds, textiles, plastics, petroleum products, and non-ferrous metals. Since 1922, Belgium and Luxembourg have been a single trade market within a customs and currency union—the Belgium–Luxembourg Economic Union. Its main trading partners are Germany, the Netherlands, France, the United Kingdom, Italy, the United States, and Spain.
Antwerp is the number one diamond market in the world; diamond exports account for roughly one tenth of Belgian exports. The Antwerp-based BASF plant is the largest BASF site outside Germany and accounts on its own for about 2% of Belgian exports. Other industrial and service activities in Antwerp include car manufacturing, telecommunications and photographic products.
Flanders is home to several science and technology institutes, such as IMEC, VITO, Flanders DC, and Flanders Make.
Flanders has developed an extensive transportation infrastructure of ports, canals, railways and highways. The Port of Antwerp is the second-largest in Europe, after Rotterdam. Other ports are Bruges-Zeebrugge, Ghent and Ostend, of which Zeebrugge and Ostend are located at the Belgian coast.
Whereas railways are managed by the federal National Railway Company of Belgium, other public transport (De Lijn) and roads are managed by the Flemish region.
The main airport is Brussels Airport. The only other civilian airport with scheduled services in Flanders is Antwerp International Airport, but two more handle cargo or charter flights: Ostend-Bruges International Airport and Kortrijk-Wevelgem International Airport, both in West Flanders.
The highest population density is found in the area circumscribed by the Brussels-Antwerp-Ghent-Leuven agglomerations, which surrounds Mechelen and is known as the Flemish Diamond, as well as in other important urban centres such as Bruges, Roeselare and Kortrijk to the west, and Turnhout and Hasselt to the east. On 1 January 2015, the Flemish Region had a population of 6,444,127, and about 15% of the 1,175,173 people in the Brussels Region are also considered Flemish.
The Belgian constitution provides for freedom of religion, and the various governments in general respect this right in practice. Since independence, Catholicism, counterbalanced by strong freethought movements, has had an important role in Belgium's politics, since the 20th century in Flanders mainly via the Christian trade union ACV and the Christian Democratic and Flemish party (CD&V). According to the 2001 Survey and Study of Religion, about 47 percent of the Belgian population identify themselves as belonging to the Catholic Church, while Islam is the second-largest religion at 3.5 percent. A 2006 inquiry in Flanders, considered more religious than Wallonia, showed that 55% considered themselves religious, and 36% believed that God created the world.
Jews have been present in Flanders for a long time, in particular in Antwerp. More recently, Muslims have immigrated to Flanders, now forming the largest minority religion with about 3.9% in the Flemish Region and 25% in Brussels. The largest Muslim group is Moroccan in origin, while the second largest is Turkish in origin.
Education is compulsory from the ages of six to 18, but most Flemings continue to study until around 23. Among the Organisation for Economic Co-operation and Development countries in 1999, Flanders had the third-highest proportion of 18- to 21-year-olds enrolled in postsecondary education. Flanders also scores very high in international comparative studies on education. Its secondary school students consistently rank among the top three for mathematics and science. However, the success is not evenly spread: ethnic minority youth score consistently lower, and the difference is larger than in most comparable countries.
Mirroring the historical political conflicts between the secular and Catholic segments of the population, the Flemish educational system is split into a secular branch controlled by the communities, the provinces or the municipalities, and a subsidised religious—mostly Catholic—branch. For the subsidised schools, the main costs, such as teachers' wages and building maintenance, are completely borne by the Flemish government. Subsidised schools are also free to determine their own teaching and examination methods, but in exchange they must be able to prove that certain minimum standards are met by keeping records of the lessons given and the exams taken. At least for the Catholic schools, the religious authorities have very limited power over the schools, nor do the schools have much power of their own; instead, the Catholic schools are members of the Catholic umbrella organisation VSKO, which determines most practicalities for its schools, such as the recommended schedules per field of study. However, there is freedom of education in Flanders, which means not only that every pupil can choose his or her preferred school, but also that any organisation can found a school and even be subsidised, provided it abides by the rules. This has also resulted in some smaller school systems that follow 'methodical pedagogies' (e.g. Steiner, Montessori or Freinet) or serve the Jewish and Protestant minorities.
During the school year 2003–2004, 68.30% of the total population of children between the ages of six and 18 went to subsidised private schools (both religious schools and schools following 'methodical pedagogies').
The considerable freedom given to schools results in constant competition to be the "best" school, and schools acquire reputations among parents and employers. Since subsidies depend on the number of pupils, it is important for a school to be regarded as the best. This competition has been identified as one of the main reasons for the high overall quality of Flemish education. However, the importance of a school's reputation also makes schools more eager to expel pupils who do not perform well, contributing to the ethnic differences in results and to the well-known waterfall system: pupils start high in the perceived hierarchy and then drop towards more vocationally oriented tracks or "easier" schools when they can no longer handle the pressure.
Healthcare is a federal matter, but the Flemish Government is responsible for care, health education and preventive care.
The standard language in Flanders is Dutch; spelling and grammar are regulated by a single authority, the Dutch Language Union (Nederlandse Taalunie), comprising a committee of ministers of the Flemish and Dutch governments, their advisory council of appointed experts, a controlling commission of 22 parliamentarians, and a secretariat. The term Flemish can be applied to the Dutch spoken in Flanders; it shows many regional and local variations.
The biggest difference between Belgian Dutch and Dutch used in the Netherlands is in the pronunciation of words. The Dutch spoken in the north of the Netherlands is typically described as being "sharper", while Belgian Dutch is "softer". In Belgian Dutch, there are also fewer vowels pronounced as diphthongs. When it comes to spelling, Belgian Dutch language purists historically avoided writing words using a French spelling, or searched for specific translations of words derived from French, while the Dutch often retain the French spelling. For example, the Dutch word "punaise" (English: Drawing pin) is derived directly from the French language. Belgian Dutch language purists have lobbied to accept the word "duimspijker" (literally: thumb spike) as official Dutch, though the Dutch Language Union never accepted it as standard Dutch. Other proposals by purists were sometimes accepted, and sometimes reverted again in later spelling revisions. As language purists were quite often professionally involved in language (e.g. as a teacher), these unofficial purist translations are found more often in Belgian Dutch texts.
The earliest example of literature in non-standardized dialects in the current area of Flanders is Hendrik van Veldeke's Eneas Romance, the first courtly romance in a Germanic language (12th century). With a writer of Hendrik Conscience's stature, Flemish literature rose ahead of French literature in Belgium's early history. Guido Gezelle not only explicitly referred to his writings as Flemish but used the language in many of his poems, and strongly defended it:
Original from kleengedichtjes (1860?)
Gij zegt dat 't vlaamsch te niet zal gaan: 't en zal! dat 't waalsch gezwets zal boven slaan: 't en zal! Dat hopen, dat begeren wij: dat zeggen en dat zweren wij: zoo lange als wij ons weren, wij: 't en zal, 't en zal, 't en zal!
You say Flemish will fade away: It shan't! that Walloon twaddle will have its way: It shan't! This we hope, for this we hanker: this we say and this we vow: as long as we fight back, we: It shan't, It shan't, It shan't!
The distinction between Dutch and Flemish literature, often perceived politically, is also made on intrinsic grounds by some experts such as Kris Humbeeck, professor of literature at the University of Antwerp. Nevertheless, most Dutch-language literature read (and appreciated to varying degrees) in Flanders is the same as that in the Netherlands.
Influential Flemish writers include Ernest Claes, Stijn Streuvels and Felix Timmermans. Their novels mostly describe rural life in Flanders in the 19th century and at the beginning of the 20th. Widely read by the older generations, they are considered somewhat old-fashioned by present-day critics. Some famous Flemish writers of the early 20th century wrote in French, including Maurice Maeterlinck, winner of the 1911 Nobel Prize in Literature, and Emile Verhaeren. They were followed by a younger generation, including Paul van Ostaijen and Gaston Burssens, who were active in the Flemish Movement. Still widely read and translated into other languages (including English) are the novels of authors such as Willem Elsschot, Louis Paul Boon and Hugo Claus. The recent crop of writers includes the novelists Tom Lanoye and Herman Brusselmans, and poets such as the married couple Herman de Coninck and Kristien Hemmerechts.
At the creation of the Belgian state, French was the only official language. Historically, Flanders was a Dutch-speaking region. For a long period, French was used as a second language and, as elsewhere in Europe, was commonly spoken among the aristocracy. There is still a French-speaking minority in Flanders, especially in the municipalities with language facilities, along the language border and in the Brussels periphery (Vlaamse Rand), though many of them are French-speakers who migrated to Flanders in recent decades.
In French Flanders, French is the only official language and now the native language of the majority of the population, but there is still a minority of Dutch-speakers living there. French is also the primary language in the officially bilingual Brussels Capital Region (see Francization of Brussels).
Many Flemings are also able to speak French. Children in Flanders generally get their first French lessons in the fifth primary year (normally around age 10), but the limited use of French outside the educational context makes it hard to maintain a good level, and proficiency in French is declining. Flemish pupils are also required to take English lessons as their third language, normally from the second secondary year (around age 14), but the ubiquity of English in films, music, IT and even advertising makes English easier to learn and maintain.
The public radio and television broadcaster in Flanders is VRT, which operates the TV channels één, Canvas, Ketnet, OP12 and (together with the Netherlands) BVN. The Flemish provinces each have up to two TV channels as well. Commercial television broadcasters include vtm and Vier (VT4). Popular TV series include Thuis and F.C. De Kampioenen.
The five most successful Flemish films were Loft (2008; 1,186,071 visitors), Koko Flanel (1990; 1,082,000 tickets sold), Hector (1987; 933,000 tickets sold), Daens (1993; 848,000 tickets sold) and De Zaak Alzheimer (2003; 750,000 tickets sold). The first and last were directed by Erik Van Looy, and American remakes are being made of both: The Loft (2012) and The Memory of a Killer, respectively. The other three were directed by Stijn Coninx.
Newspapers are grouped under three main publishers: De Persgroep, with Het Laatste Nieuws (the most popular newspaper in Flanders), De Morgen and De Tijd; Corelio, with De Gentenaar (the oldest extant Flemish newspaper), Het Nieuwsblad and De Standaard; and Concentra, which publishes Gazet van Antwerpen and Het Belang van Limburg.
Magazines include Knack and HUMO.
Association football (soccer) is one of the most popular sports in both parts of Belgium, together with cycling, tennis, swimming and judo.
In cycling, the Tour of Flanders is considered one of the five "Monuments". Other "Flanders Classics" races include Dwars door Vlaanderen and Gent–Wevelgem. Eddy Merckx is widely regarded as the greatest cyclist of all time, with five victories in the Tour de France and numerous other cycling records. His hour speed record (set in 1972) stood for 12 years.
Jean-Marie Pfaff, a former Belgian goalkeeper, is considered one of the greatest in the history of football (soccer).
Kim Clijsters (like the French-speaking Belgian Justine Henin) was twice named Player of the Year by the Women's Tennis Association, having been ranked the world's number one female tennis player.
Kim Gevaert and Tia Hellebaut are notable track and field stars from Flanders.
The 1920 Summer Olympics were held in Antwerp. Jacques Rogge was president of the International Olympic Committee from 2001 to 2013.
The Flemish government agency for sports is Bloso.
Flanders is known for its music festivals, like the annual Rock Werchter, Tomorrowland and Pukkelpop. The Gentse Feesten is another very large yearly event.
The best-selling Flemish group or artist is the (Flemish-Dutch) group 2 Unlimited, followed by (Italian-born) Rocco Granata, Technotronic, Helmut Lotti and Vaya Con Dios.
The weekly chart of best-selling singles is the Ultratop 50. "Kvraagetaan" by the Fixkes holds the current record for the longest time at No. 1 on the chart. | [
{
"paragraph_id": 0,
"text": "Flanders (UK: /ˈflɑːndərz/, US: /ˈflæn-/; Dutch: Vlaanderen [ˈvlaːndərə(n)] ) is the Dutch-speaking northern portion of Belgium and one of the communities, regions and language areas of Belgium. However, there are several overlapping definitions, including ones related to culture, language, politics, and history, and sometimes involving neighbouring countries. The demonym associated with Flanders is Fleming, while the corresponding adjective is Flemish, which is also the name of the local dialect. The official capital of Flanders is the City of Brussels, although the Brussels-Capital Region that includes it has an independent regional government. The powers of the government of Flanders consist, among others, of economic affairs in the Flemish Region and the community aspects of Flanders life in Brussels, such as Flemish culture and education.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Geographically, Flanders is mainly flat, and has a small section of coast on the North Sea. It borders the French department of Nord to the south-west near the coast, the Dutch provinces of Zeeland, North Brabant and Limburg to the north and east, and the Walloon provinces of Hainaut, Walloon Brabant and Liège to the south. Despite accounting for only 45% of Belgium's territory, more than half the population lives there – 6,653,062 (or 57%) out of 11,431,406 Belgian inhabitants. Much of Flanders is agriculturally fertile and densely populated at 483/km (1,250/sq mi). The Brussels Region is an officially bilingual enclave within the Flemish Region. Flanders also has exclaves of its own: Voeren in the east is between Wallonia and the Netherlands and Baarle-Hertog in the north consists of 22 exclaves surrounded by the Netherlands. Not including Brussels, there are five present-day Flemish provinces: Antwerp, East Flanders, Flemish Brabant, Limburg and West Flanders. The official language is Dutch.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The area of today's Flanders has figured prominently in European history since the Middle Ages. The original County of Flanders stretched around AD 900 from the Strait of Dover to the Scheldt estuary and expanded from there. This county also still corresponds roughly with the modern-day Belgian provinces of West Flanders and East Flanders, along with neighbouring parts of France and the Netherlands. In this period, cities such as Ghent and Bruges of the historic County of Flanders, and later Antwerp of the Duchy of Brabant made it one of the richest and most urbanised parts of Europe, trading, and weaving the wool of neighbouring lands into cloth for both domestic use and export. As a consequence, a very sophisticated culture developed, with impressive achievements in the arts and architecture, rivaling those of northern Italy.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Belgium was one of the centres of the 19th-century Industrial Revolution, but this occurred mainly in French-speaking Wallonia. In the second half of the 20th century, and due to massive national investments in port infrastructure, Flanders' economy modernised rapidly, and today Flanders and Brussels are much wealthier than Wallonia, being among the wealthiest regions in Europe and the world. In accordance with late 20th century Belgian state reforms, Flanders was made into two political entities: the Flemish Region (Dutch: Vlaams Gewest) and the Flemish Community (Dutch: Vlaamse Gemeenschap). These entities were merged, although geographically the Flemish Community, which has a broader cultural mandate, covers Brussels, whereas the Flemish Region does not.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The term \"Flanders\" has several main modern meanings:",
"title": "Terminology"
},
{
"paragraph_id": 5,
"text": "The name originally applied to the ancien régime territory called the County of Flanders, that existed from the 8th century (Latin Flandria) until its absorption by the French First Republic. Until the 1600s, this county also extended over parts of what are now France and the Netherlands.",
"title": "Terminology"
},
{
"paragraph_id": 6,
"text": "Especially in international discussions, the significance of the County of Flanders and its counts eroded over time, but the designation was used for a bigger territory. \"Flanders\" (and Latin \"Belgium\") were the first two common names used for the Burgundian Netherlands. With the breakaway of the northern Netherlands in the early modern period, the term Flanders continued to be associated with the whole southern part of the Low Countries—the Southern, Spanish or Austrian Netherlands, which were the successors of the Burgundian state, and predecessors of modern Belgium.",
"title": "Terminology"
},
{
"paragraph_id": 7,
"text": "The term \"Flemish\" came to be a term for the language Dutch, and during the 19th and 20th centuries, it became increasingly common to refer exclusively to the Dutch-speaking part of Belgium as \"Flanders\". Belgium divided itself into official French- and Dutch-speaking parts starting in the early '60s. Today Flanders extends over the northern part of Belgium, including not only the Dutch-speaking Belgian parts of the medieval Duchy of Brabant, which was united with Flanders since the Middle Ages, but also Belgian Limburg, which corresponds closely to the medieval County of Loon, and was never under Burgundian control.",
"title": "Terminology"
},
{
"paragraph_id": 8,
"text": "The ambiguity between this wider cultural area and that of the county or province still remains in discussions about the region. In most present-day contexts however, the term Flanders is taken to refer to either the political, social, cultural, and linguistic community (and the corresponding official institution, the Flemish Community), or the geographical area, one of the three institutional regions in Belgium, namely the Flemish Region.",
"title": "Terminology"
},
{
"paragraph_id": 9,
"text": "In the history of art and other fields, the adjectives Flemish and Netherlandish are commonly used to designate all the artistic production in this area before about 1580, after which it refers specifically to the southern Netherlands. For example, the term \"Flemish Primitives\", now outdated in English but used in French, Dutch and other languages, is a synonym for \"Early Netherlandish painting\", and it is not uncommon to see Mosan art categorized as Flemish art. In music the Franco-Flemish School is also known as the Dutch School.",
"title": "Terminology"
},
{
"paragraph_id": 10,
"text": "Within this Dutch-speaking part of Belgium, French has never ceased to be spoken by some citizens and Jewish groups have been speaking Yiddish in Antwerp for centuries. Regardless of nationality or linguistic background, according to Belgian Law education in schools located in the Flemish Region must be mainly in the Dutch language. In Brussels, teaching is also done in French.",
"title": "Terminology"
},
{
"paragraph_id": 11,
"text": "When Julius Caesar conquered the area he described it as the less economically developed and more warlike part of Gallia Belgica. His informants told him that especially in the east, the tribes claimed ancestral connections and kinship with the \"Germanic\" peoples then east of the Rhine. Under the Roman empire the whole of Gallia Belgica became an administrative province. The future counties of Flanders and Brabant remained part of this province connected to what is now France, but in the east modern Limburg became part of the Rhine frontier province of Germania Inferior connected to what is now the Netherlands and Germany. Gallia Belgica and Germania Inferior were the two most northerly continental provinces of the Roman empire.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In the future county of Flanders, the main Belgic tribe in early Roman times was the Menapii, but also on the coast were the Marsacii and Morini. In the central part of modern Belgium were the Nervii, whose territory corresponded to medieval Brabant as well as French-speaking Hainaut. In the east was the large district of the Tungri which covered both French- and Dutch-speaking parts of eastern Belgium. The Tungri were understood to have links to Germanic tribes east of the Rhine. Another notable group were the Toxandrians who appear to have lived in the Kempen region, in the northern parts of both the Nervian and Tungrian districts, probably stretching into the modern Netherlands. The Roman administrative districts (civitates) of the Menapii, Nervii and Tungri therefore corresponded roughly with the medieval counties of Flanders, Brabant and Loon, and the modern Flemish provinces of East and West Flanders (Menapii), Brabant and Antwerp (the northern Nervii), and Belgian Limburg (part of the Tungri). Brabant appears to have been separated from the Tungri by a relatively unpopulated forest area, the Silva Carbonaria, forming a natural boundary between northeast and southwest Belgium.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Linguistically, the tribes in this area were under Celtic influence in the south, and Germanic influence in the east, but there is disagreement about what languages were spoken locally (apart from Vulgar Latin), and there may even have been an intermediate \"Nordwestblock\" language related to both. By the first century AD, Germanic languages appear to have become prevalent in the area of the Tungri.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "As Roman influence waned, Frankish populations settled in the Tungiran area east of the Silva Carbonaria, and eventually pushed through it under Chlodio. They had kings in each Roman district (civitas). In the meantime, the Franks contributed to the Roman military. The first Merovingian king Childeric I was king of the Franks within the military of Gaul. He became leader of the administration of Belgica Secunda, which included the civitas of the Menapii (the future county of Flanders). From there, his son Clovis I managed to conquer both the Roman populations of northern France and the Frankish populations beyond the forest areas.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The County of Flanders was a feudal fief in West Francia. The first certain Count in the comital family, Baldwin I of Flanders, is first reported in a document of 862, when he eloped with a daughter of his king Charles the Bald. The region developed as a medieval economic power with a large degree of political autonomy. While its trading cities remained strong, it was weakened and divided when districts fell under direct French royal rule in the late 12th century. The remaining parts of Flanders came under the rule of the counts of neighbouring imperial Hainaut under Baldwin V of Hainaut in 1191.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "During the late Middle Ages, Flanders's trading towns (notably Ghent, Bruges and Ypres) made it one of the richest and most urbanized parts of Europe, weaving the wool of neighbouring lands into cloth for both domestic use and export. As a consequence, a sophisticated culture developed, with impressive art and architecture, rivaling those of northern Italy. Ghent, Bruges, Ypres and the Franc of Bruges formed the Four Members, a form of parliament that exercised considerable power in Flanders.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Increasingly powerful from the 12th century, the territory's autonomous urban communes were instrumental in defeating a French attempt at annexation (1300–1302), finally defeating the French in the Battle of the Golden Spurs (11 July 1302), near Kortrijk. Two years later, the uprising was defeated and Flanders indirectly remained part of the French Crown. Flemish prosperity waned in the following century, due to widespread European population decline following the Black Death of 1348, the disruption of trade during the Anglo-French Hundred Years' War (1337–1453), and increased English cloth production. Flemish weavers had gone over to Worstead and North Walsham in Norfolk in the 12th century and established the woolen industry.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The County of Flanders started to take control of the neighbouring County of Brabant during the life of Louis II, Count of Flanders (1330-1384), who fought his sister-in-law Joanna, Duchess of Brabant for control of it.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The entire area, straddling the ancient boundary of France and the Holy Roman Empire, later passed to Philip the Bold in 1384, the Duke of Burgundy, with his capital in Brussels. The titles were eventually more clearly united under his grandson Philip the Good (1396 – 1467). This large Duchy passed in 1477 to the Habsburg dynasty, and in 1556 to the kings of Spain. Western and southern districts of Flanders were confirmed under French rule under successive treaties of 1659 (Artois), 1668 and 1678.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The County of Loon, approximately the modern Flemish province of Limburg, remained independent of France, forming a part of the Prince-Bishopric of Liège until the French Revolution, but surrounded by the Burgundians, and under their influence.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In 1500, Charles V was born in Ghent. He inherited the Seventeen Provinces (1506), Spain (1516) with its colonies and in 1519 was elected Holy Roman Emperor. Charles V issued the Pragmatic Sanction of 1549, which established the Low Countries as the Seventeen Provinces (or Spanish Netherlands in its broad sense) as an entity separate from the Holy Roman Empire and from France. In 1556 Charles V abdicated due to ill health (he suffered from crippling gout). Spain and the Seventeen Provinces went to his son, Philip II of Spain.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Over the first half of the 16th century Antwerp grew to become the second-largest European city north of the Alps by 1560. Antwerp was the richest city in Europe at this time. According to Luc-Normand Tellier \"It is estimated that the port of Antwerp was earning the Spanish crown seven times more revenues than the Americas.\"",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Meanwhile, Protestantism had reached the Low Countries. Among the wealthy traders of Antwerp, the Lutheran beliefs of the German Hanseatic traders found appeal, perhaps partly for economic reasons. The spread of Protestantism in this city was aided by the presence of an Augustinian cloister (founded 1514) in the St. Andries quarter. Luther, an Augustinian himself, had taught some of the monks, and his works were in print by 1518. The first Lutheran martyrs came from Antwerp. The Reformation resulted in consecutive but overlapping waves of reform: a Lutheran, followed by a militant Anabaptist, then a Mennonite, and finally a Calvinistic movement. These movements existed independently of each other.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Philip II, a devout Catholic and self-proclaimed protector of the Counter-Reformation, suppressed Calvinism in Flanders, Brabant and Holland (what is now approximately Belgian Limburg was part of the Prince-Bishopric of Liège and was Catholic de facto). In 1566, the wave of iconoclasm known as the Beeldenstorm was a prelude to religious war between Catholics and Protestants, especially the Anabaptists. The Beeldenstorm started in what is now French Flanders, with open-air sermons (Dutch: hagepreken) that spread through the Low Countries, first to Antwerp and Ghent, and from there further east and north.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Subsequently, Philip II of Spain sent the Duke of Alba to the Provinces to repress the revolt. Alba recaptured the southern part of the Provinces, who signed the Union of Atrecht, which meant that they would accept the Spanish government on condition of more freedom. But the northern part of the provinces signed the Union of Utrecht and settled in 1581 the Republic of the Seven United Netherlands. Spanish troops quickly started fighting the rebels, and the Spanish armies conquered the important trading cities of Bruges and Ghent. Antwerp, which was then the most important port in the world, also had to be conquered. But before the revolt was defeated, a war between Spain and England broke out, forcing Spanish troops to halt their advance. On 17 August 1585, Antwerp fell. This ended the Eighty Years' War for the (from now on) Southern Netherlands. The United Provinces (the Northern Netherlands) fought on until 1648 – the Peace of Westphalia.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "During the war with England, the rebels from the north, strengthened by refugees from the south, started a campaign to reclaim areas lost to Philip II's Spanish troops. They conquered a considerable part of Brabant (the later North Brabant of the Netherlands), and the south bank of the Scheldt estuary (Zeelandic Flanders), before being stopped by Spanish troops. The front at the end of this war stabilized and became the border between present-day Belgium and the Netherlands. The Dutch (as they later became known) had managed to reclaim enough of Spanish-controlled Flanders to close off the river Scheldt, effectively cutting Antwerp off from its trade routes.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The fall of Antwerp to the Spanish and the closing of the Scheldt caused considerable emigration. Many Calvinist merchants of Antwerp and other Flemish cities left Flanders and migrated north. Many of them settled in Amsterdam, which was a smaller port, important only in the Baltic trade. The Flemish exiles helped to rapidly transform Amsterdam into one of the world's most important ports. This is why the exodus is sometimes described as \"creating a new Antwerp\".",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Flanders and Brabant, went into a period of relative decline from the time of the Thirty Years War. In the Northern Netherlands, the mass emigration from Flanders and Brabant became an important driving force behind the Dutch Golden Age.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Although arts remained relatively impressive for another century with Peter Paul Rubens (1577–1640) and Anthony van Dyck, Flanders lost its former economic and intellectual power under Spanish, Austrian, and French rule. Heavy taxation and rigid imperial political control compounded the effects of industrial stagnation and Spanish-Dutch and Franco-Austrian conflict. The Southern Netherlands suffered severely under the War of the Spanish Succession. But under the reign of Empress Maria-Theresia, these lands again flourished economically. Influenced by the Enlightenment, the Austrian Emperor Joseph II was the first sovereign who had been in the Southern Netherlands since King Philip II of Spain left them in 1559.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In 1794, the French Republican Army started using Antwerp as the northernmost naval port of France. The following year, France officially annexed Flanders as the départements of Lys, Escaut, Deux-Nèthes, Meuse-Inférieure and Dyle. Obligatory (French) army service for all men aged 16–25 years was a main reason for the uprising against the French in 1798, known as the Boerenkrijg (Peasants' War), with the heaviest fighting in the Campine area.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "After the defeat of Napoleon Bonaparte at the 1815 Battle of Waterloo in Brabant, the Congress of Vienna (1815) gave sovereignty over the Austrian Netherlands – Belgium minus the East Cantons and Luxembourg – to the United Netherlands (Dutch: Verenigde Nederlanden) under Prince William I of Orange Nassau, making him William I of the United Kingdom of the Netherlands. William I started rapid industrialisation of the southern parts of the Kingdom. But the political system failed to forge a true union between the north and south. Most of the southern bourgeoisie was Roman Catholic and French-speaking, while the north was mainly Protestant and Dutch-speaking.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "In 1815, the Dutch Senate was reinstated (Dutch: Eerste Kamer der Staaten Generaal). The nobility, mainly coming from the south, became more and more estranged from their northern colleagues. Resentment grew between the Roman Catholics from the south and the Protestants from the north, and also between the powerful liberal bourgeoisie from the south and their more moderate colleagues from the north. On 25 August 1830 (after the showing of the opera 'La Muette de Portici' of Daniel Auber in Brussels) the Belgian Revolution sparked. On 4 October 1830, the Provisional Government (Dutch: Voorlopig Bewind) proclaimed its independence, which was later confirmed by the National Congress that issued a new Liberal Constitution and declared the new state a Constitutional Monarchy, under the House of Saxe-Coburg. Flanders now became part of the Kingdom of Belgium, which was recognized by the major European Powers on 20 January 1831. The cessation was recognized by the United Kingdom of the Netherlands on 19 April 1839.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "In 1830, the Belgian Revolution led to the splitting up of the two countries. Belgium was confirmed as an independent state by the Treaty of London of 1839, but deprived of the eastern half of Limburg (now Dutch Limburg), and the Eastern half of Luxembourg (now the Grand-Duchy of Luxembourg). Sovereignty over Zeelandic Flanders, south of the Westerscheldt river delta, was left with the Kingdom of the Netherlands, which was allowed to levy a toll on all traffic to Antwerp harbour until 1863.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In 1873, Dutch became an official language in public secondary schools. In 1898, Dutch and French were declared equal languages in laws and Royal orders. In 1930, the first Flemish university was opened.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "The first official translation of the Belgian constitution in Dutch was not published until 1967.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Flanders (and Belgium as a whole) saw some of the greatest loss of life on the Western Front of the First World War, in particular from the three battles of Ypres.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "The war strengthened Flemish identity and consciousness. The occupying German authorities took several Flemish-friendly measures. The resulting suffering of the war is remembered by Flemish organizations during the yearly Yser pilgrimage in Diksmuide at the monument of the Yser Tower.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "During the interbellum and World War II, several right-wing fascist and/or national-socialistic parties emerged in Belgium. Since these parties were promised more rights for the Flemings by the German government during World War II, many of them collaborated with the Nazi regime. After the war, collaborators (or people who were Zwart, \"Black\" during the war) were prosecuted and punished, among them many Flemish Nationalists whose main political goal had been the emancipation of Flanders. As a result, until today Flemish Nationalism is often associated with right-wing and sometimes fascist ideologies.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "After World War II, the differences between Dutch-speaking and French-speaking Belgians became clear in a number of conflicts, such as the Royal Question, the question whether King Leopold III should return (which most Flemings supported but Walloons did not) and the use of Dutch in the Catholic University of Leuven. As a result, several state reforms took place in the second half of the 20th century, which transformed the unitary Belgium into a federal state with communities, regions and language areas. This resulted also in the establishment of a Flemish Parliament and Government. During the 1970s, all major political parties split into a Dutch and French-speaking party.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "Several Flemish parties still advocate for more Flemish autonomy, some even for Flemish independence (see Partition of Belgium), whereas the French-speakers would like to keep the current state as it is. Recent governments (such as Verhofstadt I Government) have transferred certain federal competences to the regional governments.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "On 13 December 2006, a spoof news broadcast by the Belgian Francophone public broadcasting station RTBF announced that Flanders had decided to declare independence from Belgium.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "The 2007 federal elections showed more support for Flemish autonomy, marking the start of the 2007–2011 Belgian political crisis. All the political parties that advocated a significant increase of Flemish autonomy gained votes as well as seats in the Belgian federal parliament. This was especially the case for Christian Democratic and Flemish and New Flemish Alliance (N-VA) (who had participated on a shared electoral list). The trend continued during the 2009 regional elections, where CD&V and N-VA were the clear winners in Flanders, and N-VA became even the largest party in Flanders and Belgium during the 2010 federal elections, followed by the longest-ever government formation after which the Di Rupo I Government was formed excluding N-VA. Eight parties agreed on a sixth state reform which aim to solve the disputes between Flemings and French-speakers. However, the 2012 provincial and municipal elections continued the trend of N-VA becoming the biggest party in Flanders.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "However, sociological studies show no parallel between the rise of nationalist parties and popular support for their agenda. Instead, a recent study revealed a majority in favour of returning regional competences to the federal level.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "Both the Flemish Community and the Flemish Region are constitutional institutions of the Kingdom of Belgium, exercising certain powers within their jurisdiction, granted following a series of state reforms. In practice, the Flemish Community and Region together form a single body, with its own parliament and government, as the Community legally absorbed the competences of the Region. The parliament is a directly elected legislative body composed of 124 representatives. The government consists of up to 11 members and is presided by a Minister-President, currently Geert Bourgeois (New Flemish Alliance) leading a coalition of his party (N-VA) with Christen-Democratisch en Vlaams (CD&V) and Open Vlaamse Liberalen en Democraten (Open VLD).",
"title": "Government and politics"
},
{
"paragraph_id": 45,
"text": "The area of the Flemish Community is represented on the maps above, including the area of the Brussels-Capital Region (hatched on the relevant map). Roughly, the Flemish Community exercises competences originally oriented towards the individuals of the Community's language: culture (including audiovisual media), education, and the use of the language. Extensions to personal matters less directly associated with language comprise sports, health policy (curative and preventive medicine), and assistance to individuals (protection of youth, social welfare, aid to families, immigrant assistance services, etc.)",
"title": "Government and politics"
},
{
"paragraph_id": 46,
"text": "The area of the Flemish Region is represented on the maps above. It has a population of more than 6 million (excluding the Dutch-speaking community in the Brussels Region, grey on the map for it is not a part of the Flemish Region). Roughly, the Flemish Region is responsible for territorial issues in a broad sense, including economy, employment, agriculture, water policy, housing, public works, energy, transport, the environment, town and country planning, nature conservation, credit, and foreign trade. It supervises the provinces, municipalities, and intercommunal utility companies.",
"title": "Government and politics"
},
{
"paragraph_id": 47,
"text": "The number of Dutch-speaking Flemish people in the Capital Region is estimated to be between 11% and 15% (official figures do not exist as there is no language census and no official subnationality). According to a survey conducted by the University of Louvain (UCLouvain) in Louvain-la-Neuve and published in June 2006, 51% of respondents from Brussels claimed to be bilingual, even if they do not have Dutch as their first language. They are governed by the Brussels Region for economics affairs and by the Flemish Community for educational and cultural issues.",
"title": "Government and politics"
},
{
"paragraph_id": 48,
"text": "As mentioned above, Flemish institutions such as the Flemish Parliament and Government, represent the Flemish Community and the Flemish Region. The region and the community thus de facto share the same parliament and the same government. All these institutions are based in Brussels. Nevertheless, both types of subdivisions (the Community and the Region) still exist legally and the distinction between both is important for the people living in Brussels. Members of the Flemish Parliament who were elected in the Brussels Region cannot vote on affairs belonging to the competences of the Flemish Region.",
"title": "Government and politics"
},
{
"paragraph_id": 49,
"text": "The official language for all Flemish institutions is Dutch. French enjoys a limited official recognition in a dozen municipalities along the borders with French-speaking Wallonia, and a large recognition in the bilingual Brussels Region. French is widely known in Flanders, with 59% claiming to know French according to a survey conducted by UCLouvain in Louvain-la-Neuve and published in June 2006.",
"title": "Government and politics"
},
{
"paragraph_id": 50,
"text": "Historically, the political parties reflected the pillarisation (verzuiling) in Flemish society. The traditional political parties of the three pillars are Christian-Democratic and Flemish (CD&V), the Open Flemish Liberals and Democrats (Open Vld) and the Socialist Party – Differently (sp.a).",
"title": "Government and politics"
},
{
"paragraph_id": 51,
"text": "However, during the last half century, many new political parties were founded in Flanders. One of the first was the nationalist People's Union, of which the right nationalist Flemish Block (now Flemish Interest) split off, and which later dissolved into the now-defunct Spirit or Social Liberal Party, moderate nationalism rather left of the spectrum, on the one hand, and the New Flemish Alliance (N-VA), more conservative but independentist, on the other hand. Other parties are the leftist alternative/ecological Green party; the short-lived anarchistic libertarian spark ROSSEM and more recently the conservative-right liberal List Dedecker, founded by Jean-Marie Dedecker, and the socialist Workers' Party.",
"title": "Government and politics"
},
{
"paragraph_id": 52,
"text": "Particularly the Flemish Block/Flemish Interest has seen electoral success roughly around the turn of the century, and the New Flemish Alliance during the last few elections, even becoming the largest party in the 2010 federal elections.",
"title": "Government and politics"
},
{
"paragraph_id": 53,
"text": "For some inhabitants, Flanders is more than just a geographical area or the federal institutions (Flemish Community and Region). Supporters of the Flemish Movement even call it a nation and pursue Flemish independence, but most people (approximately 75%) living in Flanders say they are proud to be Belgian and opposed to the dissolution of Belgium. 20% is even very proud, while some 25% are not proud and 8% is very not proud. Mostly students claim to be proud of their nationality, with 90% of them saying so. Of the people older than 55, 31% claim to be proud of being a Belgian. Particular opposition to secession comes from women, people employed in services, the highest social classes and people from big families. Strongest of all opposing the notion are housekeepers—both housewives and house husbands.",
"title": "Government and politics"
},
{
"paragraph_id": 54,
"text": "In 2012, the Flemish government drafted a \"Charter for Flanders\" (Handvest voor Vlaanderen) of which the first article says \"Vlaanderen is een deelstaat van de federale Staat België en maakt deel uit van de Europese Unie.\" (\"Flanders is a component state of the federal State of Belgium and is part of the European Union\").",
"title": "Government and politics"
},
{
"paragraph_id": 55,
"text": "Flanders shares its borders with Wallonia in the south, Brussels being an enclave within the Flemish Region. The rest of the border is shared with the Netherlands (Zeelandic Flanders in Zeeland, North Brabant and Limburg) in the north and east, and with France (French Flanders in Hauts-de-France) and the North Sea in the west. Voeren is an exclave of Flanders between Wallonia and the Netherlands, while Baarle-Hertog in Flanders forms a complicated series of enclaves and exclaves with Baarle-Nassau in the Netherlands. Germany, although bordering Wallonia and close to Voeren in Limburg, does not share a border with Flanders. The German-speaking Community of Belgium, also close to Voeren, does not border Flanders either. (The commune of Plombières, majority French speaking, lies between them.)",
"title": "Geography"
},
{
"paragraph_id": 56,
"text": "Flanders is a highly urbanised area, lying completely within the Blue Banana. Antwerp, Ghent, Bruges and Leuven are the largest cities of the Flemish Region. Antwerp has a population of more than 500,000 citizens and is the largest city, Ghent has a population of 250,000 citizens, followed by Bruges with 120,000 citizens and Leuven counts almost 100,000 citizens.",
"title": "Geography"
},
{
"paragraph_id": 57,
"text": "Brussels is a part of Flanders as far as community matters are concerned, but does not belong to the Flemish Region.",
"title": "Geography"
},
{
"paragraph_id": 58,
"text": "Flanders has two main geographical regions: the coastal Yser basin plain in the north-west and a central plain. The first consists mainly of sand dunes and clayey alluvial soils in the polders. Polders are areas of land, close to or below sea level that have been reclaimed from the sea, from which they are protected by dikes or, a little further inland, by fields that have been drained with canals. With similar soils along the lowermost Scheldt basin starts the central plain, a smooth, slowly rising fertile area irrigated by many waterways that reaches an average height of about five metres (16 feet) above sea level with wide valleys of its rivers upstream as well as the Campine region to the east having sandy soils at altitudes around thirty metres. Near its southern edges close to Wallonia one can find slightly rougher land, richer in calcium, with low hills reaching up to 150 m (490 ft) and small valleys, and at the eastern border with the Netherlands, in the Meuse basin, there are marl caves (mergelgrotten). Its exclave around Voeren between the Dutch border and Wallonia's Liège Province attains a maximum altitude of 288 m (945 ft) above sea level.",
"title": "Geography"
},
{
"paragraph_id": 59,
"text": "The present-day Flemish Region covers 13,625 km (5,261 sq mi) and is divided into five provinces, 22 arrondissements and 308 cities or municipalities.",
"title": "Geography"
},
{
"paragraph_id": 60,
"text": "The province of Flemish Brabant is the most recently created, being formed in 1995 after the splitting of the province of Brabant on a linguistic basis.",
"title": "Geography"
},
{
"paragraph_id": 61,
"text": "Most municipalities are made up of several former municipalities, now called deelgemeenten. The largest municipality (both in terms of population and area) is Antwerp, having more than half a million inhabitants. Its nine deelgemeenten have a special status and are called districts, which have an elected council and a college. While any municipality with more than 100,000 inhabitants can establish districts, only Antwerp did this so far. The smallest municipality (also both in terms of population and area) is Herstappe (Limburg).",
"title": "Geography"
},
{
"paragraph_id": 62,
"text": "The Flemish Community covers both the Flemish Region and, together with the French Community, the Brussels-Capital Region. Brussels, an enclave within the province of Flemish Brabant, is not divided into any province nor is it part of any. It coincides with the Arrondissement of Brussels-Capital and includes 19 municipalities.",
"title": "Geography"
},
{
"paragraph_id": 63,
"text": "The Flemish Government has its own local institutions in the Brussels-Capital Region, being the Vlaamse Gemeenschapscommissie (VGC), and its municipal antennae (Gemeenschapscentra, community centres for the Flemish community in Brussels). These institutions are independent from the educational, cultural and social institutions that depend directly on the Flemish Government. They exert, among others, all those cultural competences that outside Brussels fall under the provinces.",
"title": "Geography"
},
{
"paragraph_id": 64,
"text": "The climate is maritime temperate, with significant precipitation in all seasons (Köppen climate classification: Cfb; the average temperature is 3 °C (37 °F) in January, and 21 °C (70 °F) in July; the average precipitation is 65 millimetres (2.6 inches) in January, and 78 millimetres (3.1 inches) in July).",
"title": "Climate"
},
{
"paragraph_id": 65,
"text": "Total gross regional product (GRP) of Flanders in 2021 was €296 billion (excluding Brussels). Per capita GDP at purchasing power parity was 20% above the EU average. Flemish productivity per capita is about 13% higher than that in Wallonia, and wages are about 7% higher than in Wallonia.",
"title": "Economy"
},
{
"paragraph_id": 66,
"text": "Flanders was one of the first continental European areas to undergo the Industrial Revolution, in the 19th century. Initially, the modernization relied heavily on food processing and textile. However, by the 1840s the textile industry of Flanders was in severe crisis and there was famine in Flanders (1846–50). After World War II, Antwerp and Ghent experienced a fast expansion of the chemical and petroleum industries. Flanders also attracted a large majority of foreign investments in Belgium. The 1973 and 1979 oil crises sent the economy into a recession. The steel industry remained in relatively good shape. In the 1980s and 90s, the economic centre of Belgium continued to shift further to Flanders and is now concentrated in the populous Flemish Diamond area. Nowadays, the Flemish economy is mainly service-oriented.",
"title": "Economy"
},
{
"paragraph_id": 67,
"text": "Belgium is a founding member of the European Coal and Steel Community in 1951, which evolved into the present-day European Union. In 1999, the euro, the single European currency, was introduced in Flanders. It replaced the Belgian franc in 2002.",
"title": "Economy"
},
{
"paragraph_id": 68,
"text": "The Flemish economy is strongly export-oriented, in particular of high value-added goods. The main imports are food products, machinery, rough diamonds, petroleum and petroleum products, chemicals, clothing and accessories, and textiles. The main exports are automobiles, food and food products, iron and steel, finished diamonds, textiles, plastics, petroleum products, and non-ferrous metals. Since 1922, Belgium and Luxembourg have been a single trade market within a customs and currency union—the Belgium–Luxembourg Economic Union. Its main trading partners are Germany, the Netherlands, France, the United Kingdom, Italy, the United States, and Spain.",
"title": "Economy"
},
{
"paragraph_id": 69,
"text": "Antwerp is the number one diamond market in the world, diamond exports account for roughly 1/10 of Belgian exports. The Antwerp-based BASF plant is the largest BASF-base outside Germany, and accounts on its own for about 2% of Belgian exports. Other industrial and service activities in Antwerp include car manufacturing, telecommunications, photographic products.",
"title": "Economy"
},
{
"paragraph_id": 70,
"text": "Flanders is home to several science and technology institutes, such as IMEC, VITO, Flanders DC, and Flanders Make.",
"title": "Economy"
},
{
"paragraph_id": 71,
"text": "Flanders has developed an extensive transportation infrastructure of ports, canals, railways and highways. The Port of Antwerp is the second-largest in Europe, after Rotterdam. Other ports are Bruges-Zeebrugge, Ghent and Ostend, of which Zeebrugge and Ostend are located at the Belgian coast [nl].",
"title": "Economy"
},
{
"paragraph_id": 72,
"text": "Whereas railways are managed by the federal National Railway Company of Belgium, other public transport (De Lijn) and roads are managed by the Flemish region.",
"title": "Economy"
},
{
"paragraph_id": 73,
"text": "The main airport is Brussels Airport, the only other civilian airport with scheduled services in Flanders is Antwerp International Airport, but there are two other ones with cargo or charter flights: Ostend-Bruges International Airport and Kortrijk-Wevelgem International Airport, both in West Flanders.",
"title": "Economy"
},
{
"paragraph_id": 74,
"text": "The highest population density is found in the area circumscribed by the Brussels-Antwerp-Ghent-Leuven agglomerations that surround Mechelen and is known as the Flemish Diamond, in other important urban centres as Bruges, Roeselare and Kortrijk to the west, and notable centres Turnhout and Hasselt to the east. On 1 January 2015, the Flemish Region had a population of 6,444,127 and about 15% of the 1,175,173 people in the Brussels Region are also considered Flemish.",
"title": "Demographics"
},
{
"paragraph_id": 75,
"text": "The Belgian constitution provides for freedom of religion, and the various governments in general respect this right in practice. Since independence, Catholicism, counterbalanced by strong freethought movements, has had an important role in Belgium's politics, since the 20th century in Flanders mainly via the Christian trade union ACV and the Christian Democratic and Flemish party (CD&V). According to the 2001 Survey and Study of Religion, about 47 percent of the Belgian population identify themselves as belonging to the Catholic Church, while Islam is the second-largest religion at 3.5 percent. A 2006 inquiry in Flanders, considered more religious than Wallonia, showed that 55% considered themselves religious, and 36% believed that God created the world.",
"title": "Demographics"
},
{
"paragraph_id": 76,
"text": "Jews have been present in Flanders for a long time, in particular in Antwerp. More recently, Muslims have immigrated to Flanders, now forming the largest minority religion with about 3.9% in the Flemish Region and 25% in Brussels. The largest Muslim group is Moroccan in origin, while the second largest is Turkish in origin.",
"title": "Demographics"
},
{
"paragraph_id": 77,
"text": "Education is compulsory from the ages of six to 18, but most Flemings continue to study until around 23. Among the Organisation for Economic Co-operation and Development countries in 1999, Flanders had the third-highest proportion of 18- to 21-year-olds enrolled in postsecondary education. Flanders also scores very high in international comparative studies on education. Its secondary school students consistently rank among the top three for mathematics and science. However, the success is not evenly spread: ethnic minority youth score consistently lower, and the difference is larger than in most comparable countries.",
"title": "Demographics"
},
{
"paragraph_id": 78,
"text": "Mirroring the historical political conflicts between the secular and Catholic segments of the population, the Flemish educational system is split into a secular branch controlled by the communities, the provinces, or the municipalities, and a subsidised religious—mostly Catholic—branch. For the subsidised schools, the main costs such as the teacher's wages and building maintenance completely borne by the Flemish government. Subsidised schools are also free to determine their own teaching and examination methods, but in exchange, they must be able to prove that certain minimal terms are achieved by keeping records of the given lessons and exams. It should however be noted that—at least for the Catholic schools—the religious authorities have very limited power over these schools, neither do the schools have a lot of power on their own. Instead, the Catholic schools are a member of the Catholic umbrella organisation VSKO [nl]. The VSKO determines most practicalities for schools, like the advised schedules per study field. However, there's freedom of education in Flanders, which doesn't only mean that every pupil can choose his/her preferred school, but also that every organisation can found a school, and even be subsidised when abiding the different rules. This resulted also in some smaller school systems follow 'methodical pedagogies' (e.g. Steiner, Montessori, or Freinet) or serve the Jewish and Protestant minorities.",
"title": "Demographics"
},
{
"paragraph_id": 79,
"text": "During the school year 2003–2004, 68.30% of the total population of children between the ages of six and 18 went to subsidized private schools (both religious schools or 'methodical pedagogies' schools).",
"title": "Demographics"
},
{
"paragraph_id": 80,
"text": "The big freedom given to schools results in a constant competition to be the \"best\" school. The schools get certain reputations amongst parents and employers. So it's important for schools to be the best school since the subsidies depend on the number of pupils. This competition has been pinpointed as one of the main reasons for the high overall quality of the Flemish education. However, the importance of a school's reputation also makes schools more eager to expel pupils that don't perform well. Resulting in the ethnic differences and the well-known waterfall system: pupils start high in the perceived hierarchy, and then drop towards more professional oriented directions or \"easier\" schools when they can't handle the pressure any longer.",
"title": "Demographics"
},
{
"paragraph_id": 81,
"text": "Healthcare is a federal matter, but the Flemish Government is responsible for care, health education and preventive care.",
"title": "Demographics"
},
{
"paragraph_id": 82,
"text": "The standard language in Flanders is Dutch; spelling and grammar are regulated by a single authority, the Dutch Language Union (Nederlandse Taalunie), comprising a committee of ministers of the Flemish and Dutch governments, their advisory council of appointed experts, a controlling commission of 22 parliamentarians, and a secretariate. The term Flemish can be applied to the Dutch spoken in Flanders; it shows many regional and local variations.",
"title": "Culture"
},
{
"paragraph_id": 83,
"text": "The biggest difference between Belgian Dutch and Dutch used in the Netherlands is in the pronunciation of words. The Dutch spoken in the north of the Netherlands is typically described as being \"sharper\", while Belgian Dutch is \"softer\". In Belgian Dutch, there are also fewer vowels pronounced as diphthongs. When it comes to spelling, Belgian Dutch language purists historically avoided writing words using a French spelling, or searched for specific translations of words derived from French, while the Dutch often retain the French spelling. For example, the Dutch word \"punaise\" (English: Drawing pin) is derived directly from the French language. Belgian Dutch language purists have lobbied to accept the word \"duimspijker\" (literally: thumb spike) as official Dutch, though the Dutch Language Union never accepted it as standard Dutch. Other proposals by purists were sometimes accepted, and sometimes reverted again in later spelling revisions. As language purists were quite often professionally involved in language (e.g. as a teacher), these unofficial purist translations are found more often in Belgian Dutch texts.",
"title": "Culture"
},
{
"paragraph_id": 84,
"text": "The earliest example of literature in non-standardized dialects in the current area of Flanders is Hendrik van Veldeke's Eneas Romance, the first courtly romance in a Germanic language (12th century). With a writer of Hendrik Conscience's stature, Flemish literature rose ahead of French literature in Belgium's early history. Guido Gezelle not only explicitly referred to his writings as Flemish but used it in many of his poems, and strongly defended it:",
"title": "Culture"
},
{
"paragraph_id": 85,
"text": "Original from kleengedichtjes (1860?)",
"title": "Culture"
},
{
"paragraph_id": 86,
"text": "Gij zegt dat 't vlaamsch te niet zal gaan: 't en zal! dat 't waalsch gezwets zal boven slaan: 't en zal! Dat hopen, dat begeren wij: dat zeggen en dat zweren wij: zoo lange als wij ons weren, wij: 't en zal, 't en zal, 't en zal!",
"title": "Culture"
},
{
"paragraph_id": 87,
"text": "You say Flemish will fade away: It shan't! that Walloon twaddle will have its way: It shan't! This we hope, for this we hanker: this we say and this we vow: as long as we fight back, we: It shan't, It shan't, It shan't!",
"title": "Culture"
},
{
"paragraph_id": 88,
"text": "The distinction between Dutch and Flemish literature, often perceived politically, is also made on intrinsic grounds by some experts such as Kris Humbeeck, professor of literature at the University of Antwerp. Nevertheless, most Dutch-language literature read (and appreciated to varying degrees) in Flanders is the same as that in the Netherlands.",
"title": "Culture"
},
{
"paragraph_id": 89,
"text": "Influential Flemish writers include Ernest Claes, Stijn Streuvels and Felix Timmermans. Their novels mostly describe rural life in Flanders in the 19th century and at beginning of the 20th. Widely read by the older generations, they are considered somewhat old-fashioned by present-day critics. Some famous Flemish writers of the early 20th century wrote in French, including Nobel Prize winners (1911) Maurice Maeterlinck and Emile Verhaeren. They were followed by a younger generation, including Paul van Ostaijen and Gaston Burssens, who activated the Flemish Movement. Still widely read and translated into other languages (including English) are the novels of authors such as Willem Elsschot, Louis Paul Boon and Hugo Claus. The recent crop of writers includes the novelists Tom Lanoye and Herman Brusselmans, and poets such as the married couple Herman de Coninck and Kristien Hemmerechts.",
"title": "Culture"
},
{
"paragraph_id": 90,
"text": "At the creation of the Belgian state, French was the only official language. Historically Flanders was a Dutch-speaking region. For a long period, French was used as a second language and, like elsewhere in Europe, commonly spoken among the aristocracy. There is still a French-speaking minority in Flanders, especially in the municipalities with language facilities, along the language border and the Brussels periphery (Vlaamse Rand), though many of them are French-speakers that migrated to Flanders in recent decades.",
"title": "Culture"
},
{
"paragraph_id": 91,
"text": "In French Flanders, French is the only official language and now the native language of the majority of the population, but there is still a minority of Dutch-speakers living there. French is also the primary language in the officially bilingual Brussels Capital Region (see Francization of Brussels).",
"title": "Culture"
},
{
"paragraph_id": 92,
"text": "Many Flemings are also able to speak French, children in Flanders generally get their first French lessons in the 5th primary year (normally around 10 years). But the current lack of French outside the educational context makes it hard to maintain a decent level of French. As such, the proficiency of French is declining. Flemish pupils are also obligated to follow English lessons as their third language. Normally from the second secondary year (around 14 years old), but the ubiquity of English in movies, music, IT and even advertisements makes it easier to learn and maintain the English language.",
"title": "Culture"
},
{
"paragraph_id": 93,
"text": "The public radio and television broadcaster in Flanders is VRT, which operates the TV channels één, Canvas, Ketnet, OP12 and (together with the Netherlands) BVN. Flemish provinces each have up to two TV channels as well. Commercial television broadcasters include vtm and Vier (VT4). Popular TV series are for example Thuis and F.C. De Kampioenen.",
"title": "Culture"
},
{
"paragraph_id": 94,
"text": "The five most successful Flemish films were Loft (2008; 1,186,071 visitors), Koko Flanel (1990; 1,082,000 tickets sold), Hector (1987; 933,000 tickets sold), Daens (1993; 848,000 tickets sold) and De Zaak Alzheimer (2003; 750,000 tickets sold). The first and last ones were directed by Erik Van Looy, and an American remake is being made of both of them, respectively The Loft (2012) and The Memory of a Killer. The other three ones were directed by Stijn Coninx.",
"title": "Culture"
},
{
"paragraph_id": 95,
"text": "Newspapers are grouped under three main publishers: De Persgroep with Het Laatste Nieuws, the most popular newspaper in Flanders, De Morgen and De Tijd. Then Corelio with De Gentenaar [nl], the oldest extant Flemish newspaper, Het Nieuwsblad and De Standaard. Lastly, Concentra publishes Gazet van Antwerpen and Het Belang van Limburg.",
"title": "Culture"
},
{
"paragraph_id": 96,
"text": "Magazines include Knack and HUMO.",
"title": "Culture"
},
{
"paragraph_id": 97,
"text": "Association football (soccer) is one of the most popular sports in both parts of Belgium, together with cycling, tennis, swimming and judo.",
"title": "Culture"
},
{
"paragraph_id": 98,
"text": "In cycling, the Tour of Flanders is considered one of the five \"Monuments\". Other \"Flanders Classics\" races include Dwars door Vlaanderen and Gent–Wevelgem. Eddy Merckx is widely regarded as the greatest cyclist of all time, with five victories in the Tour de France and numerous other cycling records. His hour speed record (set in 1972) stood for 12 years.",
"title": "Culture"
},
{
"paragraph_id": 99,
"text": "Jean-Marie Pfaff, a former Belgian goalkeeper, is considered one of the greatest in the history of football (soccer).",
"title": "Culture"
},
{
"paragraph_id": 100,
"text": "Kim Clijsters (as well as the French-speaking Belgian Justine Henin) was Player of the Year twice in the Women's Tennis Association as she was ranked the number one female tennis player.",
"title": "Culture"
},
{
"paragraph_id": 101,
"text": "Kim Gevaert and Tia Hellebaut are notable track and field stars from Flanders.",
"title": "Culture"
},
{
"paragraph_id": 102,
"text": "The 1920 Summer Olympics were held in Antwerp. Jacques Rogge was president of the International Olympic Committee from 2001 to 2013.",
"title": "Culture"
},
{
"paragraph_id": 103,
"text": "The Flemish government agency for sports is Bloso.",
"title": "Culture"
},
{
"paragraph_id": 104,
"text": "Flanders is known for its music festivals, like the annual Rock Werchter, Tomorrowland and Pukkelpop. The Gentse Feesten is another very large yearly event.",
"title": "Culture"
},
{
"paragraph_id": 105,
"text": "The best-selling Flemish group or artist is the (Flemish-Dutch) group 2 Unlimited, followed by (Italian-born) Rocco Granata, Technotronic, Helmut Lotti and Vaya Con Dios.",
"title": "Culture"
},
{
"paragraph_id": 106,
"text": "The weekly charts of best-selling singles is the Ultratop 50. \"Kvraagetaan\" by the Fixkes holds the current record for longest time at No. 1 on the chart.",
"title": "Culture"
}
]
| Flanders is the Dutch-speaking northern portion of Belgium and one of the communities, regions and language areas of Belgium. However, there are several overlapping definitions, including ones related to culture, language, politics, and history, and sometimes involving neighbouring countries. The demonym associated with Flanders is Fleming, while the corresponding adjective is Flemish, which is also the name of the local dialect. The official capital of Flanders is the City of Brussels, although the Brussels-Capital Region that includes it has an independent regional government. The powers of the government of Flanders consist, among others, of economic affairs in the Flemish Region and the community aspects of Flanders life in Brussels, such as Flemish culture and education. Geographically, Flanders is mainly flat, and has a small section of coast on the North Sea. It borders the French department of Nord to the south-west near the coast, the Dutch provinces of Zeeland, North Brabant and Limburg to the north and east, and the Walloon provinces of Hainaut, Walloon Brabant and Liège to the south. Despite accounting for only 45% of Belgium's territory, more than half the population lives there – 6,653,062 out of 11,431,406 Belgian inhabitants. Much of Flanders is agriculturally fertile and densely populated at 483/km2 (1,250/sq mi). The Brussels Region is an officially bilingual enclave within the Flemish Region. Flanders also has exclaves of its own: Voeren in the east is between Wallonia and the Netherlands and Baarle-Hertog in the north consists of 22 exclaves surrounded by the Netherlands. Not including Brussels, there are five present-day Flemish provinces: Antwerp, East Flanders, Flemish Brabant, Limburg and West Flanders. The official language is Dutch. The area of today's Flanders has figured prominently in European history since the Middle Ages. The original County of Flanders stretched around AD 900 from the Strait of Dover to the Scheldt estuary and expanded from there. This county also still corresponds roughly with the modern-day Belgian provinces of West Flanders and East Flanders, along with neighbouring parts of France and the Netherlands. In this period, cities such as Ghent and Bruges of the historic County of Flanders, and later Antwerp of the Duchy of Brabant made it one of the richest and most urbanised parts of Europe, trading, and weaving the wool of neighbouring lands into cloth for both domestic use and export. As a consequence, a very sophisticated culture developed, with impressive achievements in the arts and architecture, rivaling those of northern Italy. Belgium was one of the centres of the 19th-century Industrial Revolution, but this occurred mainly in French-speaking Wallonia. In the second half of the 20th century, and due to massive national investments in port infrastructure, Flanders' economy modernised rapidly, and today Flanders and Brussels are much wealthier than Wallonia, being among the wealthiest regions in Europe and the world. In accordance with late 20th century Belgian state reforms, Flanders was made into two political entities: the Flemish Region and the Flemish Community. These entities were merged, although geographically the Flemish Community, which has a broader cultural mandate, covers Brussels, whereas the Flemish Region does not. | 2001-09-12T18:42:22Z | 2023-12-08T19:24:08Z | [
"Template:Use dmy dates",
"Template:Lang-nl",
"Template:Interlanguage link",
"Template:Cite journal",
"Template:Wikivoyage inline",
"Template:Efn",
"Template:Webarchive",
"Template:ISBN",
"Template:Infobox ethnonym",
"Template:IPA-nl",
"Template:Nowrap",
"Template:Cite web",
"Template:Citation",
"Template:Commons category-inline",
"Template:IPAc-en",
"Template:Convert",
"Template:TOC limit",
"Template:Citation needed",
"Template:Reflist",
"Template:Main",
"Template:Small",
"Template:Notelist",
"Template:Cite book",
"Template:Cite news",
"Template:Cite EB1911",
"Template:Authority control",
"Template:Short description",
"Template:Redirect2",
"Template:Use British English",
"Template:Lang",
"Template:In lang",
"Template:Infobox settlement",
"Template:Further",
"Template:Flag",
"Template:Flanders topics",
"Template:Portal bar"
]
| https://en.wikipedia.org/wiki/Flanders |
10,879 | Freud (disambiguation) | Sigmund Freud (1856–1939) was the inventor of psychoanalysis, psychosexual stages, and the personality theory of Ego, Superego, and Id.
Freud may also refer to:
The Freud family:
Other people: | [
{
"paragraph_id": 0,
"text": "Sigmund Freud (1856–1939) was the inventor of psychoanalysis, psychosexual stages, and the personality theory of Ego, Superego, and Id.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Freud may also refer to:",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Freud family:",
"title": "People with the surname"
},
{
"paragraph_id": 3,
"text": "Other people:",
"title": "People with the surname"
}
]
| Sigmund Freud (1856–1939) was the inventor of psychoanalysis, psychosexual stages, and the personality theory of Ego, Superego, and Id. Freud may also refer to: Freud (crater), a lunar crater
Freud, a chimpanzee in the Kasakela community | 2001-07-12T23:44:37Z | 2023-11-18T16:53:38Z | [
"Template:Wiktionary",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Freud_(disambiguation) |
10,880 | Plurality voting | Plurality voting refers to electoral systems in which a candidate who polls more than any other (that is, receives a plurality) is elected. In systems based on single-member districts, it elects just one member per district and may also be referred to as first-past-the-post (FPTP), single-member plurality (SMP/SMDP), single-choice voting (an imprecise term, as non-plurality voting systems may also use a single choice), simple plurality or relative majority (as opposed to an absolute majority, where more than half of the votes is needed; that is called majority voting). A system that elects multiple winners at once with the plurality rule, where each voter casts multiple X votes in a multi-seat district, is referred to as plurality block voting. A semi-proportional system that elects multiple winners at once with the plurality rule, where each voter casts just one vote in a multi-seat district, is known as single non-transferable voting.
Plurality voting is distinguished from majority voting, in which a winning candidate must receive an absolute majority of votes: more than half of all votes (more than all other candidates combined if each voter has one vote). Under single-winner plurality voting, the leading candidate, whether or not they have a majority of votes, is elected. Not every single-winner winner-takes-all system is plurality voting; for example, instant-runoff voting is a non-plurality winner-takes-all system, because not all votes are taken as initially cast.
Also, some plurality voting methods are close to proportional. For example limited voting and single non-transferable vote use plurality rules but are considered semi-proportional systems.
Plurality voting is still used to elect members of a legislative assembly or executive officers in only a handful of countries, mostly in the English-speaking world, for historical reasons. It is used in most elections in the United States, for the lower house (Lok Sabha) in India, in elections to the British House of Commons and English local elections in the United Kingdom, and in federal and provincial elections in Canada. An example of "winner-take-all" plurality voting is the system used at the state level to elect most of the Electoral College in United States presidential elections. This system is called party block voting, also known as the general ticket.
Proponents of electoral reform generally argue against plurality voting systems in favour of either other single winner systems (such as ranked-choice voting methods) or proportional representation (such as the single transferable vote or open list PR).
In single-winner plurality voting, each voter is allowed to vote for only one candidate, and the winner of the election is the candidate who represents a plurality of voters or, in other words, received more votes than any other candidate. That makes plurality voting among the simplest of all electoral systems for voters and vote counting officials; however, the drawing of district boundary lines can be contentious in the plurality system.
In an election for a legislative body with single-member seats, each voter in a geographically defined electoral district may vote for one candidate from a list of the candidates who are competing to represent that district. Under the plurality system, the winner of the election then becomes the representative of the whole electoral district and serves with representatives of other electoral districts.
In an election for a single seat, such as for president in a presidential system, the same style of ballot is used, and the winner is whichever candidate receives the highest number of votes.
In the two-round system, usually the top two candidates in the first ballot progress to the second round, also called the runoff.
In a multiple-member plurality election with n seats available, the winners are the n candidates with the highest numbers of votes. The rules may allow the voter to vote for one candidate, up to n candidates, or some other number.
Single-member plurality voting, often known as first past the post, is a simple system to use. The candidate who gets more votes than any other candidate is the winner. Depending on the number of candidates and their popularity within the community, it often happens that the winning candidate gets fewer votes than all the other candidates combined. This is sometimes called the spoiler effect.
Multi-member plurality elections are only slightly more complicated. The n candidates who get more votes than the others are elected.
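To make the counting rule concrete, here is a minimal sketch in Python; the ballot data and candidate names are invented for illustration, and ties at the cut-off are left unresolved:

```python
from collections import Counter

def plurality_winners(ballots, seats=1):
    """Return the `seats` candidates with the most votes.

    Each ballot is a single candidate name (one X vote per voter).
    With seats=1 this is single-member plurality (first past the post);
    with seats=n it is a simple multi-member plurality count.
    """
    tally = Counter(ballots)          # votes per candidate
    ranked = tally.most_common()      # (candidate, votes), highest first
    return [candidate for candidate, _ in ranked[:seats]]

# Invented ballots: A wins the single seat with only a plurality (42 of 100),
# even though B and C together outpoll A.
ballots = ["A"] * 42 + ["B"] * 33 + ["C"] * 25
print(plurality_winners(ballots))            # ['A']
print(plurality_winners(ballots, seats=2))   # ['A', 'B']
```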
Generally, plurality ballots can be categorized into two forms. The simplest form is a blank ballot in which the name of a candidate(s) is written in by hand. A more structured ballot will list all the candidates and allow a mark to be made next to the name of a single candidate (or more than one, in some cases); however, a structured ballot can also include space for a write-in candidate.
Plurality voting is used for local and/or national elections in 43 of the 193 countries that are members of the United Nations. It is particularly prevalent in the United Kingdom, the United States, Canada and India.
The United Kingdom, like the United States and Canada, uses single-member districts as the base for national elections. Each electoral district (constituency) chooses one member of parliament, the candidate who gets the most votes, whether or not they get at least 50% of the votes cast ("first past the post"). In 1992, for example, a Liberal Democrat in Scotland won a seat (Inverness, Nairn and Lochaber) with just 26% of the votes. The system of single-member districts with plurality winners tends to produce two large political parties. In countries with proportional representation there is not such a great incentive to vote for a large party, which contributes to multi-party systems.
Scotland, Wales and Northern Ireland use the first-past-the-post system for UK general elections but versions of proportional representation for elections to their own assemblies and parliaments. All of the UK used one form or another of proportional representation for European Parliament elections.
The countries that inherited the British majoritarian system tend toward two large parties: one left and the other right, such as the U.S. Democrats and Republicans. Canada is an exception, with three major political parties consisting of the New Democratic Party, which is to the left; the Conservative Party, which is to the right; and the Liberal Party, which is slightly off-centre but to the left. A fourth party that no longer has major party status is the separatist Bloc Québécois party, which is territorial and runs only in Quebec. New Zealand once used the British system, which yielded two large parties as well. It also left many New Zealanders unhappy because other viewpoints were ignored, which made the New Zealand Parliament in 1993 adopt a new electoral law modelled on Germany's system of proportional representation (PR) with a partial selection by constituencies. New Zealand soon developed a more complex party system.
After the 2015 UK general election, there were calls from UKIP for a switch to the use of proportional representation after it received 3,881,129 votes that produced only one MP. The Green Party was similarly underrepresented, which contrasted greatly with the SNP, a Scottish separatist party that received only 1,454,436 votes but won 56 seats because of more geographically concentrated support.
This is a general example, using population percentages taken from one state for illustrative purposes.
Imagine that Tennessee is having an election on the location of its capital. The population of Tennessee is concentrated around its four major cities, which are spread throughout the state. For this example, suppose that the entire electorate lives in these four cities and that everyone wants to live as near to the capital as possible.
The candidates for the capital are:
The preferences of the voters would be divided like this:
If each voter in each city naively selects one city on the ballot (Memphis voters select Memphis, Nashville voters select Nashville, and so on), Memphis will be selected, as it has the most votes (42%). The system does not require that the winner have a majority, only a plurality. Memphis wins because it has the most votes, even though 58% of the voters in the example preferred Memphis least. That problem does not arise with the two-round system, in which Nashville would have won. (In practice, with FPTP, many voters in Chattanooga and Knoxville are likely to vote tactically for Nashville: see below.)
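A small sketch of that tally: the Memphis share (42%) and the combined 58% are stated above, while the way the remaining 58% splits among the other three cities is assumed here purely for illustration.

```python
# First-choice shares by city; only Memphis's 42% (and the 58% remainder)
# are given in the text, the three-way split of the 58% is an assumption.
first_choices = {"Memphis": 42, "Nashville": 26, "Knoxville": 17, "Chattanooga": 15}

# Plurality (first past the post): the largest single tally wins outright.
fptp_winner = max(first_choices, key=first_choices.get)
print(fptp_winner)                      # Memphis, with 42% of the vote

# Two-round system: the top two advance; in this example the supporters of
# the eliminated cities all prefer central Nashville over distant Memphis.
runoff = {"Memphis": 42, "Nashville": 26 + 17 + 15}
print(max(runoff, key=runoff.get))      # Nashville, with 58%
```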
To a much greater extent than many other electoral methods, plurality electoral systems encourage tactical voting techniques like "compromising". Voters are under pressure to vote for one of the two candidates most likely to win, even if their true preference is neither of them; because a vote for any other candidate is unlikely to lead to the preferred candidate being elected. This will instead reduce support for one of the two major candidates whom the voter might prefer to the other. Electors who prefer not to waste their vote by voting for a candidate with a very low chance of winning their constituency vote for their lesser preferred candidate who has a higher chance of winning. The minority party will then simply take votes away from one of the major parties, which could change the outcome and gain nothing for the voters. Any other party will typically need to build up its votes and credibility over a series of elections before it is seen as electable.
In the Tennessee example, if all the voters for Chattanooga and Knoxville had instead voted for Nashville, Nashville would have won (with 58% of the vote). That would have only been the third choice for those voters, but voting for their respective first choices (their own cities) actually results in their fourth choice (Memphis) being elected.
The difficulty is sometimes summed up in an extreme form, as "All votes for anyone other than the second place are votes for the winner". That is because by voting for other candidates, voters have denied those votes to the second-place candidate, who could have won had they received them. It is often claimed by United States Democrats that Democrat Al Gore lost the 2000 Presidential Election to Republican George W. Bush because some voters on the left voted for Ralph Nader of the Green Party, who, exit polls indicated, would have preferred Gore at 45% to Bush at 27%, with the rest not voting in Nader's absence.
That thinking is illustrated by elections in Puerto Rico and its three principal voter groups: the Independentistas (pro-independence), the Populares (pro-commonwealth), and the Estadistas (pro-statehood). Historically, there has been a tendency for Independentista voters to elect Popular candidates and policies. This results in more Popular victories even though the Estadistas have the most voters on the island. It is so widely recognised that the Puerto Ricans sometimes call the Independentistas who vote for the Populares "melons" in reference to the party colours, because the fruit is green on the outside but red on the inside.
Such tactical voting can cause significant perturbation to the system:
Proponents of other single-winner electoral systems argue that their proposals would reduce the need for tactical voting and reduce the spoiler effect. Other systems include the commonly used two-round system of runoffs and instant-runoff voting, along with less-tested and perhaps less-understood systems such as approval voting, score voting and Condorcet methods.
Duverger's law is a theory that constituencies that use first-past-the-post systems will eventually become a two-party system after enough time. The two dominating parties regularly alternate in power and easily win constituencies due to the structure of plurality voting systems. This puts smaller parties who struggle to meet the threshold of votes at a disadvantage, and inhibits growth.
Plurality voting tends to reduce the number of political parties to a greater extent than most other methods do, making it more likely that a single party will hold a majority of legislative seats. (In the United Kingdom, 22 out of 27 general elections since 1922 have produced a single-party majority government or, in the case of the National Governments, a parliament from which such a single-party government could have been drawn.)
Plurality voting's tendency toward fewer parties and more-frequent majorities of one party can also produce a government that may not consider as wide a range of perspectives and concerns. It is entirely possible that a voter finds all major parties to have similar views on issues, and that a voter does not have a meaningful way of expressing a dissenting opinion through their vote.
As fewer choices are offered to voters, voters may vote for a candidate although they disagree with them because they disagree even more with their opponents. That will make candidates less closely reflect the viewpoints of those who vote for them.
Furthermore, one-party rule is more likely to lead to radical changes in government policy even though the changes are favoured only by a plurality or a bare majority of the voters, but a multi-party system usually requires more consensus to make dramatic changes in policy.
Wasted votes are those cast for candidates who are virtually sure to lose in a safe seat, and votes cast for winning candidates in excess of the number required for victory. Plurality voting systems function on a "winner-takes-all" principle, which means that the party of the losing candidate in each riding receives no representation in government, regardless of the number of votes they received. For example, in the UK general election of 2005, 52% of votes were cast for losing candidates and 18% were excess votes, a total of 70% wasted votes. That is perhaps the most fundamental criticism of FPTP since a large majority of votes may play no part in determining the outcome. Alternative electoral systems, such as Proportional Representation, attempt to ensure that almost all of the votes are effective in influencing the result, which minimizes vote wastage. Such a system decreases disproportionality in election results and is credited for increasing voter turnout.
Political apathy is prevalent in plurality voting systems such as FPTP. Studies suggest that plurality voting systems fail to incentivize citizens to vote, which results in very low voter turnout. Under this system, many people feel that voting is an empty ritual that has no influence on the composition of the legislature. Voters are not assured that the number of seats that political parties are accorded will reflect the popular vote, which disincentivizes them from voting, sends the message that their votes are not valued, and makes participation in elections seem unnecessary.
This is when a voter decides to vote in a way that does not represent their true preference or choice, motivated by an intent to influence election outcomes. Strategic behaviour by voters can and does influence the outcome of voting in different plurality voting systems. Strategic behaviour is when a voter casts their vote for a different party or alternative district/constituency/riding in order to induce, in their opinion, a better outcome. An example of this is when a person really likes party A but votes for party B because they do not like party C or D or because they believe that party A has little to no chance of winning. This can cause the outcome of very close votes to be swayed for the wrong reason. This might have had an impact on the 2000 United States election that was essentially decided by fewer than 600 votes, with the winner being President Bush. When voters behave in a strategic way and expect others to do the same, they end up voting for one of the two leading candidates, making the Condorcet alternative more likely to be elected. The prevalence of strategic voting in an election makes it difficult to evaluate the true political state of the population, as their true political ideologies are not reflected in their votes.
Because FPTP permits a high level of wasted votes, an election under FPTP is easily gerrymandered unless safeguards are in place. In gerrymandering, a party in power deliberately manipulates constituency boundaries to increase the number of seats that it wins unfairly.
In brief, if a governing party G wishes to reduce the seats that will be won by opposition party O in the next election, it can create a number of constituencies in each of which O has an overwhelming majority of votes. O will win these seats, but many of its voters will waste their votes. Then, the rest of the constituencies are designed to have small majorities for G. Few G votes are wasted, and G will win many seats by small margins. As a result of the gerrymander, O's seats have cost it more votes than G's seats.
The efficiency gap measures gerrymandering and has been scrutinized in the Supreme Court of the United States. The efficiency gap is the difference between the two parties' wasted votes, divided by the total number of votes.
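As a rough sketch of that formula, the following assumes a two-party contest and uses invented district results; wasted votes follow the definition given earlier (all votes for the losing party, plus the winner's votes beyond the majority needed in each district).

```python
def efficiency_gap(districts):
    """Efficiency gap for a two-party contest.

    `districts` is a list of (votes_A, votes_B) tuples, one per seat.
    A vote is wasted if it is cast for the district's loser, or if it is
    a winner's vote in excess of the majority needed to carry the district.
    """
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        total += a + b
        threshold = (a + b) // 2 + 1      # votes needed to win the district
        if a > b:
            wasted_a += a - threshold     # A's surplus votes
            wasted_b += b                 # all of B's votes
        else:
            wasted_b += b - threshold
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Invented gerrymander: B is "packed" into one safe district and "cracked"
# across three districts it narrowly loses, so B wastes far more votes.
districts = [(30, 70), (55, 45), (55, 45), (55, 45)]
print(round(efficiency_gap(districts), 3))   # -0.28, i.e. the map favours A
```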
The presence of spoilers often gives rise to suspicions that manipulation of the slate has taken place. The spoiler may have received incentives to run. A spoiler may also drop out at the last moment, which induces charges that such an act was intended from the beginning. Uninformed voters do not have the same opportunity to vote strategically as voters who understand all the opposing sides and the pros and cons of voting for each party.
The spoiler effect is the effect of vote splitting between candidates or ballot questions with similar ideologies. One spoiler candidate's presence in the election draws votes from a major candidate with similar politics, which causes a strong opponent of both or several to win. Smaller parties can disproportionately change the outcome of an FPTP election by swinging what is called the 50-50% balance of two party systems by creating a faction within one or both ends of the political spectrum. This shifts the winner of the election from an absolute majority outcome to a plurality outcome. Due to the spoiler effect, the party that holds the unfavourable ideology by the majority will win, as the majority of the population would be split between the two parties with the similar ideology. In comparison, electoral systems that use proportional representation have small groups win only their proportional share of representation.
In August 2008, Sir Peter Kenilorea commented on what he perceived as the flaws of a first-past-the-post electoral system in the Solomon Islands:
An... underlying cause of political instability and poor governance, in my opinion, is our electoral system and its related problems. It has been identified by a number of academics and practitioners that the First Past the Post system is such that a Member elected to Parliament is sometimes elected by a small percentage of voters where there are many candidates in a particular constituency. I believe that this system is part of the reason why voters ignore political parties and why candidates try an appeal to voters' material desires and relationships instead of political parties.... Moreover, this system creates a political environment where a Member is elected by a relatively small number of voters with the effect that this Member is then expected to ignore his party's philosophy and instead look after that core base of voters in terms of their material needs. Another relevant factor that I see in relation to the electoral system is the proven fact that it is rather conducive, and thus has not prevented, corrupt elections practices such as ballot buying.
The United Kingdom continues to use the first-past-the-post electoral system for general elections, and for local government elections in England and Wales. Changes to the UK system have been proposed, and alternatives were examined by the Jenkins Commission in the late 1990s. After the formation of a new coalition government in 2010, it was announced as part of the coalition agreement that a referendum would be held on switching to the alternative vote system. However the alternative vote system was rejected 2-1 by British voters in a referendum held on 5 May 2011.
Canada also uses FPTP for national and provincial elections. In May 2005 the Canadian province of British Columbia had a referendum on abolishing single-member district plurality in favour of multi-member districts with the Single Transferable Vote system after the Citizens' Assembly on Electoral Reform made a recommendation for the reform. The referendum obtained 57% of the vote, but failed to meet the 60% requirement for passing. A second referendum was held in May 2009, this time the province's voters defeated the change with 39% voting in favour.
An October 2007 referendum in the Canadian province of Ontario on adopting a Mixed Member Proportional system, also requiring 60% approval, failed with only 36.9% voting in favour. British Columbia again called a referendum on the issue in 2018 which was defeated by 62% voting to keep current system.
Northern Ireland, Scotland, Wales, the Republic of Ireland, Australia and New Zealand are notable examples of countries within the UK, or with previous links to it, that use non-FPTP electoral systems (Northern Ireland, Scotland and Wales use FPTP in United Kingdom general elections, however).
Nations which have undergone democratic reforms since 1990 but have not adopted the FPTP system include South Africa, almost all of the former Eastern bloc nations, Russia, and Afghanistan.
Countries that use plurality voting to elect the lower or only house of their legislature include: (Some of these may be undemocratic systems where there is effectively only one candidate allowed anyway.)
The fatal flaws of Plurality (first-past-the-post) electoral systems – Proportional Representation Society of Australia | [
{
"paragraph_id": 0,
"text": "Plurality voting refers to electoral systems in which a candidate who polls more than any other (that is, receives a plurality) is elected. In systems based on single-member districts, it elects just one member per district and may also be referred to as first-past-the-post (FPTP), single-member plurality (SMP/SMDP), single-choice voting (an imprecise term as non-plurality voting systems may also use a single choice), simple plurality or relative majority (as opposed to an absolute majority, where more than half of votes is needed, this is called majority voting). A system that elects multiple winners elected at once with the plurality rule and where each voter casts multiple X votes in a multi-seat district is referred to as plurality block voting. A semi-proportional system that elects multiple winners elected at once with the plurality rule and where each voter casts just one vote in a multi-seat district is known as single non-transferable voting.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Plurality voting is distinguished from majority voting, in which a winning candidate must receive an absolute majority of votes: more than half of all votes (more than all other candidates combined if each voter has one vote). Under single-winner plurality voting, the leading candidate, whether or not they have a majority of votes, is elected. Not every single-winner winner-takes-all system is plurality voting; for example, instant-runoff voting is a non-plurality winner-takes-all system, because not all votes are taken as initially cast.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Also, some plurality voting methods are close to proportional. For example limited voting and single non-transferable vote use plurality rules but are considered semi-proportional systems.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Plurality voting is still used to elect members of a legislative assembly or executive officers in only a handful of countries, mostly in the English speaking world, for historical reasons. It is used in most elections in the United States, the lower house (Lok Sabha) in India and elections to the British House of Commons and English local elections in the United Kingdom, and federal and provincial elections in Canada. An example for a \"winner-take-all\" plurality voting is system used at the state-level for election of most of the Electoral College in United States presidential elections. This system is called party block voting, also called the general ticket.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Proponents of electoral reform generally argue against plurality voting systems in favour of either other single winner systems (such as ranked-choice voting methods) or proportional representation (such as the single transferable vote or open list PR).",
"title": ""
},
{
"paragraph_id": 5,
"text": "In single-winner plurality voting, each voter is allowed to vote for only one candidate, and the winner of the election is the candidate who represents a plurality of voters or, in other words, received more votes than any other candidate. That makes plurality voting among the simplest of all electoral systems for voters and vote counting officials; however, the drawing of district boundary lines can be contentious in the plurality system.",
"title": "Voting"
},
{
"paragraph_id": 6,
"text": "In an election for a legislative body with single-member seats, each voter in a geographically defined electoral district may vote for one candidate from a list of the candidates who are competing to represent that district. Under the plurality system, the winner of the election then becomes the representative of the whole electoral district and serves with representatives of other electoral districts.",
"title": "Voting"
},
{
"paragraph_id": 7,
"text": "In an election for a single seat, such as for president in a presidential system, the same style of ballot is used, and the winner is whichever candidate receives the highest number of votes.",
"title": "Voting"
},
{
"paragraph_id": 8,
"text": "In the two-round system, usually the top two candidates in the first ballot progress to the second round, also called the runoff.",
"title": "Voting"
},
{
"paragraph_id": 9,
"text": "In a multiple-member plurality election with n seats available, the winners are the n candidates with the highest numbers of votes. The rules may allow the voter to vote for one candidate, up to n candidates, or some other number.",
"title": "Voting"
},
{
"paragraph_id": 10,
"text": "Single-member plurality voting systems, often known as first past the post, is a simple system to use. The candidate who gets more votes than any of the other candidate(s) is the winning candidate. Depending on the number of candidates and their popularity within the community, it often happens that the winning candidate gets fewer votes than all the other candidates combined. This is sometimes called the spoiler effect.",
"title": "Voting"
},
{
"paragraph_id": 11,
"text": "Multi-member plurality elections are only slightly more complicated. The n candidates who get more votes than the others are elected.",
"title": "Voting"
},
{
"paragraph_id": 12,
"text": "Generally, plurality ballots can be categorized into two forms. The simplest form is a blank ballot in which the name of a candidate(s) is written in by hand. A more structured ballot will list all the candidates and allow a mark to be made next to the name of a single candidate (or more than one, in some cases); however, a structured ballot can also include space for a write-in candidate.",
"title": "Voting"
},
{
"paragraph_id": 13,
"text": "Plurality voting is used for local and/or national elections in 43 of the 193 countries that are members of the United Nations. It is particularly prevalent in the United Kingdom, the United States, Canada and India.",
"title": "Voting"
},
{
"paragraph_id": 14,
"text": "The United Kingdom, like the United States and Canada, uses single-member districts as the base for national elections. Each electoral district (constituency) chooses one member of parliament, the candidate who gets the most votes, whether or not they get at least 50% of the votes cast (\"first past the post\"). In 1992, for example, a Liberal Democrat in Scotland won a seat (Inverness, Nairn and Lochaber) with just 26% of the votes. The system of single-member districts with plurality winners tends to produce two large political parties. In countries with proportional representation there is not such a great incentive to vote for a large party, which contributes to multi-party systems.",
"title": "Voting"
},
{
"paragraph_id": 15,
"text": "Scotland, Wales and Northern Ireland use the first-past-the-post system for UK general elections but versions of proportional representation for elections to their own assemblies and parliaments. All of the UK used one form or another of proportional representation for European Parliament elections.",
"title": "Voting"
},
{
"paragraph_id": 16,
"text": "The countries that inherited the British majoritarian system tend toward two large parties: one left and the other right, such as the U.S. Democrats and Republicans. Canada is an exception, with three major political parties consisting of the New Democratic Party, which is to the left; the Conservative Party, which is to the right; and the Liberal Party, which is slightly off-centre but to the left. A fourth party that no longer has major party status is the separatist Bloc Québécois party, which is territorial and runs only in Quebec. New Zealand once used the British system, which yielded two large parties as well. It also left many New Zealanders unhappy because other viewpoints were ignored, which made the New Zealand Parliament in 1993 adopt a new electoral law modelled on Germany's system of proportional representation (PR) with a partial selection by constituencies. New Zealand soon developed a more complex party system.",
"title": "Voting"
},
{
"paragraph_id": 17,
"text": "After the 2015 UK general election, there were calls from UKIP for a switch to the use of proportional representation after it received 3,881,129 votes that produced only one MP. The Green Party was similarly underrepresented, which contrasted greatly with the SNP, a Scottish separatist party that received only 1,454,436 votes but won 56 seats because of more geographically concentrated support.",
"title": "Voting"
},
{
"paragraph_id": 18,
"text": "This is a general example, using population percentages taken from one state for illustrative purposes.",
"title": "Voting"
},
{
"paragraph_id": 19,
"text": "",
"title": "Voting"
},
{
"paragraph_id": 20,
"text": "Imagine that Tennessee is having an election on the location of its capital. The population of Tennessee is concentrated around its four major cities, which are spread throughout the state. For this example, suppose that the entire electorate lives in these four cities and that everyone wants to live as near to the capital as possible.",
"title": "Voting"
},
{
"paragraph_id": 21,
"text": "The candidates for the capital are:",
"title": "Voting"
},
{
"paragraph_id": 22,
"text": "The preferences of the voters would be divided like this:",
"title": "Voting"
},
{
"paragraph_id": 23,
"text": "If each voter in each city naively selects one city on the ballot (Memphis voters select Memphis, Nashville voters select Nashville, and so on), Memphis will be selected, as it has the most votes 42%. The system does not require that the winner have a majority, only a plurality. Memphis wins because it has the most votes even though 58% of the voters in the example preferred Memphis least. That problem does not arise with the two-round system in which Nashville would have won. (In practice, with FPTP, many voters in Chattanooga and Knoxville are likely to vote tactically for Nashville: see below.)",
"title": "Voting"
},
{
"paragraph_id": 24,
"text": "To a much greater extent than many other electoral methods, plurality electoral systems encourage tactical voting techniques like \"compromising\". Voters are under pressure to vote for one of the two candidates most likely to win, even if their true preference is neither of them; because a vote for any other candidate is unlikely to lead to the preferred candidate being elected. This will instead reduce support for one of the two major candidates whom the voter might prefer to the other. Electors who prefer not to waste their vote by voting for a candidate with a very low chance of winning their constituency vote for their lesser preferred candidate who has a higher chance of winning. The minority party will then simply take votes away from one of the major parties, which could change the outcome and gain nothing for the voters. Any other party will typically need to build up its votes and credibility over a series of elections before it is seen as electable.",
"title": "Disadvantages"
},
{
"paragraph_id": 25,
"text": "In the Tennessee example, if all the voters for Chattanooga and Knoxville had instead voted for Nashville, Nashville would have won (with 58% of the vote). That would have only been the third choice for those voters, but voting for their respective first choices (their own cities) actually results in their fourth choice (Memphis) being elected.",
"title": "Disadvantages"
},
{
"paragraph_id": 26,
"text": "The difficulty is sometimes summed up in an extreme form, as \"All votes for anyone other than the second place are votes for the winner\". That is because by voting for other candidates, voters have denied those votes to the second-place candidate, who could have won had they received them. It is often claimed by United States Democrats that Democrat Al Gore lost the 2000 Presidential Election to Republican George W. Bush because some voters on the left voted for Ralph Nader of the Green Party, who, exit polls indicated, would have preferred Gore at 45% to Bush at 27%, with the rest not voting in Nader's absence.",
"title": "Disadvantages"
},
{
"paragraph_id": 27,
"text": "That thinking is illustrated by elections in Puerto Rico and its three principal voter groups: the Independentistas (pro-independence), the Populares (pro-commonwealth), and the Estadistas (pro-statehood). Historically, there has been a tendency for Independentista voters to elect Popular candidates and policies. This results in more Popular victories even though the Estadistas have the most voters on the island. It is so widely recognised that the Puerto Ricans sometimes call the Independentistas who vote for the Populares \"melons\" in reference to the party colours, because the fruit is green on the outside but red on the inside.",
"title": "Disadvantages"
},
{
"paragraph_id": 28,
"text": "Such tactical voting can cause significant perturbation to the system:",
"title": "Disadvantages"
},
{
"paragraph_id": 29,
"text": "Proponents of other single-winner electoral systems argue that their proposals would reduce the need for tactical voting and reduce the spoiler effect. Other systems include the commonly used two-round system of runoffs and instant-runoff voting, along with less-tested and perhaps less-understood systems such as approval voting, score voting and Condorcet methods.",
"title": "Disadvantages"
},
{
"paragraph_id": 30,
"text": "Duverger's law is a theory that constituencies that use first-past-the-post systems will eventually become a two-party system after enough time. The two dominating parties regularly alternate in power and easily win constituencies due to the structure of plurality voting systems. This puts smaller parties who struggle to meet the threshold of votes at a disadvantage, and inhibits growth.",
"title": "Disadvantages"
},
{
"paragraph_id": 31,
"text": "Plurality voting tends to reduce the number of political parties to a greater extent than most other methods do, making it more likely that a single party will hold a majority of legislative seats. (In the United Kingdom, 22 out of 27 general elections since 1922 have produced a single-party majority government or, in the case of the National Governments, a parliament from which such a single-party government could have been drawn.)",
"title": "Disadvantages"
},
{
"paragraph_id": 32,
"text": "Plurality voting's tendency toward fewer parties and more-frequent majorities of one party can also produce a government that may not consider as wide a range of perspectives and concerns. It is entirely possible that a voter finds all major parties to have similar views on issues, and that a voter does not have a meaningful way of expressing a dissenting opinion through their vote.",
"title": "Disadvantages"
},
{
"paragraph_id": 33,
"text": "As fewer choices are offered to voters, voters may vote for a candidate although they disagree with them because they disagree even more with their opponents. That will make candidates less closely reflect the viewpoints of those who vote for them.",
"title": "Disadvantages"
},
{
"paragraph_id": 34,
"text": "Furthermore, one-party rule is more likely to lead to radical changes in government policy even though the changes are favoured only by a plurality or a bare majority of the voters, but a multi-party system usually requires more consensus to make dramatic changes in policy.",
"title": "Disadvantages"
},
{
"paragraph_id": 35,
"text": "Wasted votes are those cast for candidates who are virtually sure to lose in a safe seat, and votes cast for winning candidates in excess of the number required for victory. Plurality voting systems function on a \"winner-takes-all\" principle, which means that the party of the losing candidate in each riding receives no representation in government, regardless of the number of votes they received. For example, in the UK general election of 2005, 52% of votes were cast for losing candidates and 18% were excess votes, a total of 70% wasted votes. That is perhaps the most fundamental criticism of FPTP since a large majority of votes may play no part in determining the outcome. Alternative electoral systems, such as Proportional Representation, attempt to ensure that almost all of the votes are effective in influencing the result, which minimizes vote wastage. Such a system decreases disproportionality in election results and is credited for increasing voter turnout.",
"title": "Disadvantages"
},
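As a rough illustration of this bookkeeping, the sketch below counts wasted votes in a single plurality constituency. The vote totals are invented for illustration (they are not the UK 2005 figures), and the winner's "excess" is taken to be every vote beyond the one more than the runner-up needed for victory.

```python
# Count wasted votes in one plurality (FPTP) constituency.
# Hypothetical totals; losers' votes are wasted, plus the winner's surplus
# beyond the minimum needed to finish first (runner-up total + 1).

def wasted_votes(results: dict[str, int]) -> dict[str, int]:
    ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
    winner_votes = ranked[0][1]
    runner_up_votes = ranked[1][1] if len(ranked) > 1 else 0
    losing = sum(votes for _, votes in ranked[1:])      # votes for losing candidates
    excess = winner_votes - (runner_up_votes + 1)       # winner's surplus votes
    total = sum(results.values())
    return {"losing": losing, "excess": excess,
            "wasted": losing + excess, "total": total}

if __name__ == "__main__":
    constituency = {"Party A": 18_000, "Party B": 14_000, "Party C": 8_000}
    w = wasted_votes(constituency)
    print(f"{w['wasted']} of {w['total']} votes wasted ({w['wasted'] / w['total']:.0%})")
    # -> 25999 of 40000 votes wasted (65%)
```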
{
"paragraph_id": 36,
"text": "Political apathy is prevalent in plurality voting systems such as FPTP. Studies suggest that plurality voting system fails to incentivize citizens to vote, which results in very low voter turnouts. Under this system, many people feel that voting is an empty ritual that has no influence on the composition of legislature. Voters are not assured that the number of seats that political parties are accorded will reflect the popular vote, which disincentivizes them from voting and sends the message that their votes are not valued, and participation in elections does not seem necessary.",
"title": "Disadvantages"
},
{
"paragraph_id": 37,
"text": "This is when a voter decides to vote in a way that does not represent their true preference or choice, motivated by an intent to influence election outcomes. Strategic behaviour by voters can and does influence the outcome of voting in different plurality voting systems. Strategic behaviour is when a voter casts their vote for a different party or alternative district/constituency/riding in order to induce, in their opinion, a better outcome. An example of this is when a person really likes party A but votes for party B because they do not like party C or D or because they believe that party A has little to no chance of winning. This can cause the outcome of very close votes to be swayed for the wrong reason. This might have had an impact on the 2000 United States election that was essentially decided by fewer than 600 votes, with the winner being President Bush. When voters behave in a strategic way and expect others to do the same, they end up voting for one of the two leading candidates, making the Condorcet alternative more likely to be elected. The prevalence of strategic voting in an election makes it difficult to evaluate the true political state of the population, as their true political ideologies are not reflected in their votes.",
"title": "Disadvantages"
},
{
"paragraph_id": 38,
"text": "Because FPTP permits a high level of wasted votes, an election under FPTP is easily gerrymandered unless safeguards are in place. In gerrymandering, a party in power deliberately manipulates constituency boundaries to increase the number of seats that it wins unfairly.",
"title": "Disadvantages"
},
{
"paragraph_id": 39,
"text": "In brief, if a governing party G wishes to reduce the seats that will be won by opposition party O in the next election, it can create a number of constituencies in each of which O has an overwhelming majority of votes. O will win these seats, but many of its voters will waste their votes. Then, the rest of the constituencies are designed to have small majorities for G. Few G votes are wasted, and G will win many seats by small margins. As a result of the gerrymander, O's seats have cost it more votes than G's seats.",
"title": "Disadvantages"
},
{
"paragraph_id": 40,
"text": "The efficiency gap measures gerrymandering and has been scrutinized in the Supreme Court of the United States. The efficiency gap is the difference between the two parties' wasted votes, divided by the total number of votes.",
"title": "Disadvantages"
},
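To make the definition concrete, here is a minimal sketch that computes the efficiency gap for a hypothetical two-party, five-district map. The district totals are invented, and the winner's wasted votes are counted as those above the simple-majority threshold in each district, which is one common convention rather than anything specified in the text above.

```python
# Efficiency gap for a hypothetical two-party map (invented district totals).
# Wasted votes: all votes for the losing party, plus the winner's votes above
# the simple-majority threshold in that district (ties are ignored here).

def efficiency_gap(districts: list[tuple[int, int]]) -> float:
    """districts: list of (party_a_votes, party_b_votes) per district."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        threshold = (a + b) // 2 + 1        # votes needed for a majority
        if a > b:
            wasted_a += a - threshold       # A's surplus in a district it wins
            wasted_b += b                   # all of B's votes lose
        else:
            wasted_b += b - threshold
            wasted_a += a
        total += a + b
    return (wasted_a - wasted_b) / total    # positive values favour party B

if __name__ == "__main__":
    # Party A is "packed" into one overwhelming win; B wins the rest narrowly.
    hypothetical_map = [(90, 10), (45, 55), (45, 55), (45, 55), (45, 55)]
    print(f"efficiency gap: {efficiency_gap(hypothetical_map):+.0%}")  # -> +39%
```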
{
"paragraph_id": 41,
"text": "The presence of spoilers often gives rise to suspicions that manipulation of the slate has taken place. The spoiler may have received incentives to run. A spoiler may also drop out at the last moment, which induces charges that such an act was intended from the beginning. Voters who are uninformed do not have a comparable opportunity to manipulate their votes as voters who understand all opposing sides, understand the pros and cons of voting for each party.",
"title": "Disadvantages"
},
{
"paragraph_id": 42,
"text": "The spoiler effect is the effect of vote splitting between candidates or ballot questions with similar ideologies. One spoiler candidate's presence in the election draws votes from a major candidate with similar politics, which causes a strong opponent of both or several to win. Smaller parties can disproportionately change the outcome of an FPTP election by swinging what is called the 50-50% balance of two party systems by creating a faction within one or both ends of the political spectrum. This shifts the winner of the election from an absolute majority outcome to a plurality outcome. Due to the spoiler effect, the party that holds the unfavourable ideology by the majority will win, as the majority of the population would be split between the two parties with the similar ideology. In comparison, electoral systems that use proportional representation have small groups win only their proportional share of representation.",
"title": "Disadvantages"
},
{
"paragraph_id": 43,
"text": "In August 2008, Sir Peter Kenilorea commented on what he perceived as the flaws of a first-past-the-post electoral system in the Solomon Islands:",
"title": "Disadvantages"
},
{
"paragraph_id": 44,
"text": "An... underlying cause of political instability and poor governance, in my opinion, is our electoral system and its related problems. It has been identified by a number of academics and practitioners that the First Past the Post system is such that a Member elected to Parliament is sometimes elected by a small percentage of voters where there are many candidates in a particular constituency. I believe that this system is part of the reason why voters ignore political parties and why candidates try an appeal to voters' material desires and relationships instead of political parties.... Moreover, this system creates a political environment where a Member is elected by a relatively small number of voters with the effect that this Member is then expected to ignore his party's philosophy and instead look after that core base of voters in terms of their material needs. Another relevant factor that I see in relation to the electoral system is the proven fact that it is rather conducive, and thus has not prevented, corrupt elections practices such as ballot buying.",
"title": "Disadvantages"
},
{
"paragraph_id": 45,
"text": "The United Kingdom continues to use the first-past-the-post electoral system for general elections, and for local government elections in England and Wales. Changes to the UK system have been proposed, and alternatives were examined by the Jenkins Commission in the late 1990s. After the formation of a new coalition government in 2010, it was announced as part of the coalition agreement that a referendum would be held on switching to the alternative vote system. However the alternative vote system was rejected 2-1 by British voters in a referendum held on 5 May 2011.",
"title": "International examples"
},
{
"paragraph_id": 46,
"text": "Canada also uses FPTP for national and provincial elections. In May 2005 the Canadian province of British Columbia had a referendum on abolishing single-member district plurality in favour of multi-member districts with the Single Transferable Vote system after the Citizens' Assembly on Electoral Reform made a recommendation for the reform. The referendum obtained 57% of the vote, but failed to meet the 60% requirement for passing. A second referendum was held in May 2009, this time the province's voters defeated the change with 39% voting in favour.",
"title": "International examples"
},
{
"paragraph_id": 47,
"text": "An October 2007 referendum in the Canadian province of Ontario on adopting a Mixed Member Proportional system, also requiring 60% approval, failed with only 36.9% voting in favour. British Columbia again called a referendum on the issue in 2018 which was defeated by 62% voting to keep current system.",
"title": "International examples"
},
{
"paragraph_id": 48,
"text": "Northern Ireland, Scotland, Wales, the Republic of Ireland, Australia and New Zealand are notable examples of countries within the UK, or with previous links to it, that use non-FPTP electoral systems (Northern Ireland, Scotland and Wales use FPTP in United Kingdom general elections, however).",
"title": "International examples"
},
{
"paragraph_id": 49,
"text": "Nations which have undergone democratic reforms since 1990 but have not adopted the FPTP system include South Africa, almost all of the former Eastern bloc nations, Russia, and Afghanistan.",
"title": "International examples"
},
{
"paragraph_id": 50,
"text": "Countries that use plurality voting to elect the lower or only house of their legislature include: (Some of these may be undemocratic systems where there is effectively only one candidate allowed anyway.)",
"title": "International examples"
},
{
"paragraph_id": 51,
"text": "The fatal flaws of Plurality (first-past-the-post) electoral systems – Proportional Representation Society of Australia",
"title": "References"
}
]
| Plurality voting refers to electoral systems in which a candidate who polls more votes than any other is elected. In systems based on single-member districts, it elects just one member per district and may also be referred to as first-past-the-post (FPTP), single-member plurality (SMP/SMDP), single-choice voting, simple plurality or relative majority. A system that elects multiple winners at once with the plurality rule, and where each voter casts multiple X votes in a multi-seat district, is referred to as plurality block voting. A semi-proportional system that elects multiple winners at once with the plurality rule, and where each voter casts just one vote in a multi-seat district, is known as single non-transferable voting. Plurality voting is distinguished from majority voting, in which a winning candidate must receive an absolute majority of votes: more than half of all votes. Under single-winner plurality voting, the leading candidate, whether or not they have a majority of votes, is elected. Not every single-winner winner-takes-all system is plurality voting; for example, instant-runoff voting is a non-plurality winner-takes-all system, because not all votes are taken as initially cast. Also, some plurality voting methods are close to proportional. For example, limited voting and single non-transferable vote use plurality rules but are considered semi-proportional systems. Plurality voting is still used to elect members of a legislative assembly or executive officers in only a handful of countries, mostly in the English-speaking world, for historical reasons. It is used in most elections in the United States, the lower house in India, elections to the British House of Commons and English local elections in the United Kingdom, and federal and provincial elections in Canada. An example of "winner-take-all" plurality voting is the system used at the state level to elect most of the Electoral College in United States presidential elections. This system is called party block voting, also called the general ticket. Proponents of electoral reform generally argue against plurality voting systems in favour of either other single-winner systems or proportional representation. | 2001-11-12T19:49:18Z | 2023-12-11T10:28:33Z | [
"Template:Electoral systems",
"Template:Expand section",
"Template:More citations needed section",
"Template:Voting systems",
"Template:Citation needed",
"Template:Div col end",
"Template:Reflist",
"Template:Cite web",
"Template:Cite book",
"Template:Short description",
"Template:Use dmy dates",
"Template:Unreferenced section",
"Template:Blockquote",
"Template:Citation",
"Template:Cite journal",
"Template:Use British English",
"Template:Tenn voting example",
"Template:More citations needed",
"Template:See also",
"Template:Clarify",
"Template:Main",
"Template:Div col",
"Template:Cite news",
"Template:Cite magazine",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Plurality_voting |
10,881 | Fetish | Fetish may refer to: | [
{
"paragraph_id": 0,
"text": "Fetish may refer to:",
"title": ""
}
]
| Fetish may refer to: | 2023-06-23T09:16:12Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Fetish |
|
10,882 | February 14 | February 14 is the 45th day of the year in the Gregorian calendar; 320 days remain until the end of the year (321 in leap years). | [
{
"paragraph_id": 0,
"text": "February 14 is the 45th day of the year in the Gregorian calendar; 320 days remain until the end of the year (321 in leap years).",
"title": ""
}
]
| February 14 is the 45th day of the year in the Gregorian calendar; 320 days remain until the end of the year. | 2001-10-16T18:05:13Z | 2023-12-04T04:27:05Z | [
"Template:Cite web",
"Template:Day",
"Template:Pp-pc1",
"Template:Calendar",
"Template:This date in recent years",
"Template:Cite encyclopedia",
"Template:Webarchive",
"Template:Cbignore",
"Template:NYT On this day",
"Template:About",
"Template:USS",
"Template:Citation needed",
"Template:Cite ODNB",
"Template:Cite magazine",
"Template:Commons",
"Template:Pp-move-indef",
"Template:Cite book",
"Template:Cite news",
"Template:Baseballstats",
"Template:Months",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/February_14 |
10,883 | Free trade area | A free trade area is the region encompassing a trade bloc whose member countries have signed a free trade agreement (FTA). Such agreements involve cooperation between at least two countries to reduce trade barriers, import quotas and tariffs, and to increase trade of goods and services with each other. If natural persons are also free to move between the countries, in addition to a free trade agreement, it would also be considered an open border. It can be considered the second stage of economic integration.
Customs unions are a special type of free trade area. All such areas have internal arrangements which parties conclude in order to liberalize and facilitate trade among themselves. The crucial difference between customs unions and free trade areas is their approach to third parties. While a customs union requires all parties to establish and maintain identical external tariffs with regard to trade with non-parties, parties to a free trade area are not subject to this requirement. Instead, they may establish and maintain whatever tariff regime applying to imports from non-parties as deemed necessary. In a free trade area without harmonized external tariffs, to eliminate the risk of trade deflection, parties will adopt a system of preferential rules of origin.
The term free trade area was originally meant by the General Agreement on Tariffs and Trade (GATT 1994) to include only trade in goods. An agreement with a similar purpose, i.e., to enhance liberalization of trade in services, is named under Article V of the General Agreement on Trade in Services (GATS) as an "economic integration agreement". However, in practice, the term is now widely used to refer to agreements covering not only goods but also services and even investment.
The formation of free trade areas is considered an exception to the most favored nation (MFN) principle in the World Trade Organization (WTO) because the preferences that parties to a free trade area exclusively grant each other go beyond their accession commitments. Although Article XXIV of the GATT allows WTO members to establish free trade areas or to adopt interim agreements necessary for the establishment thereof, there are several conditions with respect to free trade areas, or interim agreements leading to the formation of free trade areas.
Firstly, duties and other regulations maintained in each of the signatory parties to a free trade area, which are applicable at the time such free trade area is formed, to the trade with non-parties to such free trade area shall not be higher or more restrictive than the corresponding duties and other regulations existing in the same signatory parties prior to the formation of the free trade area. In other words, the establishment of a free trade area to grant preferential treatment among its member is legitimate under WTO law, but the parties to a free trade area are not permitted to treat non-parties less favorably than before the area was established. A second requirement stipulated by Article XXIV is that tariffs and other barriers to trade must be eliminated to substantially all the trade within the free trade area.
Free trade agreements forming free trade areas generally lie outside the realm of the multilateral trading system. However, WTO members must notify to the Secretariat when they conclude new free trade agreements and in principle the texts of free trade agreements are subject to review under the Committee on Regional Trade Agreements. Although a dispute arising within free trade areas is not subject to litigation at the WTO's Dispute Settlement Body, "there is no guarantee that WTO panels will abide by them and decline to exercise jurisdiction in a given case".
Trade diversion and trade creation
In general, trade diversion means that a free trade area would divert trade away from more efficient suppliers outside the area towards less efficient ones within the areas. Whereas, trade creation implies that a free trade area creates trade which may not have otherwise existed. In all cases trade creation will raise a country's national welfare.
Both trade creation and trade diversion are crucial effects found upon the establishment of a free trade area. Trade creation will cause consumption to shift from a high-cost producer to a low-cost one, and trade will thus expand. In contrast, trade diversion will lead to trade shifting from a lower-cost producer outside the area to a higher-cost one inside the area. Such a shift will not benefit consumers within the free trade area as they are deprived the opportunity to purchase cheaper imported goods. However, economists find that trade diversion does not always harm aggregate national welfare: it can even improve aggregate national welfare if the volume of diverted trade is small.
Free trade areas as public goods
Economists have made attempts to evaluate the extent to which free trade areas can be considered public goods. They firstly address one key element of free trade areas, which is the system of embedded tribunals which act as arbitrators in international trade disputes. This system as a force of clarification for existing statutes and international economic policies is affirmed within the trade treaties.
The second way in which free trade areas are considered public goods is tied to the evolving trend of them becoming "deeper". The depth of a free trade area refers to the added types of structural policies that it covers. While older trade deals are deemed "shallower" as they cover fewer areas (such as tariffs and quotas), more recently concluded agreements address a number of other fields, from services to e-commerce and data localization. Since transactions among parties to a free trade area are relatively cheaper as compared to those with non-parties, free trade areas are conventionally found to be excludable. Now that deep trade deals will enhance regulatory harmonization and increase trade flows with non-parties, thus reduce the excludability of FTA benefits, new generation free trade areas are obtaining essential characteristics of public goods.
Unlike a customs union, parties to a free trade area do not maintain common external tariffs, which means they apply different customs duties, as well as other policies with respect to non-members. This feature creates the possibility that non-parties may free-ride on preferences under a free trade area by penetrating the market through the party with the lowest external tariffs. This risk necessitates the introduction of rules to determine which goods are originating and thus eligible for preferences under a free trade area, a need that does not arise upon the formation of a customs union. Basically, there is a requirement for a minimum extent of processing that results in "substantial transformation" of the goods so that they can be considered originating. By defining which goods are originating in the PTA, preferential rules of origin distinguish between originating and non-originating goods: only the former are entitled to the preferential tariffs scheduled by the free trade area; the latter must pay MFN import duties.
In qualifying under origin criteria, there is differential treatment between inputs originating within and outside a free trade area. Normally, inputs originating in one FTA party will be considered as originating in the other party if they are incorporated in the manufacturing process in that other party. Sometimes, production costs arising in one party are also treated as arising in another party. In preferential rules of origin, such differential treatment is normally provided for in the cumulation or accumulation provision. Such a clause further explains the trade creation and trade diversion effects of a free trade area mentioned above, because a party to a free trade area has an incentive to use inputs originating in another party so that its products may qualify for originating status.
Since there are hundreds of free trade areas currently in force and being negotiated (about 800 according to ITC's Rules of Origin Facilitator, counting also non-reciprocal trade arrangements), it is important for businesses and policy-makers to keep track of their status. There are a number of depositories of free trade agreements available either at national, regional or international levels. Some significant ones include the database on Latin American free trade agreements constructed by the Latin American Integration Association (ALADI), the database maintained by the Asian Regional Integration Center (ARIC) providing information agreements of Asian countries, and the portal on the European Union's free trade negotiations and agreements.
At the international level, there are two important free access databases developed by international organizations for policy-makers and businesses:
WTO's Regional Trade Agreements Information System
As WTO members are obliged to notify the Secretariat of their free trade agreements, this database is constructed based on the most official source of information on free trade agreements (referred to as regional trade agreements in the WTO language). The database allows users to seek information on trade agreements notified to the WTO by country or by topic (goods, services or goods and services). This database provides users with an updated list of all agreements in force; however, those not notified to the WTO may be missing. It also displays reports, tables and graphs containing statistics on these agreements, and particularly preferential tariff analysis.
ITC's Market Access Map
The Market Access Map was developed by the International Trade Centre (ITC) with the objectives to facilitate businesses, governments and researchers in market access issues. The database, visible via the online tool Market Access Map, includes information on tariff and non-tariff barriers in all active trade agreements, not limited to those officially notified to the WTO. It also documents data on non-preferential trade agreements (for instance, Generalized System of Preferences schemes). Up until 2019, Market Access Map has provided downloadable links to texts agreements and their rules of origin. The new version of Market Access Map forthcoming this year will provide direct web links to relevant agreement pages and connect itself to other ITC's tools, particularly the Rules of Origin Facilitator. It is expected to become a versatile tool which assists enterprises in understanding free trade agreements and qualifying for origin requirements under these agreements. | [
{
"paragraph_id": 0,
"text": "A free trade area is the region encompassing a trade bloc whose member countries have signed a free trade agreement (FTA). Such agreements involve cooperation between at least two countries to reduce trade barriers, import quotas and tariffs, and to increase trade of goods and services with each other. If natural persons are also free to move between the countries, in addition to a free trade agreement, it would also be considered an open border. It can be considered the second stage of economic integration.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Customs unions are a special type of free trade area. All such areas have internal arrangements which parties conclude in order to liberalize and facilitate trade among themselves. The crucial difference between customs unions and free trade areas is their approach to third parties. While a customs union requires all parties to establish and maintain identical external tariffs with regard to trade with non-parties, parties to a free trade area are not subject to this requirement. Instead, they may establish and maintain whatever tariff regime applying to imports from non-parties as deemed necessary. In a free trade area without harmonized external tariffs, to eliminate the risk of trade deflection, parties will adopt a system of preferential rules of origin.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The term free trade area was originally meant by the General Agreement on Tariffs and Trade (GATT 1994) to include only trade in goods. An agreement with a similar purpose, i.e., to enhance liberalization of trade in services, is named under Article V of the General Agreement on Trade in Services (GATS) as an \"economic integration agreement\". However, in practice, the term is now widely used to refer to agreements covering not only goods but also services and even investment.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The formation of free trade areas is considered an exception to the most favored nation (MFN) principle in the World Trade Organization (WTO) because the preferences that parties to a free trade area exclusively grant each other go beyond their accession commitments. Although Article XXIV of the GATT allows WTO members to establish free trade areas or to adopt interim agreements necessary for the establishment thereof, there are several conditions with respect to free trade areas, or interim agreements leading to the formation of free trade areas.",
"title": "Legal aspects of free trade areas"
},
{
"paragraph_id": 4,
"text": "Firstly, duties and other regulations maintained in each of the signatory parties to a free trade area, which are applicable at the time such free trade area is formed, to the trade with non-parties to such free trade area shall not be higher or more restrictive than the corresponding duties and other regulations existing in the same signatory parties prior to the formation of the free trade area. In other words, the establishment of a free trade area to grant preferential treatment among its member is legitimate under WTO law, but the parties to a free trade area are not permitted to treat non-parties less favorably than before the area was established. A second requirement stipulated by Article XXIV is that tariffs and other barriers to trade must be eliminated to substantially all the trade within the free trade area.",
"title": "Legal aspects of free trade areas"
},
{
"paragraph_id": 5,
"text": "Free trade agreements forming free trade areas generally lie outside the realm of the multilateral trading system. However, WTO members must notify to the Secretariat when they conclude new free trade agreements and in principle the texts of free trade agreements are subject to review under the Committee on Regional Trade Agreements. Although a dispute arising within free trade areas is not subject to litigation at the WTO's Dispute Settlement Body, \"there is no guarantee that WTO panels will abide by them and decline to exercise jurisdiction in a given case\".",
"title": "Legal aspects of free trade areas"
},
{
"paragraph_id": 6,
"text": "Trade diversion and trade creation",
"title": "Economic aspects of free trade areas"
},
{
"paragraph_id": 7,
"text": "In general, trade diversion means that a free trade area would divert trade away from more efficient suppliers outside the area towards less efficient ones within the areas. Whereas, trade creation implies that a free trade area creates trade which may not have otherwise existed. In all cases trade creation will raise a country's national welfare.",
"title": "Economic aspects of free trade areas"
},
{
"paragraph_id": 8,
"text": "Both trade creation and trade diversion are crucial effects found upon the establishment of a free trade area. Trade creation will cause consumption to shift from a high-cost producer to a low-cost one, and trade will thus expand. In contrast, trade diversion will lead to trade shifting from a lower-cost producer outside the area to a higher-cost one inside the area. Such a shift will not benefit consumers within the free trade area as they are deprived the opportunity to purchase cheaper imported goods. However, economists find that trade diversion does not always harm aggregate national welfare: it can even improve aggregate national welfare if the volume of diverted trade is small.",
"title": "Economic aspects of free trade areas"
},
{
"paragraph_id": 9,
"text": "Free trade areas as public goods",
"title": "Economic aspects of free trade areas"
},
{
"paragraph_id": 10,
"text": "Economists have made attempts to evaluate the extent to which free trade areas can be considered public goods. They firstly address one key element of free trade areas, which is the system of embedded tribunals which act as arbitrators in international trade disputes. This system as a force of clarification for existing statutes and international economic policies is affirmed within the trade treaties.",
"title": "Economic aspects of free trade areas"
},
{
"paragraph_id": 11,
"text": "The second way in which free trade areas are considered public goods is tied to the evolving trend of them becoming \"deeper\". The depth of a free trade area refers to the added types of structural policies that it covers. While older trade deals are deemed \"shallower\" as they cover fewer areas (such as tariffs and quotas), more recently concluded agreements address a number of other fields, from services to e-commerce and data localization. Since transactions among parties to a free trade area are relatively cheaper as compared to those with non-parties, free trade areas are conventionally found to be excludable. Now that deep trade deals will enhance regulatory harmonization and increase trade flows with non-parties, thus reduce the excludability of FTA benefits, new generation free trade areas are obtaining essential characteristics of public goods.",
"title": "Economic aspects of free trade areas"
},
{
"paragraph_id": 12,
"text": "Unlike a customs union, parties to a free trade area do not maintain common external tariffs, which means they apply different customs duties, as well as other policies with respect to non-members. This feature creates the possibility of non-parties may free riding preferences under a free trade area by penetrating the market with the lowest external tariffs. Such risk necessitates the introduction of rules to determine originating goods eligible for preferences under a free trade area, a need that does not arise upon the formation of a customs union. Basically, there is a requirement for a minimum extent of processing that results in \"substantial transformation\" to the goods so that they can be considered originating. By defining which goods are originating in the PTA, preferential rules of origin distinguish between originating and non-originating goods: only the former will be entitled to preferential tariffs scheduled by the free trade area, the latter must pay MFN import duties.",
"title": "Qualifying for preferences under a free trade area"
},
{
"paragraph_id": 13,
"text": "It is noted that in qualifying for origin criteria, there is a differential treatment between inputs originating within and outside a free trade area. Normally inputs originating in one FTA party will be considered as originating in the other party if they are incorporated in the manufacturing process in that other party. Sometimes, production costs arising in one party is also considered as that arising in another party. In preferential rules of origin, such differential treatment is normally provided for in the cumulation or accumulation provision. Such clause further explains the trade creation and trade diversion effects of a free trade area mentioned above, because a party to a free trade area has the incentive to use inputs originating in another party so that their products may qualify for originating status.",
"title": "Qualifying for preferences under a free trade area"
},
{
"paragraph_id": 14,
"text": "Since there are hundreds of free trade areas currently in force and being negotiated (about 800 according to ITC's Rules of Origin Facilitator, counting also non-reciprocal trade arrangements), it is important for businesses and policy-makers to keep track of their status. There are a number of depositories of free trade agreements available either at national, regional or international levels. Some significant ones include the database on Latin American free trade agreements constructed by the Latin American Integration Association (ALADI), the database maintained by the Asian Regional Integration Center (ARIC) providing information agreements of Asian countries, and the portal on the European Union's free trade negotiations and agreements.",
"title": "Databases on free trade areas"
},
{
"paragraph_id": 15,
"text": "At the international level, there are two important free access databases developed by international organizations for policy-makers and businesses:",
"title": "Databases on free trade areas"
},
{
"paragraph_id": 16,
"text": "WTO's Regional Trade Agreements Information System",
"title": "Databases on free trade areas"
},
{
"paragraph_id": 17,
"text": "As WTO members are obliged to notify to the Secretariat their free trade agreements, this database is constructed based on the most official source of information on free trade agreements (referred to as regional trade agreements in the WTO language). The database allows users to seek information on trade agreements notified to the WTO by country or by topic (goods, services or goods and services). This database provides users with an updated list of all agreements in force, however, those not notified to the WTO may be missing. It also displays reports, tables and graphs containing statistics on these agreements, and particularly preferential tariff analysis.",
"title": "Databases on free trade areas"
},
{
"paragraph_id": 18,
"text": "ITC's Market Access Map",
"title": "Databases on free trade areas"
},
{
"paragraph_id": 19,
"text": "The Market Access Map was developed by the International Trade Centre (ITC) with the objectives to facilitate businesses, governments and researchers in market access issues. The database, visible via the online tool Market Access Map, includes information on tariff and non-tariff barriers in all active trade agreements, not limited to those officially notified to the WTO. It also documents data on non-preferential trade agreements (for instance, Generalized System of Preferences schemes). Up until 2019, Market Access Map has provided downloadable links to texts agreements and their rules of origin. The new version of Market Access Map forthcoming this year will provide direct web links to relevant agreement pages and connect itself to other ITC's tools, particularly the Rules of Origin Facilitator. It is expected to become a versatile tool which assists enterprises in understanding free trade agreements and qualifying for origin requirements under these agreements.",
"title": "Databases on free trade areas"
}
]
| A free trade area is the region encompassing a trade bloc whose member countries have signed a free trade agreement (FTA). Such agreements involve cooperation between at least two countries to reduce trade barriers, import quotas and tariffs, and to increase trade of goods and services with each other. If natural persons are also free to move between the countries, in addition to a free trade agreement, it would also be considered an open border. It can be considered the second stage of economic integration. Customs unions are a special type of free trade area. All such areas have internal arrangements which parties conclude in order to liberalize and facilitate trade among themselves. The crucial difference between customs unions and free trade areas is their approach to third parties. While a customs union requires all parties to establish and maintain identical external tariffs with regard to trade with non-parties, parties to a free trade area are not subject to this requirement. Instead, they may establish and maintain whatever tariff regime applying to imports from non-parties as deemed necessary. In a free trade area without harmonized external tariffs, to eliminate the risk of trade deflection, parties will adopt a system of preferential rules of origin. The term free trade area was originally meant by the General Agreement on Tariffs and Trade to include only trade in goods. An agreement with a similar purpose, i.e., to enhance liberalization of trade in services, is named under Article V of the General Agreement on Trade in Services (GATS) as an "economic integration agreement". However, in practice, the term is now widely used to refer to agreements covering not only goods but also services and even investment. | 2001-07-25T06:57:37Z | 2023-11-03T10:06:14Z | [
"Template:Cite web",
"Template:Webarchive",
"Template:Economic integration",
"Template:Free trade agreements",
"Template:Further",
"Template:Portal",
"Template:Cite book",
"Template:Cite journal",
"Template:Short description",
"Template:World economic integration",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Free_trade_area |
10,885 | French fries | French fries (North American English), chips (British English and other national varieties), finger chips (Indian English), french-fried potatoes, or simply fries, are batonnet or allumette-cut deep-fried potatoes of disputed origin from Belgium or France. They are prepared by cutting potatoes into even strips, drying them, and frying them, usually in a deep fryer. Pre-cut, blanched, and frozen russet potatoes are widely used, and sometimes baked in a regular or convection oven; air fryers are small convection ovens marketed for frying potatoes.
French fries are served hot, either soft or crispy, and are generally eaten as part of lunch or dinner or by themselves as a snack, and they commonly appear on the menus of diners, fast food restaurants, pubs, and bars. They are often salted and may be served with ketchup, vinegar, mayonnaise, tomato sauce, or other local specialities. Fries can be topped more heavily, as in the dishes of poutine, loaded fries or chili cheese fries. French fries can be made from sweet potatoes instead of potatoes. A baked variant, oven fries, uses less or no oil.
The standard method for cooking french fries is deep frying, which submerges them in hot fat, nowadays most commonly oil. Vacuum fryers produce potato chips with lower oil content, while maintaining their colour and texture.
The potatoes are prepared by first cutting them (peeled or unpeeled) into even strips, which are then wiped off or soaked in cold water to remove the surface starch, and thoroughly dried. They may then be fried in one or two stages. Chefs generally agree that the two-bath technique produces better results. Potatoes fresh out of the ground can have too high a water content resulting in soggy fries, so preference is for those that have been stored for a while.
In the two-stage or two-bath method, the first bath, sometimes called blanching, is in hot fat (around 160 °C/320 °F) to cook the fries through. This step can be done in advance. Then they are more briefly fried in very hot fat (190 °C/375 °F) to crisp the exterior. They are then placed in a colander or on a cloth to drain, then served. The exact times of the two baths depend on the size of the fries. For example, for 2–3 mm strips, the first bath takes about 3 minutes, and the second bath takes only seconds.
Since the 1960s, most french fries in the US have been produced from frozen Russet potatoes which have been blanched or at least air-dried industrially. The usual fat for making french fries is vegetable oil. In the past, beef suet was recommended as superior, with vegetable shortening as an alternative. McDonald's used a mixture of 93% beef tallow and 7% cottonseed oil until 1990, when they changed to vegetable oil with beef flavouring. Horse fat was standard in northern France and Belgium until recently, and is recommended by some chefs.
French fries are fried in a two-step process: the first fry cooks the starch throughout the entire cut at a lower temperature, and the second fry creates the golden, crispy exterior at a higher temperature. This is necessary because if the potato cuts are fried only once, the oil is either too hot, so that only the exterior cooks and not the inside, or not hot enough, so that the fry cooks through but never develops a crispy exterior. Although the potato cuts may be baked or steamed as a preparation method, this section will only focus on french fries made using frying oil. During the initial frying process (approximately 150 °C), water on the surface of the cuts evaporates, and the water inside the cuts is absorbed by the starch granules, causing them to swell and produce the fluffy interior of the fry.
The starch granules are able to retain the water and expand due to gelatinisation. The water and heat break the glycosidic linkages between amylopectin and amylose strands, allowing a new gel matrix to form via hydrogen bonds which aid in water retention. The moisture that gets trapped within the gel matrix is responsible for the fluffy interior of the fry. The gelatinised starch molecules move towards the surface of the fries "forming a thick layer of gelatinised starch" and this layer of pre-gelatinised starch becomes the crisp exterior after the potato cuts are fried for a second time. During the second frying process (approximately 180 °C), the remaining water on the surface of the cuts evaporates and the gelatinised starch molecules that collected towards the potato surface are cooked again, forming the crisp exterior. The golden-brown colour of the fry will develop when the amino acids and glucose on the exterior participate in a Maillard browning reaction.
In the United States and most of Canada, the term french fries, sometimes capitalised as French fries, or shortened to fries, refers to all dishes of fried elongated pieces of potatoes. Variants in shape and size may have names such as curly fries, shoestring fries, etc.
In the United Kingdom, Australia, South Africa, Ireland and New Zealand, the term chips is generally used instead, though thinly cut fried potatoes are sometimes called french fries or skinny fries, to distinguish them from chips, which are cut thicker. In the US or Canada these more thickly-cut chips might be called steak fries, depending on the shape. The word chips is more often used in North America to refer to potato chips, known in the UK and Ireland as crisps.
Thomas Jefferson had "potatoes served in the French manner" at a White House dinner in 1802. The expression "french fried potatoes" first occurred in print in English in the 1856 work Cookery for Maids of All Work by Eliza Warren: "French Fried Potatoes. – Cut new potatoes in thin slices, put them in boiling fat, and a little salt; fry both sides of a light golden brown colour; drain." This account referred to thin, shallow-fried slices of potato. It is not clear where or when the now familiar deep-fried batons or fingers of potato were first prepared. In the early 20th century, the term "french fried" was being used in the sense of "deep-fried" for foods like onion rings or chicken.
One story about the name "french fries" claims that when the American Expeditionary Forces arrived in Belgium during World War I, they assumed that chips were a French dish because French was spoken in the Belgian Army. But the name existed long before that in English, and the popularity of the term did not increase for decades after 1917. The term "french fries" was already used in the United States as early as 1899, although it is not clear whether this referred to batons (chips) or slices of potato, e.g. in an item in Good Housekeeping which specifically references "Kitchen Economy in France": "The perfection of French fries is due chiefly to the fact that plenty of fat is used".
Writing in 1673, the Chilean Francisco Núñez de Pineda mentioned eating "papas fritas" in 1629, but it is not known exactly what these were. Fries may have been invented in Spain, the first European country in which the potato appeared from the New World colonies. Professor Paul Ilegems, curator of the Frietmuseum in Bruges, Belgium, believes that Saint Teresa of Ávila of Spain cooked the first french fries, and refers also to the tradition of frying in Mediterranean cuisine as evidence.
Belgian food historian Pierre Leclercq has traced the history of the french fry and asserts that "it is clear that fries are of French origin". They became an emblematic Parisian dish in the 19th century. Frédéric Krieger, a Bavarian musician, learned to cook fries at a roaster on rue Montmartre in Paris in 1842, and took the recipe to Belgium in 1844, where he would create his business Fritz and sell "la pomme de terre frite à l'instar de Paris", 'Paris-style fried potatoes'. The modern style of fries born in Paris around 1855 is different from the domestic fried potato that existed in the 18th century.
The French and Belgians have an ongoing dispute about where fries were invented.
The myth of Belgian fries dates from around 1985. From the Belgian standpoint, the popularity of the term "french fries" is explained as "French gastronomic hegemony" into which the cuisine of Belgium was assimilated, because of a lack of understanding coupled with a shared language and the geographic proximity of the countries. The Belgian journalist Jo Gérard claimed that a 1781 family manuscript recounts that potatoes were deep-fried prior to 1680 in the Meuse valley, as a substitute for frying fish when the rivers were frozen. Gérard never produced the manuscript that supports this claim, and "the historical value of this story is open to question". In any case, it is unrelated to the later history of the french fry, as the potato did not arrive in the region until around 1735. Moreover, given 18th-century economic conditions, "it is absolutely unthinkable that a peasant could have dedicated large quantities of fat for cooking potatoes. At most they were sautéed in a pan".
"Pommes frites" or just "frites" (French), "frieten" (a word used in Flanders and the southern provinces of the Netherlands) or "patat" (used in the north and central parts of the Netherlands) became the national snack and a substantial part of several national dishes, such as Moules-frites or Steak-frites. Fries are very popular in Belgium, where they are known as frieten (in Dutch) or frites (in French), and the Netherlands, where among the working classes they are known as patat in the north and, in the south, friet(en). In Belgium, fries are sold in shops called friteries (French), frietkot/frituur (Belgian Dutch), snackbar (Dutch in The Netherlands) or Fritüre/Frittüre (German). They are served with a large variety of Belgian sauces and eaten either on their own or with other snacks. Traditionally fries are served in a cornet de frites (French), patatzak /frietzak/fritzak (Dutch/Flemish), or Frittentüte (German), a white cardboard cone, then wrapped in paper, with a spoonful of sauce (often mayonnaise) on top.
In France and other French-speaking countries, fried potatoes are formally pommes de terre frites, but more commonly pommes frites, patates frites, or simply frites. The words aiguillettes ("needle-ettes") or allumettes ("matchsticks") are used when the french fries are very small and thin. One enduring origin story holds that french fries were invented by street vendors on the Pont Neuf bridge in Paris in 1789, just before the outbreak of the French Revolution. However, a reference exists in France from 1775 to "a few pieces of fried potato" and to "fried potatoes". Eating potatoes for sustenance was promoted in France by Antoine-Augustin Parmentier, but he did not mention fried potatoes in particular. A note in a manuscript in U.S. president Thomas Jefferson's hand (circa 1801–1809) mentions "Pommes de terre frites à cru, en petites tranches" ("Potatoes deep-fried while raw, in small slices"). The recipe almost certainly comes from his French chef, Honoré Julien. The thick-cut fries are called pommes Pont-Neuf or simply pommes frites (about 10 mm); thinner variants are pommes allumettes (matchstick potatoes; about 7 mm), and pommes paille (potato straws; 3–4 mm). (Roughly 0.4, 0.3 and 0.15 inch respectively.) Pommes gaufrettes are waffle fries. A popular dish in France is steak frites, which is steak accompanied by thin french fries.
French fries migrated to the German-speaking countries during the 19th century. In Germany, they are usually known by the French words pommes frites, or only Pommes or Fritten (derived from the French words, but pronounced as German words). Often served with ketchup or mayonnaise, they are popular as a side dish in restaurants, or as a street-food snack purchased at an Imbissstand (snack stand). Since the 1950s, currywurst has become a widely-popular dish that is commonly offered with fries. Currywurst is a sausage (often bratwurst or bockwurst) in a spiced ketchup-based sauce, dusted with curry powder.
The standard deep-fried cut potatoes in the United Kingdom are called chips, and are cut into pieces between 10 and 15 mm (0.39 and 0.59 in) wide. They are occasionally made from unpeeled potatoes (skins showing). British chips are not the same thing as potato chips (an American term); those are called "crisps" in the UK and some other countries. In the UK, chips are part of the popular, and now international, fast food dish fish and chips. In the UK, the name "chips" is correct for both thick and thin cut, but chips are considered by some to be a separate item to french fries; with chips being more thickly cut than french fries, they are generally cooked only once and at a lower temperature. From 1813 on, recipes for deep-fried cut potatoes occur in popular cookbooks. By the late 1850s, at least one cookbook refers to "French Fried Potatoes".
The first commercially available chips in the UK were sold by Mrs 'Granny' Duce in one of the West Riding towns in 1854. A blue plaque in Oldham marks the origin of the fish-and-chip shop, and thus the start of the fast food industry in Britain. In Scotland, chips were first sold in Dundee: "in the 1870s, that glory of British gastronomy – the chip – was first sold by Belgian immigrant Edward De Gernier in the city's Greenmarket". In Ireland the first chip shop was "opened by Giuseppe Cervi", an Italian immigrant, "who arrived there in the 1880s". It was estimated in 2011 that in the UK, 80% of households bought frozen chips each year. Although chips were a popular dish in most Commonwealth countries, the "thin style" french fries have been popularised worldwide in large part by the large American fast food chains such as McDonald's and Burger King.
In the United States, the J. R. Simplot Company is credited with successfully commercialising french fries in frozen form during the 1940s. Subsequently, in 1967, Ray Kroc of McDonald's contracted the Simplot company to supply them with frozen fries, replacing fresh-cut potatoes. In 2004, 29% of the United States' potato crop was used to make frozen fries; 90% consumed by the food services sector and 10% by retail. The United States is also known for supplying China with most of their french fries, as 70% of China's french fries are imported. Pre-made french fries have been available for home cooking since the 1960s, having been pre-fried (or sometimes baked), frozen and placed in a sealed plastic bag. Some fast-food chains dip the fries in a sugar solution or a starch batter, to alter the appearance or texture. French fries are one of the most popular dishes in the United States, commonly being served as a side dish to main dishes and in fast food restaurants. The average American eats around 30 pounds (14 kg) of french fries a year.
The town of Florenceville-Bristol, New Brunswick in Canada, headquarters of McCain Foods, calls itself "the French fry capital of the world" and also hosts a museum about potatoes called Potato World. McCain Foods is the world's largest manufacturer of frozen french fries and other potato specialities.
French fries are the main ingredient in the Québécois dish known as poutine, a dish consisting of fried potatoes covered with cheese curds and brown gravy. Poutine has a growing number of variations, but it is generally considered to have been developed in rural Québec sometime in the 1950s, although precisely where in the province it first appeared is a matter of contention. Canada is also responsible for providing 22% of China's french fries.
In Spain, fried potatoes are called patatas fritas or papas fritas. Another common form, involving larger irregular cuts, is patatas bravas. The potatoes are cut into big chunks, partially boiled and then fried. They are usually seasoned with a spicy tomato sauce. Fries are a common side dish in Latin American cuisine or part of larger preparations such as the salchipapas in Peru or chorrillana in Chile.
Whilst eating 'regular' crispy french fries is common in South Africa, a regional favourite, particularly in Cape Town, is a soft soggy version doused in white vinegar called "slap-chips" (pronounced "slup-chips" in English or "slaptjips" in Afrikaans). These chips are typically thicker and fried at a lower temperature for a longer period of time than regular french fries. Slap-chips are an important component of a Gatsby sandwich, also a common Cape Town delicacy. Slap-chips are also commonly served with deep fried fish which are also served with the same white vinegar.
Fried potato (フライドポテト, Furaido poteto) is a standard fast-food side dish in Japan. Inspired by Japanese cuisine, okonomiyaki fries are served with a topping of unagi sauce, mayonnaise, katsuobushi, nori seasoning (furikake) and stir-fried cabbage.
French fries come in multiple variations and toppings. Some examples include:
Fries tend to be served with a variety of accompaniments, such as salt and vinegar (malt, balsamic or white), pepper, Cajun seasoning, grated cheese, melted cheese, mushy peas, heated curry sauce, curry ketchup, hot sauce, relish, mustard, mayonnaise, bearnaise sauce, tartar sauce, chili, tzatziki, feta cheese, garlic sauce, fry sauce, butter, sour cream, ranch dressing, barbecue sauce, gravy, honey, aioli, brown sauce, ketchup, lemon juice, piccalilli, pickled cucumber, pickled gherkins, pickled onions or pickled eggs. In Australia, a popular flavouring added to chips is chicken salt.
French fries primarily contain carbohydrates (mostly in the form of starch) and protein from the potato, and fat absorbed during the deep-frying process. Salt, which contains sodium, is almost always applied as a surface seasoning. For example, a large serving of french fries at McDonald's in the United States is 154 grams and includes 350 mg of sodium. The 510 calories come from 66 g of carbohydrates, 24 g of fat and 7 g of protein.
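Those macronutrient and calorie figures are mutually consistent: using the standard Atwater factors of roughly 4 kcal per gram of carbohydrate or protein and 9 kcal per gram of fat, the quoted grams reproduce the listed calorie count to within rounding, as the short check below shows.

```python
# Sanity-check the large-fries calorie figure quoted above using the
# standard Atwater factors (4 kcal/g carbohydrate and protein, 9 kcal/g fat).
carbs_g, fat_g, protein_g = 66, 24, 7
calories = carbs_g * 4 + fat_g * 9 + protein_g * 4
print(calories)  # -> 508, in line with the listed 510 calories
```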
A number of experts have criticised french fries for being very unhealthful. According to Jonathan Bonnet in a Time magazine article, "fries are nutritionally unrecognizable from a spud" because they "involve frying, salting, and removing one of the healthiest parts of the potato: the skin, where many of the nutrients and fiber are found." Kristin Kirkpatrick calls french fries "an extremely starchy vegetable dipped in a fryer that then loads on the unhealthy fat, and what you have left is a food that has no nutritional redeeming value in it at all." David Katz states that "French fries are often the super-fatty side dish to a burger—and both are often used as vehicles for things like sugar-laced ketchup and fatty mayo." Eric Morrissette, spokesperson for Health Canada, states that people should limit their intake of french fries, but eating them occasionally is not likely to be a health concern.
Frying french fries in beef tallow, lard, or other animal fats adds saturated fat to them. Replacing animal fats with tropical vegetable oils, such as palm oil, simply substitutes one saturated fat for another. For many years partially hydrogenated vegetable oils were used as a means of avoiding cholesterol and reducing saturated fatty acid content, but in time the trans fat content of these oils was perceived as contributing to cardiovascular disease. Starting in 2008, many restaurant chains and manufacturers of pre-cooked frozen french fries for home reheating phased out trans-fat–containing vegetable oils.
French fries contain some of the highest levels of acrylamides of any foodstuff, and experts have raised concerns about the effects of acrylamides on human health. According to the American Cancer Society, it is not clear as of 2013 whether acrylamide consumption affects people's risk of getting cancer. A meta-analysis indicated that dietary acrylamide is not related to the risk of most common cancers, but could not exclude a modest association for kidney, endometrial or ovarian cancers. A lower-fat method for producing a french-fry–like product is to coat "frenched" or wedge potatoes in oil and spices/flavouring before baking them. The temperature will be lower compared to deep frying, which reduces acrylamide formation.
In April 2023, researchers from China suggested a possible link between the consumption of fried food and mental health problems. According to the study, those who frequently consume fried food, especially fried potatoes, have an increased risk of depression and anxiety, by 7% and 12% respectively, compared to those who do not. The connection was particularly prominent among younger males. However, the causal relationship is not conclusive. The results are still preliminary, and the researchers are uncertain whether consuming fried foods causes mental health problems or individuals with symptoms of anxiety and depression tend to opt for fried foods.
In June 2004, the United States Department of Agriculture (USDA), with the advisement of a federal district judge from Beaumont, Texas, classified batter-coated french fries as a vegetable under the Perishable Agricultural Commodities Act. This was primarily for trade reasons; french fries do not meet the standard to be listed as a processed food. This classification, referred to as the "French fry rule", was upheld in the United States Court of Appeals for the Fifth Circuit case Fleming Companies, Inc. v. USDA.
A 2022 study estimated the environmental impact of 57,000 food products in the UK and Ireland, finding that French fries have a lower impact on the environment than many other foods.
Field hockey

Field hockey (or simply hockey) is a team sport structured in standard hockey format, in which each team plays with 11 players in total, made up of 10 field players and a goalkeeper. Teams must move a hockey ball around a pitch by hitting it with a hockey stick towards the rival team's shooting circle and then into the goal. The match is won by the team that scores the most goals. Matches are played on grass, watered turf, artificial turf, or indoor boarded surface.
The stick is made of wood, carbon fibre, fibreglass, or a combination of carbon fibre and fibreglass in different quantities. The stick has two sides: one rounded and one flat; only the flat face of the stick is allowed to progress the ball. During play, goalkeepers are the only players allowed to touch the ball with any part of their body. A player's hand is considered part of the stick if holding the stick. If the ball is "played" with the rounded part of the stick (i.e. deliberately stopped or hit), it will result in a penalty (accidental touches are not an offence if they do not materially affect play). Goalkeepers often have a different design of stick; they also cannot play the ball with the round side of their stick.
The modern game was developed at public schools in 19th century England and it is now played globally. The governing body is the International Hockey Federation (FIH), called the Fédération Internationale de Hockey in French. Men and women are represented internationally in competitions including the Olympic Games, World Cup, FIH Pro League and Junior World Cup, and in the past also the World League and Champions Trophy. Many countries run extensive junior, senior, and masters club competitions. The FIH is also responsible for organizing the Hockey Rules Board and developing the sport's rules.
The sport is known simply as "hockey" in countries where it is the more common form of hockey. The term "field hockey" is used primarily in Canada and the United States, where "hockey" more often refers to ice hockey. In Sweden, the term landhockey is used. A popular variant is indoor field hockey, which differs in a number of respects while embodying the primary principles of hockey.
According to the International Hockey Federation (FIH), "the roots of hockey are buried deep in antiquity". There are historical records which suggest early forms of hockey were played in Egypt and Persia c. 2000 BC, and in Ethiopia c. 1000 BC. Later evidence suggests that the ancient Greeks, Romans and Aztecs all played hockey-like games. In Ancient Egypt, there is a depiction of two figures playing with sticks and a ball in the Beni Hasan tomb of Khety, an administrator of Dynasty XI.
In Ancient Greece, there is a similar image dated c. 510 BC, which may have been called Κερητίζειν (kerētízein) because it was played with a horn (κέρας, kéras in Ancient Greek) and a ball. Researchers disagree over how to interpret this image. It could have been a team or one-on-one activity (the depiction shows two active players, and other figures who may be team-mates awaiting a face-off, or non-players waiting for their turn at play). Billiards historians Stein and Rubino believe it was among the games ancestral to lawn-and-field games like hockey and ground billiards, and near-identical depictions appear in later European illuminated manuscripts and other works of the 14th through 17th centuries, showing contemporary courtly and clerical life.
In East Asia, a similar game was played, using a carved wooden stick and ball, prior to 300 BC. In Inner Mongolia, China, the Daur people have for about 1,000 years been playing beikou, a game with some similarities to field hockey. A similar field hockey or ground billiards variant, called suigan, was played in China during the Ming dynasty (1368–1644, post-dating the Mongol-led Yuan dynasty). A game similar to field hockey was played in the 17th century in Punjab state in India under the name khido khundi (khido refers to the woollen ball, and khundi to the stick). In South America, most notably in Chile, the local natives of the 16th century played a game called Chueca, which also shares common elements with hockey.
In Northern Europe, the games of hurling (Ireland) and Knattleikr (Iceland), both team ball games involving sticks to drive a ball to the opponents' goal, date at least as far back as the Early Middle Ages. By the 12th century, a team ball game called la soule or choule, akin to a chaotic and sometimes long-distance version of hockey or rugby football (depending on whether sticks were used in a particular local variant), was regularly played in France and southern Britain between villages or parishes. Throughout the Middle Ages to the Early Modern era, such games often involved the local clergy or secular aristocracy, and in some periods were limited to them by various anti-gaming edicts, or even banned altogether. Stein and Rubino, among others, ultimately trace aspects of these games both to rituals in antiquity involving orbs and sceptres (on the aristocratic and clerical side), and to ancient military training exercises (on the popular side); polo (essentially hockey on horseback) was devised by the Ancient Persians for cavalry training, based on the local proto-hockey foot game of the region.
The word hockey itself has no clear origin. One belief is that it was recorded in 1363 when Edward III of England issued the proclamation: "Moreover we ordain that you prohibit under penalty of imprisonment all and sundry from such stone, wood and iron throwing; handball, football, or hockey; coursing and cock-fighting, or other such idle games". The belief is based on modern translations of the proclamation, which was originally in Latin and explicitly forbade the games "Pilam Manualem, Pedivam, & Bacularem: & ad Canibucam & Gallorum Pugnam". It may be recalled at this point that baculum is the Latin for 'stick', so the reference would appear to be to a game played with sticks. The English historian and biographer John Strype did not use the word "hockey" when he translated the proclamation in 1720, and the word 'hockey' remains of unknown origin.
The modern game developed at public schools in 19th century England. It is now played globally, particularly in parts of Western Europe, South Asia, Southern Africa, Australia, New Zealand, Argentina, and parts of the United States, primarily New England and the mid-Atlantic states. The term "field hockey" is used primarily in Canada and the United States where "hockey" more often refers to ice hockey. In Sweden, the term landhockey is used, and to some degree in Norway, where the game is governed by Norges Bandyforbund.
The first known club was formed in 1849 at Blackheath in south-east London, but the modern rules grew out of a version played by Middlesex cricket clubs as a winter activity. Teddington Hockey Club formed the modern game by introducing the striking circle and changing the ball to a sphere from a rubber cube. The Hockey Association was founded in 1876. It lasted just six years, before being revived by nine founding members. The first international competition took place in 1895 (Ireland 3, Wales 0), and the International Rules Board was founded in 1900.
Field hockey was played at the Summer Olympics in 1908 and 1920. It was dropped in 1924, leading to the foundation of the Fédération Internationale de Hockey sur Gazon (FIH) as an international governing body by seven continental European nations; and hockey was reinstated as an Olympic game in 1928. Men's hockey united under the FIH in 1970.
The two oldest trophies are the Irish Senior Cup, which dates back to 1894, and the Irish Junior Cup, a second XI-only competition instituted in 1895.
In India, the Beighton Cup and the Aga Khan tournament commenced within ten years. Entering the Olympics in 1928, India won all five games without conceding a goal, and won from 1932 until 1956 and then in 1964 and 1980. Pakistan won Olympics gold in men's hockey in 1960, 1968 and 1984. In fact, all but two of Pakistan's 10 Olympics medals so far have been in field hockey, including three gold, three silver and two bronze medals.
In the early 1970s, artificial turf began to be used. Synthetic pitches changed most aspects of field hockey, markedly increasing the speed of the game. New tactics and techniques such as the Indian dribble developed, followed by new rules to take account of these changes. The switch to synthetic surfaces ended Indian and Pakistani domination because artificial turf was too expensive in developing countries. Since the 1970s, Australia, the Netherlands, and Germany have dominated at the Olympics and World Cup stages.
Women's field hockey was first played at British universities and schools. The first club, the Molesey Ladies, was founded in 1887. The first national association was the Irish Ladies Hockey Union in 1894, and though rebuffed by the Hockey Association, women's field hockey grew rapidly around the world. This led to the International Federation of Women's Hockey Association (IFWHA) in 1927, though this did not include many continental European countries where women played as sections of men's associations and were affiliated to the FIH. The IFWHA held conferences every three years, and tournaments associated with these were the primary IFWHA competitions. These tournaments were non-competitive until 1975.
By the early 1970s, there were 22 associations with women's sections in the FIH and 36 associations in the IFWHA. Discussions started about a common rule book. The FIH introduced competitive tournaments in 1974, forcing the acceptance of the principle of competitive field hockey by the IFWHA in 1973. It took until 1982 for the two bodies to merge, but this allowed the introduction of women's field hockey to the Olympic games from 1980 where, as in the men's game, the Netherlands, Germany, and Australia have been consistently strong. Argentina has emerged as a team to be reckoned with since 2000, winning the world championship in 2002 and 2010 and medals at the last three Olympics.
In the United States, field hockey is played predominantly by girls and women. There are few field hockey clubs, most play taking place between high school or college sides. The sport was largely introduced in the U.S. by Constance Applebee, starting with a tour of Seven Sisters colleges in 1901 and continuing through Applebee's 24-year tenure as athletic director of Bryn Mawr College. The strength of college field hockey reflects the impact of Title IX, which mandated that colleges should fund men's and women's games programmes comparably. Hockey has been predominantly played on the East Coast, specifically the Mid-Atlantic in states such as New Jersey, New York, Pennsylvania, Maryland, and Virginia. In recent years, it has become increasingly played on the West Coast and in the Midwest.
In other countries, participation is fairly evenly balanced between men and women. For example, in the 2008–09 season, England Hockey reported 2,488 registered men's teams, 1,969 women's teams, 1,042 boys' teams, 966 girls' teams and 274 mixed teams. In 2006, the Irish Hockey Association reported that the gender split among its players was approximately 65% female and 35% male. In its 2008 census, Hockey Australia reported 40,534 male club players and 41,542 female.
Most hockey field dimensions were originally fixed using whole numbers of imperial measures. Metric measurements are now the official dimensions as laid down by the International Hockey Federation (FIH) in the Rules of Hockey.
The pitch is a 91.4 m × 55 m (100.0 yd × 60.1 yd) rectangular field. At each end is a goal 2.14 m (7 ft) high and 3.66 m (12 ft) wide, as well as lines across the field 22.90 m (25 yd) from each end-line (generally referred to as the 23-metre lines or the 25-yard lines) and in the center of the field. A spot 0.15 m (6 in) in diameter, called the penalty spot or stroke mark, is placed with its centre 6.40 m (7 yd) from the centre of each goal. The shooting circle is 15 m (16 yd) from the base line.
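These metric dimensions correspond closely to the whole-number imperial measures from which they were originally derived; using the exact conversion 1 yd = 0.9144 m (a routine arithmetic check, not an additional specification):

\[
\begin{aligned}
100\ \text{yd} \times 0.9144 &= 91.44\ \text{m} \approx 91.4\ \text{m}\\
60\ \text{yd} \times 0.9144 &= 54.86\ \text{m} \approx 55\ \text{m}\\
25\ \text{yd} \times 0.9144 &= 22.86\ \text{m} \approx 22.90\ \text{m}\\
7\ \text{yd} \times 0.9144 &= 6.40\ \text{m}
\end{aligned}
\]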
Field hockey goals are made of two upright posts, joined at the top by a horizontal crossbar, with a net positioned to catch the ball when it passes through the goalposts. The goalposts and crossbar must be white and rectangular in shape, and should be 2 in (51 mm) wide and 2–3 in (51–76 mm) deep. Field hockey goals also include sideboards and a backboard, which stand 50 cm (20 in) from the ground. The backboard runs the full 3.66 m (12.0 ft) width of the goal, while the sideboards are 1.2 m (3 ft 11 in) deep.
Historically the game developed on natural grass turf. In the early 1970s, synthetic grass fields began to be used for hockey, with the first Olympic Games on this surface being held at Montreal in 1976. Synthetic pitches are now mandatory for all international tournaments and for most national competitions. While hockey is still played on traditional grass fields at some local levels and lesser national divisions, it has been replaced by synthetic surfaces almost everywhere in the western world. There are three main types of artificial hockey surface:
Since the 1970s, sand-based pitches have been favoured as they dramatically speed up the game. However, in recent years there has been a massive increase in the number of "water-based" artificial turfs. Water-based synthetic turfs enable the ball to be transferred more quickly than on sand-based surfaces. It is this characteristic that has made them the surface of choice for international and national league competitions. Water-based surfaces are also less abrasive than sand-based surfaces and reduce the level of injury to players when they come into contact with the surface. The FIH are now proposing that new surfaces being laid should be of a hybrid variety which require less watering. This is due to the negative ecological effects of the high water requirements of water-based synthetic fields. It has also been stated that the decision to make artificial surfaces mandatory greatly favoured more affluent countries who could afford these new pitches.
The game is played between two teams of eleven players; 10 field players and one goalkeeper per team are permitted to be on the pitch at any one time. The remaining players may be substituted in any combination, and there is no limit to the number of substitutions a team may make. Substitutions are permitted at any point in the game, apart from between the award and the end of a penalty corner; the one exception during a penalty corner is injury or suspension of the defending goalkeeper (this does not apply when a team is playing with only field players), otherwise a player wishing to leave the field must wait until the penalty corner is complete. Play is not stopped for a substitution (except that of a goalkeeper); the players leave and rejoin the match simultaneously at the half-way line.
Players are permitted to play the ball with the flat of the 'face side' and with the edges of the head and handle of the field hockey stick with the exception that, for reasons of safety, the ball may not be struck 'hard' with a forehand edge stroke, because of the difficulty of controlling the height and direction of the ball from that stroke.
The flat side is always on the "natural" side for a right-handed person swinging the stick at the ball from right to left. Left-handed sticks are rare, as International Hockey Federation rules forbid their use in a game. To make a strike at the ball with a left-to-right swing the player must present the flat of the 'face' of the stick to the ball by 'reversing' the stick head, i.e. by turning the handle through approximately 180° (while a reverse edge hit would turn the stick head through approximately 90° from the position of an upright forehand stroke with the 'face' of the stick head).
Edge hitting of the ball underwent a two-year "experimental period", twice the usual length of an "experimental trial" and is still a matter of some controversy within the game. Ric Charlesworth, the former Australian coach, has been a strong critic of the unrestricted use of the reverse edge hit. The 'hard' forehand edge hit was banned after similar concerns were expressed about the ability of players to direct the ball accurately, but the reverse edge hit does appear to be more predictable and controllable than its counterpart. This type of hit is now more commonly referred to as the "forehand sweep" where the ball is hit with the flat side or "natural" side of the stick and not the rounded edge.
Other rules include: no foot-to-ball contact, no use of hands, no obstructing other players, no high back swing, no hacking, and no third-party interference. If a player is dribbling the ball and either loses control and kicks the ball, or another player interferes, that player is not permitted to regain control and continue dribbling. The rules do not allow the player who kicked the ball to gain an advantage from the kick, so the ball is awarded to the opposing team. Conversely, if no advantage is gained from kicking the ball, play continues. Players may not obstruct another player's chance of hitting the ball in any way, nor may they shove or use their body or stick to prevent the other team's advance; the penalty is that the opposing team receives the ball, and if the offence continues, the offending player can be carded. While a player is taking a free hit or starting a corner, the back swing of the hit may not be too high, as this is considered dangerous. Finally, no more than two players may contest the ball at one time: two players from opposing teams may battle for the ball, but if another player interferes it is considered third-party obstruction and the ball automatically goes to the team that had only one player involved.
A match ordinarily consists of two periods of 35 minutes and a halftime interval of 5 minutes. Other periods and intervals may be agreed by both teams except as specified in the regulations for particular competitions. Since 2014, some international games have had four 15-minute quarters with a 2-minute break between quarters and a 5-minute break between quarters two and three. At the 2018 Commonwealth Games, held on the Gold Coast in Queensland, Australia, the hockey games for both men and women had four 15-minute quarters.
In December 2018, the FIH announced rule changes that would make 15-minute quarters universal from January 2019. England Hockey confirmed that while no changes would be made to the domestic game mid-season, the new rules would be implemented at the start of the 2019–20 season. However, in July 2019 England Hockey announced that 17.5-minute quarters would only be implemented in elite domestic club games.
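In terms of total playing time, the arithmetic behind these formats (a simple comparison, not a rationale stated by the governing bodies) is:

\[
2 \times 35 = 70\ \text{min}, \qquad 4 \times 15 = 60\ \text{min}, \qquad 4 \times 17.5 = 70\ \text{min}
\]

so the four-quarter international format reduces total playing time by 10 minutes, while the 17.5-minute quarters used in elite English domestic club games retain the traditional 70 minutes.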
The game begins with a pass back from the centre-forward usually to the centre-half back from the halfway line. The opposing team cannot try to tackle this play until the ball has been pushed back. The team consists of eleven players, usually aligned as follows: goalkeeper, right fullback, left fullback, three half-backs and five forwards who are right wing, right inner, centre forward, left inner and left wing. These positions can change and adapt throughout the course of the game depending on the attacking and defensive style of the opposition.
When hockey positions are discussed, notions of fluidity are very common. Each team can be fielded with a maximum of 11 players and will typically arrange themselves into forwards, midfielders, and defensive players (fullbacks) with players frequently moving between these lines with the flow of play. Each team may also play with:
As hockey has a very dynamic style of play, it is difficult to simplify positions to the static formations which are common in association football. Although positions will typically be categorised as either fullback, halfback, midfield/inner or striker, it is important for players to have an understanding of every position on the field. For example, it is not uncommon to see a halfback overlap and end up in either attacking position, with the midfield and strikers being responsible for re-adjusting to fill the space they left. Movement between lines like this is particularly common across all positions.
This fluid Australian culture of hockey has been responsible for developing an international trend towards players occupying spaces on the field, not having assigned positions. Although they may have particular spaces on the field which they are more comfortable and effective as players, they are responsible for occupying the space nearest them. This fluid approach to hockey and player movement has made it easy for teams to transition between formations such as: "3 at the back", "5 midfields", "2 at the front", and more.
When the ball is inside the circle they are defending and they have their stick in their hand, goalkeepers wearing full protective equipment are permitted to use their stick, feet, kickers or leg guards to propel the ball, and to use their stick, feet, kickers, leg guards or any other part of their body to stop the ball or deflect it in any direction, including over the back line. Similarly, field players are permitted to use their stick; they are not allowed to use their feet and legs to propel the ball, stop the ball or deflect it in any direction, including over the back line. However, neither goalkeepers nor players with goalkeeping privileges are permitted to conduct themselves in a manner which is dangerous to other players by taking advantage of the protective equipment they wear.
Neither goalkeepers nor players with goalkeeping privileges may lie on the ball; however, they are permitted to use their arms, hands and any other part of their body to push the ball away. Lying on the ball deliberately will result in a penalty stroke, whereas if an umpire deems a goalkeeper has lain on the ball accidentally (e.g. it gets stuck in their protective equipment), a penalty corner is awarded.
* The action above is permitted only as part of a goal saving action or to move the ball away from the possibility of a goal scoring action by opponents. It does not permit a goalkeeper or player with goalkeeping privileges to propel the ball forcefully with arms, hands or body so that it travels a long distance
When the ball is outside the circle they are defending, goalkeepers or players with goalkeeping privileges are only permitted to play the ball with their stick. Further, a goalkeeper, or player with goalkeeping privileges who is wearing a helmet, must not take part in the match outside the 23m area they are defending, except when taking a penalty stroke. A goalkeeper must wear protective headgear at all times, except when taking a penalty stroke.
For the purposes of the rules, all players on the team in possession of the ball are attackers, and those on the team without the ball are defenders; yet throughout the game each team is always "defending" its own goal and "attacking" the opposite goal.
The match is officiated by two field umpires. Traditionally each umpire generally controls half of the field, divided roughly diagonally. These umpires are often assisted by a technical bench including a timekeeper and record keeper.
Prior to the start of the game, a coin is tossed and the winning captain can choose a starting end or whether to start with the ball. Since 2017 the game consists of four periods of 15 minutes with a 2-minute break after every period, and a 15-minute intermission at half time before changing ends. At the start of each period, as well as after goals are scored, play is started with a pass from the centre of the field. All players must start in their defensive half (apart from the player making the pass), but the ball may be played in any direction along the floor. Each team starts with the ball in one half, and the team that conceded the goal has possession for the restart. Teams trade sides at halftime.
Field players may only play the ball with the face of the stick. If the back side of the stick is used, it is a penalty and the other team will get the ball back. Tackling is permitted as long as the tackler does not make contact with the attacker or the other person's stick before playing the ball (contact after the tackle may also be penalised if the tackle was made from a position where contact was inevitable). Further, the player with the ball may not deliberately use his body to push a defender out of the way.
Field players may not play the ball with their feet, but if the ball accidentally hits the feet, and the player gains no benefit from the contact, then the contact is not penalised. Although there has been a change in the wording of this rule from 1 January 2007, the current FIH umpires' briefing instructs umpires not to change the way they interpret this rule.
Obstruction typically occurs in three circumstances – when a defender comes between the player with possession and the ball in order to prevent them tackling; when a defender's stick comes between the attacker's stick and the ball or makes contact with the attacker's stick or body; and also when blocking the opposition's attempt to tackle a teammate with the ball (called third party obstruction).
When the ball passes completely over the sidelines (on the sideline is still in), it is returned to play with a sideline hit, taken by a member of the team whose players were not the last to touch the ball before crossing the sideline. The ball must be placed on the sideline, with the hit taken from as near the place the ball went out of play as possible. If it crosses the back line after last touched by an attacker, a 15 m (16 yd) hit is awarded. A 15 m hit is also awarded for offences committed by the attacking side within 15 m of the end of the pitch they are attacking.
Set plays are often utilised for specific situations such as a penalty corner or free hit. For instance, many teams have penalty corner variations that they can use to beat the defending team. A coach may have plays that send the ball between two defenders and let a player attack the opposing team's goal. Set plays are not prescribed by the rules; they exist only where a team has developed them.
Free hits are awarded when offences are committed outside the scoring circles (the term 'free hit' is standard usage but the ball need not be hit). The ball may be hit, pushed or lifted in any direction by the team offended against. The ball can be lifted from a free hit, but not by hitting it; a flick or scoop must be used to lift the ball from a free hit. (In previous versions of the rules, hits in the area outside the circle in open play were permitted, but lifting the ball in any direction from a free hit was prohibited.) Opponents must move 5 m (5.5 yd) from the ball when a free hit is awarded. A free hit must be taken from within playing distance of the place of the offence for which it was awarded and the ball must be stationary when the free hit is taken.
As mentioned above, a 15 m hit is awarded if an attacking player commits a foul forward of that line, or if the ball passes over the back line off an attacker. These free hits are taken in line with where the foul was committed (taking a line parallel with the sideline from where the offence was committed or where the ball went out of play). When an attacking free hit is awarded within 5 m of the circle, everyone, including the player taking the free hit, must be five metres from the circle, and everyone apart from the player taking the free hit must be five metres away from the ball. When taking an attacking free hit within the attacking 23-metre area (25-yard area), the ball may not be hit straight into the circle; it must travel at least 5 metres before going in.
In February 2009 the FIH introduced, as a "Mandatory Experiment" for international competition, an updated version of the free-hit rule. The change allows a player taking a free hit to pass the ball to themselves. Importantly, this is not a "play on" situation, but to the untrained eye it may appear to be. The player must play the ball any distance in two separate motions, before continuing as if it were a play-on situation. They may raise an aerial or overhead immediately as the second action, or use any other stroke permitted by the rules of field hockey. At high-school level, this is called a self pass and was adopted in Pennsylvania in 2010 as a legal technique for putting the ball in play.
Also, all players (from both teams) must be at least 5 m from any free hit awarded to the attack within the 23 m area. The ball may not travel directly into the circle from a free hit to the attack within the 23 m area without first being touched by another player or being dribbled at least 5 m by a player making a "self-pass". These experimental rules apply to all free-hit situations, including sideline and corner hits. National associations may also choose to introduce these rules for their domestic competitions.
A free hit from the 23-metre line – called a long corner – is awarded to the attacking team if the ball goes over the back-line after last being touched by a defender, provided they do not play it over the back-line deliberately, in which case a penalty corner is awarded. This free hit is played by the attacking team from a spot on the 23-metre line, in line with where the ball went out of play. All the parameters of an attacking free hit within the attacking quarter of the playing surface apply.
The short or penalty corner is awarded:
Short corners begin with five defenders (usually including the keeper) positioned behind the back line and the ball placed on the back line at least 10 m (11 yd) from the nearer goal post. All other players in the defending team must be beyond the centre line, that is, not in their own half of the pitch, until the ball is in play. Attacking players begin the play standing outside the scoring circle, except for one attacker who starts the corner by playing the ball from a mark 10 m either side of the goal (the circle has a 14.63 m radius). This player puts the ball into play by pushing or hitting it to the other attackers outside the circle; the ball must pass outside the circle and then be put back into it before the attackers may take a shot from which a goal can be scored. FIH rules do not forbid a shot at goal before the ball leaves the circle after being 'inserted', nor a shot from outside the circle, but no goal can be scored unless the ball has first left the circle, and a shot from outside the circle can only result in a goal if the ball is played again by an attacker inside the circle before it enters the goal.
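As an illustration of the setup conditions just described, here is a small hedged sketch that reduces them to a boolean check; the parameter names are invented for this example and the positional details are simplified.

def corner_setup_legal(defenders_behind_back_line: int,
                       other_defenders_beyond_centre_line: bool,
                       attackers_outside_circle: bool,
                       ball_distance_from_post_m: float) -> bool:
    # True if the short corner may be started under the conditions above.
    return (defenders_behind_back_line <= 5            # at most five defenders, usually incl. the keeper
            and other_defenders_beyond_centre_line     # remaining defenders in the other half
            and attackers_outside_circle               # all attackers bar the injector outside the circle
            and ball_distance_from_post_m >= 10.0)     # ball at least 10 m from the nearer post

print(corner_setup_legal(5, True, True, 10.0))   # True
print(corner_setup_legal(6, True, True, 10.0))   # False: too many defenders behind the line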
For safety reasons, the first shot of a penalty corner, if it is hit, must not be higher than 460 mm (the height of the "backboard" of the goal) at the point it crosses the goal line. However, if the ball is below backboard height, it may subsequently be deflected above this height by another player (defender or attacker), provided the deflection does not lead to danger. The "slap" stroke (a sweeping motion towards the ball in which the stick is kept on or close to the ground when striking it) is classed as a hit, so the first shot at goal must also be below backboard height for this type of shot.
If the first shot at goal in a short corner situation is a push, flick or scoop, in particular the drag flick (which has become popular at international and national league standards), the shot is permitted to rise above the height of the backboard, as long as the shot is not deemed dangerous to any opponent. This form of shooting was developed because it is not height restricted in the same way as the first hit shot at the goal and players with good technique are able to drag-flick with as much power as many others can hit a ball.
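The height restriction on the first shot can be expressed compactly. The sketch below assumes simplified stroke labels and a pre-judged "dangerous" flag; it is only an illustration of the rule as summarised above, not an official implementation.

BACKBOARD_HEIGHT_MM = 460  # height of the goal backboard

def first_shot_legal(stroke: str, height_at_goal_line_mm: float, dangerous: bool) -> bool:
    # 'hit' and 'slap' are height-restricted; push, flick and scoop
    # (including the drag flick) may rise higher if not dangerous.
    if dangerous:
        return False                      # dangerous play is never permitted
    if stroke in ("hit", "slap"):
        return height_at_goal_line_mm <= BACKBOARD_HEIGHT_MM
    return True

print(first_shot_legal("hit", 500, False))         # False: hit above backboard height
print(first_shot_legal("drag flick", 900, False))  # True: flick may rise above it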
A penalty stroke is awarded when a defender commits a foul in the circle (accidental or otherwise) that prevents a probable goal or commits a deliberate foul in the circle or if defenders repeatedly run from the back line too early at a penalty corner. The penalty stroke is taken by a single attacker in the circle, against the goalkeeper, from a spot 6.4 m from goal. The ball is played only once at goal by the attacker using a push, flick or scoop stroke. If the shot is saved, play is restarted with a 15 m hit to the defenders. When a goal is scored, play is restarted in the normal way.
According to the Rules of Hockey 2015 issued by the FIH there are only two criteria for a dangerously played ball. The first is legitimate evasive action by an opponent (what constitutes legitimate evasive action is an umpiring judgment). The second is specific to the rule concerning a shot at goal at a penalty corner but is generally, if somewhat inconsistently, applied throughout the game and in all parts of the pitch: a ball lifted above knee height and at an opponent who is within 5 m of the ball is certainly dangerous.
The velocity of the ball is not mentioned in the rules concerning a dangerously played ball. A ball that hits a player above the knee may on some occasions not be penalised; this is at the umpire's discretion. A jab tackle, for example, might accidentally lift the ball above knee height into an opponent from close range, but at such low velocity as not to be, in the opinion of the umpire, dangerous play. Conversely, a high-velocity hit at very close range into an opponent, but below knee height, could be considered dangerous or reckless play in the view of the umpire, especially when safer alternatives were open to the striker of the ball.
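The two written criteria can be captured in a toy check like the one below; real decisions also depend on umpire judgment (for example, low velocity at close range), which this sketch deliberately does not model. All names here are illustrative.

def dangerously_played(legitimate_evasive_action: bool,
                       above_knee_height: bool,
                       opponent_within_5m: bool) -> bool:
    if legitimate_evasive_action:
        return True                                      # first criterion: evasive action
    return above_knee_height and opponent_within_5m      # second criterion

print(dangerously_played(False, True, True))    # True
print(dangerously_played(False, True, False))   # False: no opponent within 5 m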
A ball that has been lifted high so that it will fall among close opponents may be deemed potentially dangerous, and play may be stopped for that reason. A lifted ball that is falling to a player in clear space may be made potentially dangerous by the actions of an opponent closing to within 5 m of the receiver before the ball has been controlled to ground. This rule is often only loosely applied: the distance allowed is frequently no more than playing distance, 2–3 m, and opponents tend to be permitted to close on the ball as soon as the receiver plays it. These unofficial variations are often based on the umpire's perception of the skill of the players, that is, on the level of the game, in order to maintain game flow, which umpires are instructed in both the Rules and the umpires' briefing to do by not penalising when it is unnecessary; this, too, is at the umpire's discretion.
The term "falling ball" is important in what may be termed encroaching offences. It is generally only considered an offence to encroach on an opponent receiving a lifted ball that has been lifted to above head height (although the height is not specified in rule) and is falling. So, for example, a lifted shot at the goal which is still rising as it crosses the goal line (or would have been rising as it crossed the goal line) can be legitimately followed up by any of the attacking team looking for a rebound.
In general even potentially dangerous play is not penalised if an opponent is not disadvantaged by it or, obviously, not injured by it so that he cannot continue. A personal penalty, that is a caution or a suspension, rather than a team penalty, such as a free ball or a penalty corner, may be (many would say should be or even must be, but again this is at the umpire's discretion) issued to the guilty party after an advantage allowed by the umpire has been played out in any situation where an offence has occurred, including dangerous play (but once advantage has been allowed the umpire cannot then call play back and award a team penalty).
It is not an offence to lift the ball over an opponent's stick (or body on the ground), provided that it is done with consideration for the safety of the opponent and not dangerously. For example, a skilful attacker may lift the ball over a defender's stick or prone body and run past them; however, if the attacker lifts the ball into or at the defender's body, this would almost certainly be regarded as dangerous.
It is not against the rules to bounce the ball on the stick and even to run with it while doing so, as long as that does not lead to a potentially dangerous conflict with an opponent who is attempting to make a tackle. For example, two players trying to play the ball in the air at the same time would probably be considered a dangerous situation, and it is likely that the player who first put the ball up, or who was 'carrying' it in this way, would be penalised.
Dangerous play rules also apply to the use of the stick when approaching the ball, making a stroke at it, or attempting a tackle (fouls relating to tripping, impeding and obstruction). These replaced what was at one time referred to as the "sticks" rule, which forbade raising any part of the stick above the shoulder during play; that restriction has been removed, but the stick must still not be used in a way that endangers an opponent. The use of the stick to strike an opponent will usually be dealt with much more severely by the umpires than offences such as barging, impeding and obstruction with the body, although these are also dealt with firmly, especially when intentional.
Hockey uses a three-tier penalty card system of warnings and suspensions: a green card (an official warning, which in some competitions now carries a short suspension), a yellow card (a temporary suspension) and a red card (exclusion for the remainder of the match).
If a coach is sent off, depending on local rules, a player may have to leave the field for the remainder of the match.
In addition to their colours, field hockey penalty cards are often shaped differently, so they can be recognised easily. Green cards are normally triangular, yellow cards rectangular and red cards circular.
Unlike in association football, a player may receive more than one green or yellow card. However, they cannot receive the same card for the same offence (for example, two yellows for dangerous play); the second must always be a more serious card. In the case of a second yellow card for a different breach of the rules (for example, a yellow for deliberate foot, and a second later in the game for dangerous play), the temporary suspension would be expected to be of considerably longer duration than the first. However, local playing conditions may mandate that cards are awarded only progressively, and not allow any second awards.
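The escalation principle, that a repeat card for the same offence must be more serious, can be sketched as follows. The card ordering and the helper function below are assumptions made purely for illustration; in practice this is tracked by the technical bench rather than by software.

ORDER = {"green": 0, "yellow": 1, "red": 2}
ESCALATE = {"green": "yellow", "yellow": "red", "red": "red"}

def next_card(previous_cards: list, proposed_card: str, offence: str) -> str:
    # Return the card that should actually be shown, escalating if the player
    # has already received the same (or a more serious) card for this offence.
    for card, past_offence in previous_cards:
        if past_offence == offence and ORDER[proposed_card] <= ORDER[card]:
            return ESCALATE[card]
    return proposed_card

history = [("yellow", "dangerous play")]
print(next_card(history, "yellow", "dangerous play"))    # 'red'
print(next_card(history, "yellow", "deliberate foot"))   # 'yellow'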
If a free hit would have been awarded in the attacking 23 m area, umpires may upgrade it to a penalty corner for dissent or other misconduct committed after the free hit has been awarded.
Each team's objective is to play the ball into its attacking circle and, from there, hit, push or flick the ball into the goal, scoring a goal. The team with more goals after 60 minutes wins the game. The playing time may be shortened, particularly when younger players are involved, or for some tournament play. If the game is played with a countdown clock, as in ice hockey, a goal can only count if the ball completely crosses the goal line and enters the goal before time expires, not merely if it leaves the stick in the act of shooting before time expires.
If the score is tied at the end of the game, either a draw is declared or the game goes into extra time or a penalty shoot-out, depending on the format of the competition. In many competitions (such as regular club competition, or pool games in FIH international tournaments such as the Olympics or the World Cup), a tied result stands and the overall competition standings are adjusted accordingly. Since March 2013, when a tie must be broken in a classification match, the official FIH Tournament Regulations dispense with extra time and go directly to a penalty shoot-out. However, many associations still follow the previous procedure of two 7.5-minute periods of "golden goal" extra time, during which the game ends as soon as one team scores.
There are many variations to overtime play that depend on the league or tournament rules. In American college play, a seven-a-side overtime period consists of a 10-minute golden goal period with seven players for each team. If the scores remain equal, the game enters a one-on-one competition where each team chooses five players to dribble from the 25-yard (23 m) line down to the circle against the opposing goalkeeper. The player has eight seconds to score against the goalkeeper while keeping the ball in bounds. The game ends after a goal is scored, the ball goes out of bounds, a foul is committed (ending in either a penalty stroke or flick or the end of the one-on-one) or time expires. If the tie still persists, more rounds are played until one team has scored.
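Since tie-breaking varies by competition, the flow described in the last two paragraphs can be summarised in a short sketch. The function and its flags are hypothetical; the shoot-out (or, in older procedures, golden-goal extra time) is abstracted into a single callable for brevity.

def resolve_tie(score_a: int, score_b: int, classification_match: bool, shootout) -> str:
    if score_a != score_b:
        return "A" if score_a > score_b else "B"
    if not classification_match:
        return "draw"          # pool or regular club result stands
    # Post-2013 FIH tournament regulations: straight to a penalty shoot-out.
    # (Associations using the older procedure would insert golden-goal
    # extra time here before falling through to the shoot-out.)
    return shootout()          # returns "A" or "B"

print(resolve_tie(2, 2, False, lambda: "A"))   # 'draw'
print(resolve_tie(2, 2, True, lambda: "B"))    # 'B'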
The FIH implemented a two-year rules cycle with the 2007–08 edition of the rules, with the intention that the rules be reviewed on a biennial basis. The 2009 rulebook was officially released in early March 2009 (effective 1 May 2009), however the FIH published the major changes in February. The current rule book is effective from 1 January 2022.
There are sometimes minor variations in rules from competition to competition; for instance, the duration of matches is often varied for junior competitions or for carnivals. Different national associations also have slightly differing rules on player equipment.
The new Euro Hockey League and the Olympics have made major alterations to the rules to aid television viewers, such as splitting the game into four quarters, and to try to improve player behavior, such as a two-minute suspension for green cards; the latter was also used in the 2010 World Cup and 2016 Olympics. In the United States, the NCAA has its own rules for inter-collegiate competitions; high school associations similarly play to different rules, usually using the rules published by the National Federation of State High School Associations (NFHS). This article assumes FIH rules unless otherwise stated. USA Field Hockey produces an annual summary of the differences.
In the United States, the games at the junior high level consist of four 12-minute periods, while the high-school level consists of four 15-minute periods. Many private American schools play 12-minute quarters, and some have adopted FIH rules rather than NFHS rules.
Players are required to wear mouth guards and shin guards in order to play the game, and a newer rule requires that certain types of sticks be used. In recent years, the NFHS rules have moved closer to FIH rules, but in 2011 a new rule requiring protective eyewear was introduced for the 2011 fall season. A further clarification of the NFHS eyewear rule states that, "effective 1 January 2019, all eye protection shall be permanently labeled with the current ASTM 2713 standard for field hockey". Metal 'cage style' goggles favored in US high school lacrosse, and permitted in high school field hockey, are prohibited under FIH rules.
Each player carries a hockey stick that normally measures between 80 and 95 cm (31 and 37 in); shorter or longer sticks are available. The length of the stick is based on the player's individual height: the top of the stick usually comes to the player's hip, and taller players typically have longer sticks. Goalkeepers can use either a specialised stick, or an ordinary field hockey stick. The specific goal-keeping sticks have another curve at the end of the stick, to give it more surface area to block the ball.
Sticks were traditionally made of wood, but are now often made with fibreglass, kevlar or carbon fibre composites. Metal is forbidden in field hockey sticks, due to the risk of injury from sharp edges if the stick were to break. The stick has a rounded handle, a J-shaped hook at the bottom, and is flattened on the left side (when looking down the handle with the hook facing upwards). All sticks must be right-handed; left-handed ones are prohibited.
There was traditionally a slight curve (called the bow, or rake) from the top to bottom of the face side of the stick and another on the 'heel' edge to the top of the handle (usually made according to the angle at which the handle part was inserted into the splice of the head part of the stick), which assisted in the positioning of the stick head in relation to the ball and made striking the ball easier and more accurate.
The tight curve (Indian style) of the hook at the bottom of the stick is a relatively recent development. The older 'English' sticks had a longer bend, which made it very hard to use the stick on the reverse side; for this reason players now use the tightly curved sticks.
The handle makes up about the top third of the stick. It is wrapped in a grip similar to that used on a tennis racket. The grip may be made of a variety of materials, including chamois leather, which improves grip in the wet and gives the stick a softer touch and different weighting if wrapped over a pre-existing grip.
It was recently discovered that increasing the depth of the face bow made it easier to get high speeds from the dragflick and made the stroke easier to execute. At first, after this feature was introduced, the Hockey Rules Board placed a limit of 50 mm on the maximum depth of bow over the length of the stick but experience quickly demonstrated this to be excessive. New rules now limit this curve to under 25 mm so as to limit the power with which the ball can be flicked.
Standard field hockey balls are hard spherical balls, made of solid plastic (sometimes over a cork core), and are usually white, although they can be any colour as long as they contrast with the playing surface. The balls have a diameter of 71.3–74.8 mm (2.81–2.94 in) and a mass of 156–163 g (5.5–5.7 oz). The ball is often covered with indentations to reduce aquaplaning that can cause an inconsistent ball speed on wet surfaces.
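The numerical limits quoted in the last two paragraphs, the 25 mm bow restriction and the ball's permitted size and mass ranges, can be checked mechanically. The following sketch simply restates those figures; it is not an official specification checker and the function names are invented for this example.

def stick_bow_legal(bow_depth_mm: float) -> bool:
    return bow_depth_mm < 25.0             # bow limited to under 25 mm

def ball_legal(diameter_mm: float, mass_g: float) -> bool:
    return 71.3 <= diameter_mm <= 74.8 and 156.0 <= mass_g <= 163.0

print(stick_bow_legal(24.0))       # True
print(ball_legal(73.0, 160.0))     # True
print(ball_legal(73.0, 170.0))     # False: too heavy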
The 2007 rulebook saw major changes regarding goalkeepers. A fully equipped goalkeeper must wear a helmet, leg guards and kickers, and, like all players, they must carry a stick. Goalkeepers may use either a field player's stick or a specialised goalkeeping stick, provided the stick is of legal dimensions. Usually field hockey goalkeepers also wear extensive additional protective equipment, including chest guards, padded shorts, heavily padded hand protectors, groin protectors, neck protectors and arm guards. A goalkeeper may not cross the 23 m line, the sole exception being when the goalkeeper takes a penalty stroke at the other end of the field, while the clock is stopped; the goalkeeper may also remove their helmet for this action. While goalkeepers are allowed to use their feet and hands to clear the ball, like field players they may only use the flat side of their stick. Slide tackling is permitted as long as it is done with the intention of clearing the ball, not aimed at a player.
It is now also possible for teams to field a full eleven outfield players and no goalkeeper at all. In that case no player may wear a helmet or other goalkeeping equipment, and no player may play the ball with any part of the body other than the stick. This may be used to gain a tactical advantage, for example if a team is trailing with only a short time to play, or to allow play to commence if no goalkeeper or kit is available.
The basic tactic in field hockey, as in association football and many other team games, is to outnumber the opponent in a particular area of the field at a moment in time. When in possession of the ball this temporary numerical superiority can be used to pass the ball around opponents so that they cannot effect a tackle because they cannot get within playing reach of the ball and to further use this numerical advantage to gain time and create clear space for making scoring shots on the opponent's goal. When not in possession of the ball numerical superiority is used to isolate and channel an opponent in possession and 'mark out' any passing options so that an interception or a tackle may be made to gain possession. Highly skillful players can sometimes get the better of more than one opponent and retain the ball and successfully pass or shoot but this tends to use more energy than quick early passing.
Every player has a role depending on their relationship to the ball, and the team communicates these roles throughout play: there are players on the ball (offensively, the ball carrier; defensively, the player applying pressure), as well as support players and movement players.
The main methods by which the ball is moved around the field are (a) passing, (b) pushing the ball and running with it controlled to the front or right of the body, and (c) "dribbling", where the player controls the ball with the stick and moves in various directions with it to elude opponents. To make a pass, the ball may be propelled with a pushing stroke, where the player uses their wrists to push the stick head through the ball while the stick head is in contact with it; with the "flick" or "scoop", similar to the push but with additional arm, leg and rotational actions to lift the ball off the ground; or with the "hit", where a swing at the ball is taken and contact is often made very forcefully, propelling the ball at velocities in excess of 70 mph (110 km/h). In order to produce a powerful hit, usually for travel over long distances or shooting at the goal, the stick is raised higher and swung with maximum power at the ball, a stroke sometimes known as a "drive".
Tackles are made by placing the stick into the path of the ball or playing the stick head or shaft directly at the ball. To increase the effectiveness of the tackle, players will often place the entire stick close to the ground horizontally, thus representing a wider barrier. To avoid the tackle, the ball carrier will either pass the ball to a teammate using any of the push, flick, or hit strokes, or attempt to maneuver or "drag" the ball around the tackle, trying to deceive the tackler.
In recent years, the penalty corner has gained importance as a goal-scoring opportunity, particularly with the technical development of the drag flick. Tactics at penalty corners to set up a shot with a drag flick or a hit involve various complex plays, including multiple passes before a deflection towards the goal is made, but the most common method of shooting remains the direct flick or hit at the goal.
At the highest level, field hockey is a fast moving, highly skilled game, with players using fast moves with the stick, quick accurate passing, and hard hits, in attempts to keep possession and move the ball towards the goal. Tackling with physical contact and otherwise physically obstructing players is not permitted. Some of the tactics used resemble football (soccer), but with greater ball speed.
With the 2009 changes to the rules regarding free hits in the attacking 23 m area, the common tactic of hitting the ball hard into the circle was forbidden. Although at higher levels this was considered tactically risky and low-percentage at creating scoring opportunities, it was used with some effect to 'win' penalty corners by forcing the ball onto a defender's foot or to deflect high (and dangerously) off a defender's stick. The FIH felt it was a dangerous practice that could easily lead to raised deflections and injuries in the circle, which is often crowded at a free-hit situation, and outlawed it.
The biggest two field hockey tournaments are the Olympic Games tournament, and the Hockey World Cup, which is also held every four years. Apart from this, there is the men's and women's Pro League held each year for the nine top-ranked teams.
Of the men's teams, Pakistan has won the Hockey World Cup four times, more than any other side. India has won the Olympic hockey tournament eight times, including six successive Olympiads. Of the women's teams, the Netherlands has won the Hockey World Cup the most times, with six titles. At the Olympics, Australia and the Netherlands have each won three Olympic tournaments.
India and Pakistan dominated men's hockey until the early 1980s, winning eight Olympic golds and three of the first five world cups, respectively, but have become less prominent with the ascendancy of Belgium, the Netherlands, Germany, New Zealand, Australia, and Spain since the late 1980s, as grass playing surfaces were replaced with artificial turf. Other notable men's nations include Argentina, England (who combine with other British "Home Nations" to form the Great Britain side at Olympic events) and South Korea.
The Netherlands, Australia and Argentina are the most successful national teams among women. The Netherlands was the predominant women's team before field hockey was added to Olympic events. In the early 1990s, Australia emerged as the strongest women's country, though the retirement of a number of players has weakened the team somewhat. Argentina improved its play in the 2000s, heading the FIH rankings in 2003, 2010 and 2013. Other prominent women's teams are Germany, Great Britain, China, South Korea and India. Four nations have won Olympic gold medals in both men's and women's hockey: Germany, the Netherlands, Australia and Great Britain.
As of January 2022 the Australian men's team and the Dutch women's teams lead the FIH world rankings.
In recent years, Belgium has emerged as a leading nation, with a World Champions title (2018), a European Champions title (2019), an Olympic silver medal (2016) followed by an Olympic title (2021), and the lead in the FIH men's team world ranking.
This is a list of the major international field hockey tournaments, in chronological order. Tournaments included are:
Defunct tournaments:
Other international tournaments include:
A popular variant of field hockey is indoor hockey, which is 6-a-side (5-a-side during 2014–2015) using a field which is reduced to approximately 40 m × 20 m (131 ft × 66 ft). Although many of the rules remain the same, including obstruction and feet, there are several key variations: players may not raise the ball unless shooting at goal, players may not hit the ball, instead using pushes to transfer it, and the sidelines are replaced with solid barriers, from which the ball will rebound and remain in play. In addition, the regulation guidelines for the indoor field hockey stick require a slightly thinner, lighter stick than an outdoor one.
As the name suggests, Hockey5s is a hockey variant which features five players on each team (including a goalkeeper). The field of play is 55 m long and 41.70 m wide—this is approximately half the size of a regular pitch. Few additional markings are needed as there is no penalty circle nor penalty corners; shots can be taken from anywhere on the pitch. Penalty strokes are replaced by a "challenge" which is like the one-on-one method used in a penalty shoot-out. The duration of the match is three 12-minute periods with an interval of two minutes between periods; golden goal periods are multiple 5-minute periods. The rules are simpler and it is intended that the game is faster, creating more shots on goal with less play in midfield, and more attractive to spectators.
An Asian qualification tournament for two places at the 2014 Youth Olympic Games was the first time an FIH event used the Hockey5s format. Hockey5s was also used for the Youth Olympic hockey tournament, the Pacific Games in 2015 and the African Youth Games in 2018.
In 2022, the FIH staged its first senior international Hockey5s event, with a men's and women's event being held in Lausanne. The FIH Men's Hockey5s World Cup and FIH Women's Hockey5s World Cup are set to debut in 2024.
| [
{
"paragraph_id": 0,
"text": "Field hockey (or simply hockey) is a team sport structured in standard hockey format, in which each team plays with 11 players in total, made up of 10 field players and a goalkeeper. Teams must move a hockey ball around a pitch by hitting it with a hockey stick towards the rival team's shooting circle and then into the goal. The match is won by the team that scores the most goals. Matches are played on grass, watered turf, artificial turf, or indoor boarded surface.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The stick is made of wood, carbon fibre, fibreglass and carbon, or a combination of carbon fibre and fibreglass in different quantities. The stick has two sides; one rounded and one flat; only the flat face of the stick is allowed to progress the ball. During play, goalkeepers are the only players allowed to touch the ball with any part of their body. A player's hand is considered part of the stick if holding the stick. If the ball is \"played\" with the rounded part of the stick (i.e. deliberately stopped or hit), it will result in a penalty (accidental touches are not an offence if they do not materially affect play). Goalkeepers often have a different design of stick; they also cannot play the ball with the round side of their stick.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The modern game was developed at public schools in 19th century England and it is now played globally. The governing body is the International Hockey Federation (FIH), called the Fédération Internationale de Hockey in French. Men and women are represented internationally in competitions including the Olympic Games, World Cup, FIH Pro League, Junior World Cup and in past also World League, Champions Trophy. Many countries run extensive junior, senior, and masters club competitions. The FIH is also responsible for organizing the Hockey Rules Board and developing the sport's rules.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The sport is known simply as \"hockey\" in countries where it is the more common form of hockey. The term \"field hockey\" is used primarily in Canada and the United States, where \"hockey\" more often refers to ice hockey. In Sweden, the term landhockey is used. A popular variant is indoor field hockey, which differs in a number of respects while embodying the primary principles of hockey.",
"title": ""
},
{
"paragraph_id": 4,
"text": "According to the International Hockey Federation (FIH), \"the roots of hockey are buried deep in antiquity\". There are historical records which suggest early forms of hockey were played in Egypt and Persia c. 2000 BC, and in Ethiopia c. 1000 BC. Later evidence suggest that the ancient Greeks, Romans and Aztecs all played hockey-like games. In Ancient Egypt, there is a depiction of two figures playing with sticks and ball in the Beni Hasan tomb of Khety, an administrator of Dynasty XI.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In Ancient Greece, there is a similar image dated c. 510 BC, which may have been called Κερητίζειν (kerētízein) because it was played with a horn (κέρας, kéras in Ancient Greek) and a ball. Researchers disagree over how to interpret this image. It could have been a team or one-on-one activity (the depiction shows two active players, and other figures who may be team-mates awaiting a face-off, or non-players waiting for their turn at play). Billiards historians Stein and Rubino believe it was among the games ancestral to lawn-and-field games like hockey and ground billiards, and near-identical depictions appear in later European illuminated manuscripts and other works of the 14th through 17th centuries, showing contemporary courtly and clerical life.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In East Asia, a similar game was entertained, using a carved wooden stick and ball, prior to 300 BC. In Inner Mongolia, China, the Daur people have for about 1,000 years been playing beikou, a game with some similarities to field hockey. A similar field hockey or ground billiards variant, called suigan, was played in China during the Ming dynasty (1368–1644, post-dating the Mongol-led Yuan dynasty). A game similar to field hockey was played in the 17th century in Punjab state in India under name khido khundi (khido refers to the woolen ball, and khundi to the stick). In South America, most specifically in Chile, the local natives of the 16th century used to play a game called Chueca, which also shares common elements with hockey.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In Northern Europe, the games of hurling (Ireland) and Knattleikr (Iceland), both team ball games involving sticks to drive a ball to the opponents' goal, date at least as far back as the Early Middle Ages. By the 12th century, a team ball game called la soule or choule, akin to a chaotic and sometimes long-distance version of hockey or rugby football (depending on whether sticks were used in a particular local variant), was regularly played in France and southern Britain between villages or parishes. Throughout the Middle Ages to the Early Modern era, such games often involved the local clergy or secular aristocracy, and in some periods were limited to them by various anti-gaming edicts, or even banned altogether. Stein and Rubino, among others, ultimately trace aspects of these games both to rituals in antiquity involving orbs and sceptres (on the aristocratic and clerical side), and to ancient military training exercises (on the popular side); polo (essentially hockey on horseback) was devised by the Ancient Persians for cavalry training, based on the local proto-hockey foot game of the region.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The word hockey itself has no clear origin. One belief is that it was recorded in 1363 when Edward III of England issued the proclamation: \"Moreover we ordain that you prohibit under penalty of imprisonment all and sundry from such stone, wood and iron throwing; handball, football, or hockey; coursing and cock-fighting, or other such idle games\". The belief is based on modern translations of the proclamation, which was originally in Latin and explicitly forbade the games \"Pilam Manualem, Pedivam, & Bacularem: & ad Canibucam & Gallorum Pugnam\". It may be recalled at this point that baculum is the Latin for 'stick', so the reference would appear to be to a game played with sticks. The English historian and biographer John Strype did not use the word \"hockey\" when he translated the proclamation in 1720, and the word 'hockey' remains of unknown origin.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The modern game developed at public schools in 19th century England. It is now played globally, particularly in parts of Western Europe, South Asia, Southern Africa, Australia, New Zealand, Argentina, and parts of the United States, primarily New England and the mid-Atlantic states. The term \"field hockey\" is used primarily in Canada and the United States where \"hockey\" more often refers to ice hockey. In Sweden, the term landhockey is used, and to some degree in Norway, where the game is governed by Norges Bandyforbund.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The first known club was formed in 1849 at Blackheath in south-east London, but the modern rules grew out of a version played by Middlesex cricket clubs as a winter activity. Teddington Hockey Club formed the modern game by introducing the striking circle and changing the ball to a sphere from a rubber cube. The Hockey Association was founded in 1876. It lasted just six years, before being revived by nine founding members. The first international competition took place in 1895 (Ireland 3, Wales 0), and the International Rules Board was founded in 1900.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Field hockey was played at the Summer Olympics in 1908 and 1920. It was dropped in 1924, leading to the foundation of the Fédération Internationale de Hockey sur Gazon (FIH) as an international governing body by seven continental European nations; and hockey was reinstated as an Olympic game in 1928. Men's hockey united under the FIH in 1970.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The two oldest trophies are the Irish Senior Cup, which dates back to 1894, and the Irish Junior Cup, a second XI-only competition instituted in 1895.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In India, the Beighton Cup and the Aga Khan tournament commenced within ten years. Entering the Olympics in 1928, India won all five games without conceding a goal, and won from 1932 until 1956 and then in 1964 and 1980. Pakistan won Olympics gold in men's hockey in 1960, 1968 and 1984. In fact, all but two of Pakistan's 10 Olympics medals so far have been in field hockey, including three gold, three silver and two bronze medals.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In the early 1970s, artificial turf began to be used. Synthetic pitches changed most aspects of field hockey, gaining speed. New tactics and techniques such as the Indian dribble developed, followed by new rules to take account. The switch to synthetic surfaces ended Indian and Pakistani domination because artificial turf was too expensive in developing countries. Since the 1970s, Australia, the Netherlands, and Germany have dominated at the Olympics and World Cup stages.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Women's field hockey was first played at British universities and schools. The first club, the Molesey Ladies, was founded in 1887. The first national association was the Irish Ladies Hockey Union in 1894, and though rebuffed by the Hockey Association, women's field hockey grew rapidly around the world. This led to the International Federation of Women's Hockey Association (IFWHA) in 1927, though this did not include many continental European countries where women played as sections of men's associations and were affiliated to the FIH. The IFWHA held conferences every three years, and tournaments associated with these were the primary IFWHA competitions. These tournaments were non-competitive until 1975.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "By the early 1970s, there were 22 associations with women's sections in the FIH and 36 associations in the IFWHA. Discussions started about a common rule book. The FIH introduced competitive tournaments in 1974, forcing the acceptance of the principle of competitive field hockey by the IFWHA in 1973. It took until 1982 for the two bodies to merge, but this allowed the introduction of women's field hockey to the Olympic games from 1980 where, as in the men's game, the Netherlands, Germany, and Australia have been consistently strong. Argentina has emerged as a team to be reckoned with since 2000, winning the world championship in 2002 and 2010 and medals at the last three Olympics.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In the United States, field hockey is played predominantly by girls and women. There are few field hockey clubs, most play taking place between high school or college sides. The sport was largely introduced in the U.S. by Constance Applebee, starting with a tour of Seven Sisters colleges in 1901 and continuing through Applebee's 24-year tenure as athletic director of Bryn Mawr College. The strength of college field hockey reflects the impact of Title IX, which mandated that colleges should fund men's and women's games programmes comparably. Hockey has been predominantly played on the East Coast, specifically the Mid-Atlantic in states such as New Jersey, New York, Pennsylvania, Maryland, and Virginia. In recent years, it has become increasingly played on the West Coast and in the Midwest.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In other countries, participation is fairly evenly balanced between men and women. For example, in the 2008–09 season, England Hockey reported 2,488 registered men's teams, 1,969 women's teams, 1,042 boys' teams, 966 girls' teams and 274 mixed teams. In 2006, the Irish Hockey Association reported that the gender split among its players was approximately 65% female and 35% male. In its 2008 census, Hockey Australia reported 40,534 male club players and 41,542 female.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Most hockey field dimensions were originally fixed using whole numbers of imperial measures. Metric measurements are now the official dimensions as laid down by the International Hockey Federation (FIH) in the Rules of Hockey.",
"title": "Field of play"
},
{
"paragraph_id": 20,
"text": "The pitch is a 91.4 m × 55 m (100.0 yd × 60.1 yd) rectangular field. At each end is a goal 2.14 m (7 ft) high and 3.66 m (12 ft) wide, as well as lines across the field 22.90 m (25 yd) from each end-line (generally referred to as the 23-metre lines or the 25-yard lines) and in the center of the field. A spot 0.15 m (6 in) in diameter, called the penalty spot or stroke mark, is placed with its centre 6.40 m (7 yd) from the centre of each goal. The shooting circle is 15 m (16 yd) from the base line.",
"title": "Field of play"
},
{
"paragraph_id": 21,
"text": "Field hockey goals are made of two upright posts, joined at the top by a horizontal crossbar, with a net positioned to catch the ball when it passes through the goalposts. The goalposts and crossbar must be white and rectangular in shape, and should be 2 in (51 mm) wide and 2–3 in (51–76 mm) deep. Field hockey goals also include sideboards and a backboard, which stand 50 cm (20 in) from the ground. The backboard runs the full 3.66 m (12.0 ft) width of the goal, while the sideboards are 1.2 m (3 ft 11 in) deep.",
"title": "Field of play"
},
{
"paragraph_id": 22,
"text": "Historically the game developed on natural grass turf. In the early 1970s, synthetic grass fields began to be used for hockey, with the first Olympic Games on this surface being held at Montreal in 1976. Synthetic pitches are now mandatory for all international tournaments and for most national competitions. While hockey is still played on traditional grass fields at some local levels and lesser national divisions, it has been replaced by synthetic surfaces almost everywhere in the western world. There are three main types of artificial hockey surface:",
"title": "Field of play"
},
{
"paragraph_id": 23,
"text": "Since the 1970s, sand-based pitches have been favoured as they dramatically speed up the game. However, in recent years there has been a massive increase in the number of \"water-based\" artificial turfs. Water-based synthetic turfs enable the ball to be transferred more quickly than on sand-based surfaces. It is this characteristic that has made them the surface of choice for international and national league competitions. Water-based surfaces are also less abrasive than sand-based surfaces and reduce the level of injury to players when they come into contact with the surface. The FIH are now proposing that new surfaces being laid should be of a hybrid variety which require less watering. This is due to the negative ecological effects of the high water requirements of water-based synthetic fields. It has also been stated that the decision to make artificial surfaces mandatory greatly favoured more affluent countries who could afford these new pitches.",
"title": "Field of play"
},
{
"paragraph_id": 24,
"text": "The game is played between two teams of eleven, 10 field players and one goal keeper, are permitted to be on the pitch at any one time. The remaining players may be substituted in any combination. There is an unlimited number of times a team can sub in and out. Substitutions are permitted at any point in the game, apart from between the award and end of a penalty corner; two exceptions to this rule is for injury or suspension of the defending goalkeeper, which is not allowed when playing with a field keep, or a player can exit the field, but you must wait until after the penalty corner is complete. Play is not stopped for a substitution (except of a goalkeeper), the players leave and rejoin the match simultaneously at the half-way line.",
"title": "Rules and play"
},
{
"paragraph_id": 25,
"text": "Players are permitted to play the ball with the flat of the 'face side' and with the edges of the head and handle of the field hockey stick with the exception that, for reasons of safety, the ball may not be struck 'hard' with a forehand edge stroke, because of the difficulty of controlling the height and direction of the ball from that stroke.",
"title": "Rules and play"
},
{
"paragraph_id": 26,
"text": "The flat side is always on the \"natural\" side for a right-handed person swinging the stick at the ball from right to left. Left-handed sticks are rare, as International Hockey Federation rules forbid their use in a game. To make a strike at the ball with a left-to-right swing the player must present the flat of the 'face' of the stick to the ball by 'reversing' the stick head, i.e. by turning the handle through approximately 180° (while a reverse edge hit would turn the stick head through approximately 90° from the position of an upright forehand stroke with the 'face' of the stick head).",
"title": "Rules and play"
},
{
"paragraph_id": 27,
"text": "Edge hitting of the ball underwent a two-year \"experimental period\", twice the usual length of an \"experimental trial\" and is still a matter of some controversy within the game. Ric Charlesworth, the former Australian coach, has been a strong critic of the unrestricted use of the reverse edge hit. The 'hard' forehand edge hit was banned after similar concerns were expressed about the ability of players to direct the ball accurately, but the reverse edge hit does appear to be more predictable and controllable than its counterpart. This type of hit is now more commonly referred to as the \"forehand sweep\" where the ball is hit with the flat side or \"natural\" side of the stick and not the rounded edge.",
"title": "Rules and play"
},
{
"paragraph_id": 28,
"text": "Other rules include; no foot-to-ball contact, no use of hands, no obstructing other players, no high back swing, no hacking, and no third party. If a player is dribbling the ball and either loses control and kicks the ball or another player interferes that player is not permitted to gain control and continue dribbling. The rules do not allow the person who kicked the ball to gain advantage from the kick, so the ball will automatically be passed on to the opposing team. Conversely, if no advantage is gained from kicking the ball, play should continue. Players may not obstruct another's chance of hitting the ball in any way. No shoving/using your body/stick to prevent advancement in the other team. Penalty for this is the opposing team receives the ball and if the problem continues, the player can be carded. While a player is taking a free hit or starting a corner the back swing of their hit cannot be too high for this is considered dangerous. Finally there may not be three players touching the ball at one time. Two players from opposing teams can battle for the ball, however if another player interferes it is considered third party and the ball automatically goes to the team who only had one player involved in the third party.",
"title": "Rules and play"
},
{
"paragraph_id": 29,
"text": "A match ordinarily consists of two periods of 35 minutes and a halftime interval of 5 minutes. Other periods and interval may be agreed by both teams except as specified in Regulations for particular competitions. Since 2014, some international games have four 15-minute quarters with 2 minutes break between each quarter and 5 minutes break between quarter two and three. At the 2018 Commonwealth Games, held on the Gold Coast in Brisbane, the hockey games for both men and women had four 15-minute quarters.",
"title": "Rules and play"
},
{
"paragraph_id": 30,
"text": "In December 2018, the FIH announced rule changes that would make 15-minute quarters universal from January 2019. England Hockey confirmed that while no changes would be made to the domestic game mid-season, the new rules would be implemented at the start of the 2019–20 season. However, in July 2019 England Hockey announced that 17.5-minute quarters would only be implemented in elite domestic club games.",
"title": "Rules and play"
},
{
"paragraph_id": 31,
"text": "The game begins with a pass back from the centre-forward usually to the centre-half back from the halfway line. The opposing team cannot try to tackle this play until the ball has been pushed back. The team consists of eleven players, usually aligned as follows: goalkeeper, right fullback, left fullback, three half-backs and five forwards who are right wing, right inner, centre forward, left inner and left wing. These positions can change and adapt throughout the course of the game depending on the attacking and defensive style of the opposition.",
"title": "Rules and play"
},
{
"paragraph_id": 32,
"text": "When hockey positions are discussed, notions of fluidity are very common. Each team can be fielded with a maximum of 11 players and will typically arrange themselves into forwards, midfielders, and defensive players (fullbacks) with players frequently moving between these lines with the flow of play. Each team may also play with:",
"title": "Rules and play"
},
{
"paragraph_id": 33,
"text": "As hockey has a very dynamic style of play, it is difficult to simplify positions to the static formations which are common in association football. Although positions will typically be categorised as either fullback, halfback, midfield/inner or striker, it is important for players to have an understanding of every position on the field. For example, it is not uncommon to see a halfback overlap and end up in either attacking position, with the midfield and strikers being responsible for re-adjusting to fill the space they left. Movement between lines like this is particularly common across all positions.",
"title": "Rules and play"
},
{
"paragraph_id": 34,
"text": "This fluid Australian culture of hockey has been responsible for developing an international trend towards players occupying spaces on the field, not having assigned positions. Although they may have particular spaces on the field which they are more comfortable and effective as players, they are responsible for occupying the space nearest them. This fluid approach to hockey and player movement has made it easy for teams to transition between formations such as: \"3 at the back\", \"5 midfields\", \"2 at the front\", and more.",
"title": "Rules and play"
},
{
"paragraph_id": 35,
"text": "When the ball is inside the circle, they are defending and they have their stick in their hand, goalkeepers wearing full protective equipment are permitted to use their stick, feet, kickers or leg guards to propel the ball and to use their stick, feet, kickers, leg guards or any other part of their body to stop the ball or deflect it in any direction including over the back line. Similarly, field players are permitted to use their stick. They are not allowed to use their feet and legs to propel the ball, stop the ball or deflect it in any direction including over the back line. However, neither goalkeepers, or players with goalkeeping privileges are permitted to conduct themselves in a manner which is dangerous to other players by taking advantage of the protective equipment they wear.",
"title": "Rules and play"
},
{
"paragraph_id": 36,
"text": "Neither goalkeepers or players with goalkeeping privileges may lie on the ball, however, they are permitted to use arms, hands and any other part of their body to push the ball away. Lying on the ball deliberately will result in a penalty stroke, whereas if an umpire deems a goalkeeper has lain on the ball accidentally (e.g. it gets stuck in their protective equipment), a penalty corner is awarded.",
"title": "Rules and play"
},
{
"paragraph_id": 37,
"text": "* The action above is permitted only as part of a goal saving action or to move the ball away from the possibility of a goal scoring action by opponents. It does not permit a goalkeeper or player with goalkeeping privileges to propel the ball forcefully with arms, hands or body so that it travels a long distance",
"title": "Rules and play"
},
{
"paragraph_id": 38,
"text": "When the ball is outside the circle they are defending, goalkeepers or players with goalkeeping privileges are only permitted to play the ball with their stick. Further, a goalkeeper, or player with goalkeeping privileges who is wearing a helmet, must not take part in the match outside the 23m area they are defending, except when taking a penalty stroke. A goalkeeper must wear protective headgear at all times, except when taking a penalty stroke.",
"title": "Rules and play"
},
{
"paragraph_id": 39,
"text": "For the purposes of the rules, all players on the team in possession of the ball are attackers, and those on the team without the ball are defenders, yet throughout the game being played you are always \"defending\" your goal and \"attacking\" the opposite goal.",
"title": "Rules and play"
},
{
"paragraph_id": 40,
"text": "The match is officiated by two field umpires. Traditionally each umpire generally controls half of the field, divided roughly diagonally. These umpires are often assisted by a technical bench including a timekeeper and record keeper.",
"title": "Rules and play"
},
{
"paragraph_id": 41,
"text": "Prior to the start of the game, a coin is tossed and the winning captain can choose a starting end or whether to start with the ball. Since 2017 the game consists of four periods of 15 minutes with a 2-minute break after every period, and a 15-minute intermission at half time before changing ends. At the start of each period, as well as after goals are scored, play is started with a pass from the centre of the field. All players must start in their defensive half (apart from the player making the pass), but the ball may be played in any direction along the floor. Each team starts with the ball in one half, and the team that conceded the goal has possession for the restart. Teams trade sides at halftime.",
"title": "Rules and play"
},
{
"paragraph_id": 42,
"text": "Field players may only play the ball with the face of the stick. If the back side of the stick is used, it is a penalty and the other team will get the ball back. Tackling is permitted as long as the tackler does not make contact with the attacker or the other person's stick before playing the ball (contact after the tackle may also be penalised if the tackle was made from a position where contact was inevitable). Further, the player with the ball may not deliberately use his body to push a defender out of the way.",
"title": "Rules and play"
},
{
"paragraph_id": 43,
"text": "Field players may not play the ball with their feet, but if the ball accidentally hits the feet, and the player gains no benefit from the contact, then the contact is not penalised. Although there has been a change in the wording of this rule from 1 January 2007, the current FIH umpires' briefing instructs umpires not to change the way they interpret this rule.",
"title": "Rules and play"
},
{
"paragraph_id": 44,
"text": "Obstruction typically occurs in three circumstances – when a defender comes between the player with possession and the ball in order to prevent them tackling; when a defender's stick comes between the attacker's stick and the ball or makes contact with the attacker's stick or body; and also when blocking the opposition's attempt to tackle a teammate with the ball (called third party obstruction).",
"title": "Rules and play"
},
{
"paragraph_id": 45,
"text": "When the ball passes completely over the sidelines (on the sideline is still in), it is returned to play with a sideline hit, taken by a member of the team whose players were not the last to touch the ball before crossing the sideline. The ball must be placed on the sideline, with the hit taken from as near the place the ball went out of play as possible. If it crosses the back line after last touched by an attacker, a 15 m (16 yd) hit is awarded. A 15 m hit is also awarded for offences committed by the attacking side within 15 m of the end of the pitch they are attacking.",
"title": "Rules and play"
},
{
"paragraph_id": 46,
"text": "Set plays are often utilised for specific situations such as a penalty corner or free hit. For instance, many teams have penalty corner variations that they can use to beat the defensive team. The coach may have plays that sends the ball between two defenders and lets the player attack the opposing team's goal. There are no set plays unless your team has them.",
"title": "Rules and play"
},
{
"paragraph_id": 47,
"text": "Free hits are awarded when offences are committed outside the scoring circles (the term 'free hit' is standard usage but the ball need not be hit). The ball may be hit, pushed or lifted in any direction by the team offended against. The ball can be lifted from a free hit but not by hitting, you must flick or scoop to lift from a free hit. (In previous versions of the rules, hits in the area outside the circle in open play have been permitted but lifting one direction from a free hit was prohibited). Opponents must move 5 m (5.5 yd) from the ball when a free hit is awarded. A free hit must be taken from within playing distance of the place of the offence for which it was awarded and the ball must be stationary when the free hit is taken.",
"title": "Rules and play"
},
{
"paragraph_id": 48,
"text": "As mentioned above, a 15 m hit is awarded if an attacking player commits a foul forward of that line, or if the ball passes over the back line off an attacker. These free hits are taken in-line with where the foul was committed (taking a line parallel with the sideline between where the offence was committed, or the ball went out of play). When an attacking free hit is awarded within 5 m of the circle everyone including the person taking the penalty must be five meters from the circle and everyone apart from the person taking the free hit must be five meters away from the ball. When taking an attacking free hit, the ball may not be hit straight into the circle if you are within your attacking 23 meter area (25-yard area). It has to travel 5 meters before going in.",
"title": "Rules and play"
},
{
"paragraph_id": 49,
"text": "In February 2009 the FIH introduced, as a \"Mandatory Experiment\" for international competition, an updated version of the free-hit rule. The changes allows a player taking a free hit to pass the ball to themselves. Importantly, this is not a \"play on\" situation, but to the untrained eye it may appear to be. The player must play the ball any distance in two separate motions, before continuing as if it were a play-on situation. They may raise an aerial or overhead immediately as the second action, or any other stroke permitted by the rules of field hockey. At high-school level, this is called a self pass and was adopted in Pennsylvania in 2010 as a legal technique for putting the ball in play.",
"title": "Rules and play"
},
{
"paragraph_id": 50,
"text": "Also, all players (from both teams) must be at least 5 m from any free hit awarded to the attack within the 23 m area. The ball may not travel directly into the circle from a free hit to the attack within the 23 m area without first being touched by another player or being dribbled at least 5 m by a player making a \"self-pass\". These experimental rules apply to all free-hit situations, including sideline and corner hits. National associations may also choose to introduce these rules for their domestic competitions.",
"title": "Rules and play"
},
{
"paragraph_id": 51,
"text": "A free hit from the 23-metre line – called a long corner – is awarded to the attacking team if the ball goes over the back-line after last being touched by a defender, provided they do not play it over the back-line deliberately, in which case a penalty corner is awarded. This free hit is played by the attacking team from a spot on the 23-metre line, in line with where the ball went out of play. All the parameters of an attacking free hit within the attacking quarter of the playing surface apply.",
"title": "Rules and play"
},
{
"paragraph_id": 52,
"text": "The short or penalty corner is awarded:",
"title": "Rules and play"
},
{
"paragraph_id": 53,
"text": "Short corners begin with five defenders (usually including the keeper) positioned behind the back line and the ball placed at least 10 yards from the nearest goal post. All other players in the defending team must be beyond the centre line, that is not in their 'own' half of the pitch, until the ball is in play. Attacking players begin the play standing outside the scoring circle, except for one attacker who starts the corner by playing the ball from a mark 10 m either side of the goal (the circle has a 14.63 m radius). This player puts the ball into play by pushing or hitting the ball to the other attackers outside the circle; the ball must pass outside the circle and then put back into the circle before the attackers may make a shot at the goal from which a goal can be scored. FIH rules do not forbid a shot at goal before the ball leaves the circle after being 'inserted', nor is a shot at the goal from outside the circle prohibited, but a goal cannot be scored at all if the ball has not gone out of the circle and cannot be scored from a shot from outside the circle if it is not again played by an attacking player before it enters the goal.",
"title": "Rules and play"
},
{
"paragraph_id": 54,
"text": "For safety reasons, the first shot of a penalty corner must not exceed 460 mm high (the height of the \"backboard\" of the goal) at the point it crosses the goal line if it is hit. However, if the ball is deemed to be below backboard height, the ball can be subsequently deflected above this height by another player (defender or attacker), providing that this deflection does not lead to danger. The \"Slap\" stroke (a sweeping motion towards the ball, where the stick is kept on or close to the ground when striking the ball) is classed as a hit, and so the first shot at goal must be below backboard height for this type of shot also.",
"title": "Rules and play"
},
{
"paragraph_id": 55,
"text": "If the first shot at goal in a short corner situation is a push, flick or scoop, in particular the drag flick (which has become popular at international and national league standards), the shot is permitted to rise above the height of the backboard, as long as the shot is not deemed dangerous to any opponent. This form of shooting was developed because it is not height restricted in the same way as the first hit shot at the goal and players with good technique are able to drag-flick with as much power as many others can hit a ball.",
"title": "Rules and play"
},
{
"paragraph_id": 56,
"text": "A penalty stroke is awarded when a defender commits a foul in the circle (accidental or otherwise) that prevents a probable goal or commits a deliberate foul in the circle or if defenders repeatedly run from the back line too early at a penalty corner. The penalty stroke is taken by a single attacker in the circle, against the goalkeeper, from a spot 6.4 m from goal. The ball is played only once at goal by the attacker using a push, flick or scoop stroke. If the shot is saved, play is restarted with a 15 m hit to the defenders. When a goal is scored, play is restarted in the normal way.",
"title": "Rules and play"
},
{
"paragraph_id": 57,
"text": "According to the Rules of Hockey 2015 issued by the FIH there are only two criteria for a dangerously played ball. The first is legitimate evasive action by an opponent (what constitutes legitimate evasive action is an umpiring judgment). The second is specific to the rule concerning a shot at goal at a penalty corner but is generally, if somewhat inconsistently, applied throughout the game and in all parts of the pitch: it is that a ball lifted above knee height and at an opponent who is within 5m of the ball is certainly dangerous.",
"title": "Rules and play"
},
{
"paragraph_id": 58,
"text": "The velocity of the ball is not mentioned in the rules concerning a dangerously played ball. A ball that hits a player above the knee may on some occasions not be penalised, this is at the umpire's discretion. A jab tackle, for example, might accidentally lift the ball above knee height into an opponent from close range but at such low velocity as not to be, in the opinion of the umpire, dangerous play. In the same way a high-velocity hit at very close range into an opponent, but below knee height, could be considered to be dangerous or reckless play in the view of the umpire, especially when safer alternatives are open to the striker of the ball.",
"title": "Rules and play"
},
{
"paragraph_id": 59,
"text": "A ball that has been lifted high so that it will fall among close opponents may be deemed to be potentially dangerous and play may be stopped for that reason. A lifted ball that is falling to a player in clear space may be made potentially dangerous by the actions of an opponent closing to within 5m of the receiver before the ball has been controlled to ground – a rule which is often only loosely applied; the distance allowed is often only what might be described as playing distance, 2–3 m, and opponents tend to be permitted to close on the ball as soon as the receiver plays it: these unofficial variations are often based on the umpire's perception of the skill of the players i.e. on the level of the game, in order to maintain game flow, which umpires are in general in both Rules and Briefing instructed to do, by not penalising when it is unnecessary to do so; this is also a matter at the umpire's discretion.",
"title": "Rules and play"
},
{
"paragraph_id": 60,
"text": "The term \"falling ball\" is important in what may be termed encroaching offences. It is generally only considered an offence to encroach on an opponent receiving a lifted ball that has been lifted to above head height (although the height is not specified in rule) and is falling. So, for example, a lifted shot at the goal which is still rising as it crosses the goal line (or would have been rising as it crossed the goal line) can be legitimately followed up by any of the attacking team looking for a rebound.",
"title": "Rules and play"
},
{
"paragraph_id": 61,
"text": "In general even potentially dangerous play is not penalised if an opponent is not disadvantaged by it or, obviously, not injured by it so that he cannot continue. A personal penalty, that is a caution or a suspension, rather than a team penalty, such as a free ball or a penalty corner, may be (many would say should be or even must be, but again this is at the umpire's discretion) issued to the guilty party after an advantage allowed by the umpire has been played out in any situation where an offence has occurred, including dangerous play (but once advantage has been allowed the umpire cannot then call play back and award a team penalty).",
"title": "Rules and play"
},
{
"paragraph_id": 62,
"text": "It is not an offence to lift the ball over an opponent's stick (or body on the ground), provided that it is done with consideration for the safety of the opponent and not dangerously. For example, a skilful attacker may lift the ball over a defenders stick or prone body and run past them, however if the attacker lifts the ball into or at the defender's body, this would almost certainly be regarded as dangerous.",
"title": "Rules and play"
},
{
"paragraph_id": 63,
"text": "It is not against the rules to bounce the ball on the stick and even to run with it while doing so, as long as that does not lead to a potentially dangerous conflict with an opponent who is attempting to make a tackle. For example, two players trying to play at the ball in the air at the same time, would probably be considered a dangerous situation and it is likely that the player who first put the ball up or who was so 'carrying' it would be penalised.",
"title": "Rules and play"
},
{
"paragraph_id": 64,
"text": "Dangerous play rules also apply to the usage of the stick when approaching the ball, making a stroke at it (replacing what was at one time referred to as the \"sticks\" rule, which once forbade the raising of any part of the stick above the shoulder during any play. This last restriction has been removed but the stick should still not be used in a way that endangers an opponent) or attempting to tackle, (fouls relating to tripping, impeding and obstruction). The use of the stick to strike an opponent will usually be much more severely dealt with by the umpires than offences such as barging, impeding and obstruction with the body, although these are also dealt with firmly, especially when these fouls are intentional.",
"title": "Rules and play"
},
{
"paragraph_id": 65,
"text": "Hockey uses a three-tier penalty card system of warnings and suspensions:",
"title": "Rules and play"
},
{
"paragraph_id": 66,
"text": "If a coach is sent off, depending on local rules, a player may have to leave the field for the remaining length of the match.",
"title": "Rules and play"
},
{
"paragraph_id": 67,
"text": "In addition to their colours, field hockey penalty cards are often shaped differently, so they can be recognised easily. Green cards are normally triangular, yellow cards rectangular and red cards circular.",
"title": "Rules and play"
},
{
"paragraph_id": 68,
"text": "Unlike football, a player may receive more than one green or yellow card. However, they cannot receive the same card for the same offence (for example two yellows for dangerous play), and the second must always be a more serious card. In the case of a second yellow card for a different breach of the rules (for example a yellow for deliberate foot, and a second later in the game for dangerous play) the temporary suspension would be expected to be of considerably longer duration than the first. However, local playing conditions may mandate that cards are awarded only progressively, and not allow any second awards.",
"title": "Rules and play"
},
{
"paragraph_id": 69,
"text": "Umpires, if the free hit would have been in the attacking 23 m area, may upgrade the free hit to a penalty corner for dissent or other misconduct after the free hit has been awarded.",
"title": "Rules and play"
},
{
"paragraph_id": 70,
"text": "The teams' object is to play the ball into their attacking circle and, from there, hit, push or flick the ball into the goal, scoring a goal. The team with more goals after 60 minutes wins the game. The playing time may be shortened, particularly when younger players are involved, or for some tournament play. If the game is played in a countdown clock, like ice hockey, a goal can only count if the ball completely crosses the goal line and into the goal before time expires, not when the ball leaves the stick in the act of shooting.",
"title": "Rules and play"
},
{
"paragraph_id": 71,
"text": "If the score is tied at the end of the game, either a draw is declared or the game goes into extra time, or there is a penalty shoot-out, depending on the format of the competition. In many competitions (such as regular club competition, or in pool games in FIH international tournaments such as the Olympics or the World Cup), a tied result stands and the overall competition standings are adjusted accordingly. Since March 2013, when tie breaking is required, the official FIH Tournament Regulations mandate to no longer have extra time and go directly into a penalty shoot-out when a classification match ends in a tie. However, many associations follow the previous procedure consisting of two periods of 7.5 minutes of \"golden goal\" extra time during which the game ends as soon as one team scores.",
"title": "Rules and play"
},
{
"paragraph_id": 72,
"text": "There are many variations to overtime play that depend on the league or tournament rules. In American college play, a seven-a-side overtime period consists of a 10-minute golden goal period with seven players for each team. If the scores remain equal, the game enters a one-on-one competition where each team chooses five players to dribble from the 25-yard (23 m) line down to the circle against the opposing goalkeeper. The player has eight seconds to score against the goalkeeper while keeping the ball in bounds. The game ends after a goal is scored, the ball goes out of bounds, a foul is committed (ending in either a penalty stroke or flick or the end of the one-on-one) or time expires. If the tie still persists, more rounds are played until one team has scored.",
"title": "Rules and play"
},
{
"paragraph_id": 73,
"text": "The FIH implemented a two-year rules cycle with the 2007–08 edition of the rules, with the intention that the rules be reviewed on a biennial basis. The 2009 rulebook was officially released in early March 2009 (effective 1 May 2009), however the FIH published the major changes in February. The current rule book is effective from 1 January 2022.",
"title": "Rules and play"
},
{
"paragraph_id": 74,
"text": "There are sometimes minor variations in rules from competition to competition; for instance, the duration of matches is often varied for junior competitions or for carnivals. Different national associations also have slightly differing rules on player equipment.",
"title": "Local rules"
},
{
"paragraph_id": 75,
"text": "The new Euro Hockey League and the Olympics has made major alterations to the rules to aid television viewers, such as splitting the game into four-quarters, and to try to improve player behavior, such as a two-minute suspension for green cards—the latter was also used in the 2010 World Cup and 2016 Olympics. In the United States, the NCAA has its own rules for inter-collegiate competitions; high school associations similarly play to different rules, usually using the rules published by the National Federation of State High School Associations (NFHS). This article assumes FIH rules unless otherwise stated. USA Field Hockey produces an annual summary of the differences.",
"title": "Local rules"
},
{
"paragraph_id": 76,
"text": "In the United States, the games at the junior high level consist of four 12-minute periods, while the high-school level consists of four 15-minute periods. Many private American schools play 12-minute quarters, and some have adopted FIH rules rather than NFHS rules.",
"title": "Local rules"
},
{
"paragraph_id": 77,
"text": "Players are required to wear mouth guards and shin guards in order to play the game. Also, there is a newer rule requiring certain types of sticks be used. In recent years, the NFHS rules have moved closer to FIH, but in 2011 a new rule requiring protective eyewear was introduced for the 2011 Fall season. Further clarification of NFHS's rule requiring protective eyewear states, \"effective 1 January 2019, all eye protection shall be permanently labeled with the current ASTM 2713 standard for field hockey\". Metal 'cage style' goggles favored by US high school lacrosse and permitted in high school field hockey is prohibited under FIH rules.",
"title": "Local rules"
},
{
"paragraph_id": 78,
"text": "Each player carries a hockey stick that normally measures between 80 and 95 cm (31 and 37 in); shorter or longer sticks are available. The length of the stick is based on the player's individual height: the top of the stick usually comes to the player's hip, and taller players typically have longer sticks. Goalkeepers can use either a specialised stick, or an ordinary field hockey stick. The specific goal-keeping sticks have another curve at the end of the stick, to give it more surface area to block the ball.",
"title": "Equipment"
},
{
"paragraph_id": 79,
"text": "Sticks were traditionally made of wood, but are now often made also with fibreglass, kevlar or carbon fibre composites. Metal is forbidden from use in field hockey sticks, due to the risk of injury from sharp edges if the stick were to break. The stick has a rounded handle, has a J-shaped hook at the bottom, and is flattened on the left side (when looking down the handle with the hook facing upwards). All sticks must be right-handed; left-handed ones are prohibited.",
"title": "Equipment"
},
{
"paragraph_id": 80,
"text": "There was traditionally a slight curve (called the bow, or rake) from the top to bottom of the face side of the stick and another on the 'heel' edge to the top of the handle (usually made according to the angle at which the handle part was inserted into the splice of the head part of the stick), which assisted in the positioning of the stick head in relation to the ball and made striking the ball easier and more accurate.",
"title": "Equipment"
},
{
"paragraph_id": 81,
"text": "The hook at the bottom of the stick was only recently the tight curve (Indian style) that we have nowadays. The older 'English' sticks had a longer bend, making it very hard to use the stick on the reverse. For this reason players now use the tight curved sticks.",
"title": "Equipment"
},
{
"paragraph_id": 82,
"text": "The handle makes up about the top third of the stick. It is wrapped in a grip similar to that used on tennis racket. The grip may be made of a variety of materials, including chamois leather, which improves grip in the wet and gives the stick a softer touch and different weighting it wrapped over a preexisting grip.",
"title": "Equipment"
},
{
"paragraph_id": 83,
"text": "It was recently discovered that increasing the depth of the face bow made it easier to get high speeds from the dragflick and made the stroke easier to execute. At first, after this feature was introduced, the Hockey Rules Board placed a limit of 50 mm on the maximum depth of bow over the length of the stick but experience quickly demonstrated this to be excessive. New rules now limit this curve to under 25 mm so as to limit the power with which the ball can be flicked.",
"title": "Equipment"
},
{
"paragraph_id": 84,
"text": "Standard field hockey balls are hard spherical balls, made of solid plastic (sometimes over a cork core), and are usually white, although they can be any colour as long as they contrast with the playing surface. The balls have a diameter of 71.3–74.8 mm (2.81–2.94 in) and a mass of 156–163 g (5.5–5.7 oz). The ball is often covered with indentations to reduce aquaplaning that can cause an inconsistent ball speed on wet surfaces.",
"title": "Equipment"
},
{
"paragraph_id": 85,
"text": "The 2007 rulebook saw major changes regarding goalkeepers. A fully equipped goalkeeper must wear a helmet, leg guards and kickers, and like all players, they must carry a stick. Goalkeepers may use either a field player's stick or a specialised goalkeeping stick provided always the stick is of legal dimensions. Usually field hockey goalkeepers also wear extensive additional protective equipment including chest guards, padded shorts, heavily padded hand protectors, groin protectors, neck protectors and arm guards. A goalie may not cross the 23 m line, the sole exception to this being if the goalkeeper is to take a penalty stroke at the other end of the field, when the clock is stopped. The goalkeeper can also remove their helmet for this action. While goalkeepers are allowed to use their feet and hands to clear the ball, like field players they may only use the one side of their stick. Slide tackling is permitted as long as it is with the intention of clearing the ball, not aimed at a player.",
"title": "Equipment"
},
{
"paragraph_id": 86,
"text": "It is now also even possible for teams to have a full eleven outfield players and no goalkeeper at all. No player may wear a helmet or other goalkeeping equipment, neither will any player be able to play the ball with any other part of the body than with their stick. This may be used to offer a tactical advantage, for example, if a team is trailing with only a short time to play, or to allow for play to commence if no goalkeeper or kit is available.",
"title": "Equipment"
},
{
"paragraph_id": 87,
"text": "The basic tactic in field hockey, as in association football and many other team games, is to outnumber the opponent in a particular area of the field at a moment in time. When in possession of the ball this temporary numerical superiority can be used to pass the ball around opponents so that they cannot effect a tackle because they cannot get within playing reach of the ball and to further use this numerical advantage to gain time and create clear space for making scoring shots on the opponent's goal. When not in possession of the ball numerical superiority is used to isolate and channel an opponent in possession and 'mark out' any passing options so that an interception or a tackle may be made to gain possession. Highly skillful players can sometimes get the better of more than one opponent and retain the ball and successfully pass or shoot but this tends to use more energy than quick early passing.",
"title": "Tactics"
},
{
"paragraph_id": 88,
"text": "Every player has a role depending on their relationship to the ball if the team communicates throughout the play of the game. There will be players on the ball (offensively – ball carriers; defensively – pressure, support players, and movement players.",
"title": "Tactics"
},
{
"paragraph_id": 89,
"text": "The main methods by which the ball is moved around the field by players are a) passing b) pushing the ball and running with it controlled to the front or right of the body and c) \"dribbling\"; where the player controls the ball with the stick and moves in various directions with it to elude opponents. To make a pass the ball may be propelled with a pushing stroke, where the player uses their wrists to push the stick head through the ball while the stick head is in contact with it; the \"flick\" or \"scoop\", similar to the push but with an additional arm and leg and rotational actions to lift the ball off the ground; and the \"hit\", where a swing at ball is taken and contact with it is often made very forcefully, causing the ball to be propelled at velocities in excess of 70 mph (110 km/h). In order to produce a powerful hit, usually for travel over long distances or shooting at the goal, the stick is raised higher and swung with maximum power at the ball, a stroke sometimes known as a \"drive\".",
"title": "Tactics"
},
{
"paragraph_id": 90,
"text": "Tackles are made by placing the stick into the path of the ball or playing the stick head or shaft directly at the ball. To increase the effectiveness of the tackle, players will often place the entire stick close to the ground horizontally, thus representing a wider barrier. To avoid the tackle, the ball carrier will either pass the ball to a teammate using any of the push, flick, or hit strokes, or attempt to maneuver or \"drag\" the ball around the tackle, trying to deceive the tackler.",
"title": "Tactics"
},
{
"paragraph_id": 91,
"text": "In recent years, the penalty corner has gained importance as a goal scoring opportunity. Particularly with the technical development of the drag flick. Tactics at penalty corners to set up time for a shot with a drag flick or a hit shot at the goal involve various complex plays, including multiple passes before deflections towards the goal is made but the most common method of shooting is the direct flick or hit at the goal.",
"title": "Tactics"
},
{
"paragraph_id": 92,
"text": "At the highest level, field hockey is a fast moving, highly skilled game, with players using fast moves with the stick, quick accurate passing, and hard hits, in attempts to keep possession and move the ball towards the goal. Tackling with physical contact and otherwise physically obstructing players is not permitted. Some of the tactics used resemble football (soccer), but with greater ball speed.",
"title": "Tactics"
},
{
"paragraph_id": 93,
"text": "With the 2009 changes to the rules regarding free hits in the attacking 23 m area, the common tactic of hitting the ball hard into the circle was forbidden. Although at higher levels this was considered tactically risky and low-percentage at creating scoring opportunities, it was used with some effect to 'win' penalty corners by forcing the ball onto a defender's foot or to deflect high (and dangerously) off a defender's stick. The FIH felt it was a dangerous practice that could easily lead to raised deflections and injuries in the circle, which is often crowded at a free-hit situation, and outlawed it.",
"title": "Tactics"
},
{
"paragraph_id": 94,
"text": "The biggest two field hockey tournaments are the Olympic Games tournament, and the Hockey World Cup, which is also held every four years. Apart from this, there is the men's and women's Pro League held each year for the nine top-ranked teams.",
"title": "International competition"
},
{
"paragraph_id": 95,
"text": "Of the men's teams, Pakistan has won the Hockey World cup four times, more times than any other side. India has won the Hockey at the Summer Olympics eight times, including in six successive Olympiads. Of the female teams, the Netherlands has won the Hockey World cup the most times, with six titles. At the Olympics, Australia and the Netherlands have both won three Olympic tournaments.",
"title": "International competition"
},
{
"paragraph_id": 96,
"text": "India and Pakistan dominated men's hockey until the early 1980s, winning eight Olympic golds and three of the first five world cups, respectively, but have become less prominent with the ascendancy of Belgium, the Netherlands, Germany, New Zealand, Australia, and Spain since the late 1980s, as grass playing surfaces were replaced with artificial turf. Other notable men's nations include Argentina, England (who combine with other British \"Home Nations\" to form the Great Britain side at Olympic events) and South Korea.",
"title": "International competition"
},
{
"paragraph_id": 97,
"text": "The Netherlands, Australia and Argentina are the most successful national teams among women. The Netherlands was the predominant women's team before field hockey was added to Olympic events. In the early 1990s, Australia emerged as the strongest women's country, though retirement of a number of players has weakened the team somewhat. Argentina improved its play in the 2000s, heading IFH rankings in 2003, 2010 and 2013. Other prominent women's teams are Germany, Great Britain, China, South Korea and India. Four nations have won Olympic gold medals in both men's and women's hockey: Germany, Netherlands, Australia and Great Britain.",
"title": "International competition"
},
{
"paragraph_id": 98,
"text": "As of January 2022 the Australian men's team and the Dutch women's teams lead the FIH world rankings.",
"title": "International competition"
},
{
"paragraph_id": 99,
"text": "For a couple of years, Belgium has emerged as a leading nation, with a World Champions title (2018), a European Champions title (2019), a silver medal (2016) followed with a title (2021) at the Olympics, and a lead in the FIH men's team world ranking.",
"title": "International competition"
},
{
"paragraph_id": 100,
"text": "This is a list of the major international field hockey tournaments, in chronological order. Tournaments included are:",
"title": "International competition"
},
{
"paragraph_id": 101,
"text": "Defunct tournaments:",
"title": "International competition"
},
{
"paragraph_id": 102,
"text": "Other international tournaments include:",
"title": "International competition"
},
{
"paragraph_id": 103,
"text": "A popular variant of field hockey is indoor hockey, which is 6-a-side (5-a-side during 2014–2015) using a field which is reduced to approximately 40 m × 20 m (131 ft × 66 ft). Although many of the rules remain the same, including obstruction and feet, there are several key variations: players may not raise the ball unless shooting at goal, players may not hit the ball, instead using pushes to transfer it, and the sidelines are replaced with solid barriers, from which the ball will rebound and remain in play. In addition, the regulation guidelines for the indoor field hockey stick require a slightly thinner, lighter stick than an outdoor one.",
"title": "Variants"
},
{
"paragraph_id": 104,
"text": "As the name suggests, Hockey5s is a hockey variant which features five players on each team (including a goalkeeper). The field of play is 55 m long and 41.70 m wide—this is approximately half the size of a regular pitch. Few additional markings are needed as there is no penalty circle nor penalty corners; shots can be taken from anywhere on the pitch. Penalty strokes are replaced by a \"challenge\" which is like the one-on-one method used in a penalty shoot-out. The duration of the match is three 12-minute periods with an interval of two minutes between periods; golden goal periods are multiple 5-minute periods. The rules are simpler and it is intended that the game is faster, creating more shots on goal with less play in midfield, and more attractive to spectators.",
"title": "Variants"
},
{
"paragraph_id": 105,
"text": "An Asian qualification tournament for two places at the 2014 Youth Olympic Games was the first time an FIH event used the Hockey5s format. Hockey5s was also used for the Youth Olympic hockey tournament, the Pacific Games in 2015 and at the African Youth Games is 2018.",
"title": "Variants"
},
{
"paragraph_id": 106,
"text": "In 2022, the FIH staged its first senior international Hockey5s event, with a men's and women's event being held in Lausanne. The FIH Men's Hockey5s World Cup and FIH Women's Hockey5s World Cup are set to debut in 2024.",
"title": "Variants"
}
]
| Field hockey is a team sport structured in standard hockey format, in which each team plays with 11 players in total, made up of 10 field players and a goalkeeper. Teams must move a hockey ball around a pitch by hitting it with a hockey stick towards the rival team's shooting circle and then into the goal. The match is won by the team that scores the most goals. Matches are played on grass, watered turf, artificial turf, or indoor boarded surface. The stick is made of wood, carbon fibre, fibreglass and carbon, or a combination of carbon fibre and fibreglass in different quantities. The stick has two sides; one rounded and one flat; only the flat face of the stick is allowed to progress the ball. During play, goalkeepers are the only players allowed to touch the ball with any part of their body. A player's hand is considered part of the stick if holding the stick. If the ball is "played" with the rounded part of the stick, it will result in a penalty. Goalkeepers often have a different design of stick; they also cannot play the ball with the round side of their stick. The modern game was developed at public schools in 19th century England and it is now played globally. The governing body is the International Hockey Federation (FIH), called the Fédération Internationale de Hockey in French. Men and women are represented internationally in competitions including the Olympic Games, World Cup, FIH Pro League, Junior World Cup and in past also World League, Champions Trophy. Many countries run extensive junior, senior, and masters club competitions. The FIH is also responsible for organizing the Hockey Rules Board and developing the sport's rules. The sport is known simply as "hockey" in countries where it is the more common form of hockey. The term "field hockey" is used primarily in Canada and the United States, where "hockey" more often refers to ice hockey. In Sweden, the term landhockey is used. A popular variant is indoor field hockey, which differs in a number of respects while embodying the primary principles of hockey. | 2001-07-26T15:21:18Z | 2024-01-01T00:06:49Z | [
"Template:More citations needed section",
"Template:Short description",
"Template:Infobox sport",
"Template:Lang",
"Template:Clarify",
"Template:Explain",
"Template:Cite journal",
"Template:Cite encyclopedia",
"Template:Authority control",
"Template:Main",
"Template:Transliteration",
"Template:Citation needed",
"Template:Convert",
"Template:Wiktionary",
"Template:International field hockey",
"Template:Team Sport",
"Template:About",
"Template:Use dmy dates",
"Template:Circa",
"Template:See also",
"Template:Cite web",
"Template:Cite news",
"Template:Use British English",
"Template:When",
"Template:Reflist",
"Template:Stein & Rubino 2008",
"Template:Cite book",
"Template:Summer Olympic sports",
"Template:Tick",
"Template:Webarchive",
"Template:Commons category",
"Template:Field Hockey",
"Template:Multiple issues",
"Template:Cvt",
"Template:Unreferenced section",
"Template:As of"
]
| https://en.wikipedia.org/wiki/Field_hockey |
10,887 | Finagle's law | Finagle's law of dynamic negatives (also known as Melody's law, Sod's Law or Finagle's corollary to Murphy's law) is usually rendered as "Anything that can go wrong, will—at the worst possible moment."
The term "Finagle's law" was first used by John W. Campbell Jr., the influential editor of Astounding Science Fiction (later Analog). He used it frequently in his editorials for many years in the 1940s to 1960s, but it never came into general usage the way Murphy's law has.
One variant (known as O'Toole's corollary of Finagle's law) favored among hackers is a takeoff on the second law of thermodynamics (related to the augmentation of entropy):
The perversity of the Universe tends towards a maximum.
In the Star Trek episode "Amok Time" (written by Theodore Sturgeon in 1967), Captain Kirk tells Spock, "As one of Finagle's laws puts it: 'Any home port the ship makes will be somebody else's, not mine.'"
The term "Finagle's law" was popularized by science fiction author Larry Niven in several stories (for example, Protector [Ballantine Books paperback edition, 4th printing, p. 23]), depicting a frontier culture of asteroid miners; this "Belter" culture professed a religion or running joke involving the worship of the dread god Finagle and his mad prophet Murphy.
"Finagle's law" can also be the related belief "Inanimate objects are out to get us", also known as Resistentialism. Similar to Finagle's law is the verbless phrase of the German novelist Friedrich Theodor Vischer: "die Tücke des Objekts" (the perfidy of inanimate objects).
A related concept, the "Finagle factor", is an ad hoc multiplicative or additive term in an equation, which can be justified only by the fact that it gives more correct results. Also known as Finagle's variable constant, it is sometimes defined as the correct answer divided by your answer.
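Written out, the "factor" in that definition is just a ratio, which the short snippet below illustrates; the function name and sample numbers are invented for the illustration.

```python
def finagle_factor(correct_answer: float, your_answer: float) -> float:
    """Finagle's variable constant: the correct answer divided by your answer."""
    return correct_answer / your_answer

# Multiplying your answer by the factor "finagles" it into the correct one.
factor = finagle_factor(42.0, 37.5)
print(factor)          # ~1.12
print(37.5 * factor)   # 42.0, by construction
```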
One of the first records of "Finagle factor" is probably a December 1962 article in The Michigan Technic, credited to Campbell, but bylined "I Finaglin"
The term is also used in a 1960 wildlife management article. | [
{
"paragraph_id": 0,
"text": "Finagle's law of dynamic negatives (also known as Melody's law, Sod's Law or Finagle's corollary to Murphy's law) is usually rendered as \"Anything that can go wrong, will—at the worst possible moment.\"",
"title": ""
},
{
"paragraph_id": 1,
"text": "The term \"Finagle's law\" was first used by John W. Campbell Jr., the influential editor of Astounding Science Fiction (later Analog). He used it frequently in his editorials for many years in the 1940s to 1960s, but it never came into general usage the way Murphy's law has.",
"title": ""
},
{
"paragraph_id": 2,
"text": "One variant (known as O'Toole's corollary of Finagle's law) favored among hackers is a takeoff on the second law of thermodynamics (related to the augmentation of entropy):",
"title": "Variants"
},
{
"paragraph_id": 3,
"text": "The perversity of the Universe tends towards a maximum.",
"title": "Variants"
},
{
"paragraph_id": 4,
"text": "In the Star Trek episode \"Amok Time\" (written by Theodore Sturgeon in 1967), Captain Kirk tells Spock, \"As one of Finagle's laws puts it: 'Any home port the ship makes will be somebody else's, not mine.'\"",
"title": "Variants"
},
{
"paragraph_id": 5,
"text": "The term \"Finagle's law\" was popularized by science fiction author Larry Niven in several stories (for example, Protector [Ballantine Books paperback edition, 4th printing, p. 23]), depicting a frontier culture of asteroid miners; this \"Belter\" culture professed a religion or running joke involving the worship of the dread god Finagle and his mad prophet Murphy.",
"title": "Variants"
},
{
"paragraph_id": 6,
"text": "\"Finagle's law\" can also be the related belief \"Inanimate objects are out to get us\", also known as Resistentialism. Similar to Finagle's law is the verbless phrase of the German novelist Friedrich Theodor Vischer: \"die Tücke des Objekts\" (the perfidy of inanimate objects).",
"title": "Variants"
},
{
"paragraph_id": 7,
"text": "A related concept, the \"Finagle factor\", is an ad hoc multiplicative or additive term in an equation, which can be justified only by the fact that it gives more correct results. Also known as Finagle's variable constant, it is sometimes defined as the correct answer divided by your answer.",
"title": "Variants"
},
{
"paragraph_id": 8,
"text": "One of the first records of \"Finagle factor\" is probably a December 1962 article in The Michigan Technic, credited to Campbell, but bylined \"I Finaglin\"",
"title": "Variants"
},
{
"paragraph_id": 9,
"text": "The term is also used in a 1960 wildlife management article.",
"title": "Variants"
}
]
| Finagle's law of dynamic negatives is usually rendered as "Anything that can go wrong, will—at the worst possible moment." The term "Finagle's law" was first used by John W. Campbell Jr., the influential editor of Astounding Science Fiction. He used it frequently in his editorials for many years in the 1940s to 1960s, but it never came into general usage the way Murphy's law has. | 2002-02-25T15:51:15Z | 2023-12-18T17:40:13Z | [
"Template:Short description",
"Template:Blockquote",
"Template:Cite web",
"Template:Cite book",
"Template:Spoken Wikipedia"
]
| https://en.wikipedia.org/wiki/Finagle%27s_law |
10,890 | Fundamental interaction | In physics, the fundamental interactions or fundamental forces are the interactions that do not appear to be reducible to more basic interactions. There are four fundamental interactions known to exist:
The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at minuscule, subatomic distances and govern nuclear interactions inside atoms.
Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative. It is possible, however, that the fifth force is a combination of the prior four forces in the form of a scalar field, such as the Higgs field.
Each of the known fundamental interactions can be described mathematically as a field. The gravitational force is attributed to the curvature of spacetime, described by Einstein's general theory of relativity. The other three are discrete quantum fields, and their interactions are mediated by elementary particles described by the Standard Model of particle physics.
Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction is carried by particles called W and Z bosons, and also acts on the nucleus of atoms, mediating radioactive decay. The electromagnetic force, carried by the photon, creates electric and magnetic fields, which are responsible for the attraction between orbital electrons and atomic nuclei which holds atoms together, as well as chemical bonding and electromagnetic waves, including visible light, and forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and is responsible for holding together the large scale structures in the universe, such as planets, stars, and galaxies.
Many theoretical physicists believe these fundamental forces to be related and to become unified into a single force at very high energies on a minuscule scale, the Planck scale, but particle accelerators cannot produce the enormous energies required to experimentally probe this. Devising a common theoretical framework that would explain the relation between the forces in a single theory is perhaps the greatest goal of today's theoretical physicists. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg, for which they received the 1979 Nobel Prize in physics. Some physicists seek to unite the electroweak and strong fields within what is called a Grand Unified Theory (GUT). An even bigger challenge is to find a way to quantize the gravitational field, resulting in a theory of quantum gravity (QG) which would unite gravity in a common theoretical framework with the other three forces. Some theories, notably string theory, seek both QG and GUT within one framework, unifying all four fundamental interactions along with mass generation within a theory of everything (ToE).
In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects while their states and relations unfold at a constant pace everywhere, thus absolute space and time. Inferring that all objects bearing mass approach at a constant rate, but collide by impact proportional to their masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. As conventionally interpreted, Newton's theory of motion modelled a central force without a communicating medium. Thus Newton's theory violated the tradition, going back to Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one.
In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum. If his electromagnetic field theory held true in all inertial frames of reference, this would contradict Newton's theory of motion, which relied on Galilean relativity. If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether—presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.)
The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory (QFT). Force particles, called gauge bosons—force carriers or messenger particles of underlying fields—interact with matter particles, called fermions. Everyday matter is atoms, composed of three fermion types: up-quarks and down-quarks constituting, as well as electrons orbiting, the atom's nucleus. Atoms interact, form molecules, and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which if unimpeded traverse potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED).
The force carriers of the weak interaction are the massive W and Z bosons. Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang, the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism, the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon, traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory.
Beyond the Standard Model, some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle, and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by the modelling behaviour of its hypothetical force carrier, the graviton and achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG). Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE). The most prevalent aim at a ToE is string theory, although to model matter particles, it added SUSY to force particles—and so, strictly speaking, became superstring theory. Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory. Theories beyond the Standard Model remain highly speculative, lacking great experimental support.
In the conceptual model of fundamental interactions, matter consists of fermions, which carry properties called charges and spin ±1⁄2 (intrinsic angular momentum ±ħ⁄2, where ħ is the reduced Planck constant). They attract or repel each other by exchanging bosons.
The interaction of any pair of fermions in perturbation theory can then be modelled thus:
The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turn them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from +1⁄2 to −1⁄2 (or vice versa) during such an exchange (in units of the reduced Planck's constant). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces. In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force".
According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behaviour vary greatly, as described in the table below. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of:
Both magnitude ("relative strength") and "range" of the associated potential, as given in the table, are meaningful only within a rather complex theoretical framework. The table below lists properties of a conceptual scheme that remains the subject of ongoing research.
The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges. The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples.
Gravitation is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate.
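The disparity can be made concrete with a back-of-the-envelope comparison (an illustrative sketch, not part of the original text): the electrostatic attraction between a proton and an electron exceeds their gravitational attraction by a factor of roughly 10³⁹, independent of their separation.

```python
# Ratio of electrostatic to gravitational attraction for a proton-electron pair.
# The separation cancels out because both forces fall off as 1/r^2.
k_e = 8.9875517923e9   # Coulomb constant, N m^2 / C^2
G   = 6.67430e-11      # gravitational constant, N m^2 / kg^2
e   = 1.602176634e-19  # elementary charge, C
m_p = 1.67262192e-27   # proton mass, kg
m_e = 9.1093837e-31    # electron mass, kg

ratio = (k_e * e**2) / (G * m_p * m_e)
print(f"F_electric / F_gravity ≈ {ratio:.2e}")  # ≈ 2.3e39
```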
Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions. Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out.
Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, since it is only attractive, unlike electric forces which can be attractive or repulsive. On the other hand, all objects having mass are subject to the gravitational force, which only attracts. Therefore, only gravitation matters on the large-scale structure of the universe.
The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes and, being only attractive, it retards the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground, and animals can only jump so high.
Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution, Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances—neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of Universal Gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's General Theory of Relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime.
Merging general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton.
Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity. These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible.
Proposed extra dimensions could explain why the gravity force is so weak.
Electromagnetism and weak interaction appear to be very different at everyday low energies. They can be modeled using two different theories. However, above unification energy, on the order of 100 GeV, they would merge into a single electroweak force.
The electroweak theory is very important for modern cosmology, particularly on how the universe evolved. This is because shortly after the Big Bang, when the temperature was still above approximately 10¹⁵ K, the electromagnetic force and the weak force were still merged as a combined electroweak force.
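That temperature follows directly from the unification energy quoted above: dividing roughly 100 GeV by the Boltzmann constant gives a temperature of order 10¹⁵ K. The short check below is illustrative.

```python
# Convert the electroweak unification scale (~100 GeV) to a temperature via E = k_B * T.
k_B_eV_per_K = 8.617333262e-5   # Boltzmann constant in eV/K
E_eV = 100e9                    # ~100 GeV expressed in eV

T = E_eV / k_B_eV_per_K
print(f"T ≈ {T:.2e} K")         # ≈ 1.16e15 K
```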
For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979.
Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other.
Electromagnetism has an infinite range like gravity, but is vastly stronger than it, and therefore describes several macroscopic phenomena of everyday experience such as friction, rainbows, lightning, and all human-made devices using electric current, such as television, lasers, and computers. Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements, including all chemical bonding.
In a four kilogram (~1 gallon) jug of water, there is 4,000 g × (1 mol H₂O / 18 g) × (10 mol e⁻ / 1 mol H₂O) × (96,485 C / 1 mol e⁻) ≈ 2.1 × 10⁸ C
of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force of (1/4πε₀) × (2.1 × 10⁸ C)² / (1 m)² ≈ 4 × 10²⁶ N.
This force is many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity but cancel out so that for large bodies gravity dominates.
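The figures in the jug-of-water estimate can be reproduced with a few lines of arithmetic using standard constants; the script below is an illustrative sketch rather than part of the original text.

```python
# Electron charge in 4 kg of water, and the Coulomb repulsion between two such
# charges held 1 m apart, compared with the weight of the Earth.
AVOGADRO = 6.02214076e23        # molecules per mole
E_CHARGE = 1.602176634e-19      # elementary charge, C
K_E      = 8.9875517923e9       # Coulomb constant, N m^2 / C^2

moles_h2o = 4000 / 18.015                 # ~222 mol of water in 4 kg
electrons = moles_h2o * AVOGADRO * 10     # 10 electrons per H2O molecule
charge    = electrons * E_CHARGE          # ≈ 2.1e8 C

force = K_E * charge**2 / 1.0**2          # ≈ 4e26 N at 1 m separation
earth_weight = 5.972e24 * 9.81            # ≈ 5.9e25 N

print(f"charge ≈ {charge:.2e} C, force ≈ {force:.2e} N, "
      f"≈ {force / earth_weight:.1f}× Earth's weight")
```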
Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus, is the classical theory of electromagnetism, suitable for most technological purposes.
The constant speed of light in vacuum (customarily denoted with a lowercase letter c) can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein's 1905 theory of special relativity, however, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism on the very nature of time and space.
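The value of c can indeed be recovered from the two constants that appear in Maxwell's equations, via c = 1/√(μ₀ε₀); the short calculation below is illustrative.

```python
import math

# Speed of light from the vacuum permeability and permittivity: c = 1 / sqrt(mu_0 * eps_0).
mu_0  = 1.25663706212e-6   # vacuum permeability, N / A^2
eps_0 = 8.8541878128e-12   # vacuum permittivity, F / m

c = 1.0 / math.sqrt(mu_0 * eps_0)
print(f"c ≈ {c:.6e} m/s")   # ≈ 2.998e8 m/s
```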
In another work that departed from classical electromagnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light was transmitted in 'quanta' of specific energy content based on the frequency, which we now call photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s, by Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, completed this theory, which is now called quantum electrodynamics, the revised theory of electromagnetism. Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, in which a certain percentage of electrically charged particles move in ways that would be impossible under classical electromagnetic theory; such behavior is necessary for everyday electronic devices such as transistors to function.
The weak interaction or weak nuclear force is responsible for some nuclear phenomena such as beta decay. Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction — this discovery was the first step toward the unified theory known as the Standard Model. In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction that does not conserve parity; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT.
The strong interaction, or strong nuclear force, is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10^-15 metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows.
After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10^-15 m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV.
The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed. Most notably:
While each of these approaches offered insights, no approach led directly to a fundamental theory.
Murray Gell-Mann along with George Zweig first proposed fractionally charged quarks in 1961. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu, who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks. Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined.
In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks. A little later, David Gross, Frank Wilczek, and David Politzer discovered that this theory had the property of asymptotic freedom, allowing them to make contact with experimental evidence. They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales. The discovery of asymptotic freedom led most physicists to accept QCD since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined: the strong force increases indefinitely with distance, trapping quarks inside the hadrons.
Assuming that quarks are confined, Mikhail Shifman, Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions.
QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances.
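The way the short-distance interaction and the long-distance, string-like constant force combine is often summarized by a Cornell-type potential; the form below is a common phenomenological parametrization (the string tension σ is a fitted parameter, quoted here only at a typical value), not a result stated in the text above.

V(r) \approx -\frac{4}{3}\,\frac{\alpha_s}{r} + \sigma r

Here the Coulomb-like first term describes short-distance gluon exchange, while the linear term, with σ on the order of 1 GeV/fm in typical fits, models the constant attractive force between widely separated quarks.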
Conventionally, the Higgs interaction is not counted among the four fundamental forces.
Nonetheless, although not a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism, Yukawa terms remain of the form
with Yukawa coupling λ_i, particle mass m_i (in eV), and Higgs vacuum expectation value 246.22 GeV. Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the form
with Higgs mass 125.18 GeV. Because the reduced Compton wavelength of the Higgs boson is so small (1.576×10^-18 m, comparable to the W and Z bosons), this potential has an effective range of a few attometers. Between two electrons, it begins roughly 10^11 times weaker than the weak interaction, and grows exponentially weaker at non-zero distances.
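The Yukawa term and the resulting potential referred to above are sketched here as a reconstruction of the standard textbook forms, written in natural units; factors such as the √2 depend on convention and are not taken from the preceding text.

\mathcal{L}_{\text{Yukawa}} \supset -\frac{\lambda_i}{\sqrt{2}}\,(v + h)\,\bar{\psi}_i \psi_i, \qquad m_i = \frac{\lambda_i v}{\sqrt{2}}

V(r) \;\sim\; -\frac{\lambda_i \lambda_j}{4\pi}\,\frac{e^{-m_H r}}{r}

The exponential factor limits the range to roughly the reduced Compton wavelength \hbar/(m_H c) \approx 1.6\times 10^{-18} m, consistent with the attometre-scale range quoted above.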
Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification.
Grand Unified Theories (GUTs) are proposals to show that the three fundamental interactions described by the Standard Model are all different manifestations of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated, as well as predicting gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces (this was, for example, verified at the Large Electron–Positron Collider in 1991 for supersymmetric theories).
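Gauge coupling unification is usually analysed through the one-loop renormalization-group running of the three inverse couplings; the expression below is the standard textbook form, included as an illustration (the coefficients b_i depend on the assumed particle content and are not quoted from the text above).

\alpha_i^{-1}(\mu) = \alpha_i^{-1}(M_Z) - \frac{b_i}{2\pi}\,\ln\frac{\mu}{M_Z}, \qquad i = 1, 2, 3

With the particle content of the minimal supersymmetric extension of the Standard Model, the three running couplings meet approximately at a single scale of order 10^16 GeV, which is the kind of behaviour the 1991 LEP measurements tested.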
Theories of everything, which integrate GUTs with a quantum gravity theory, face a greater barrier, because no quantum gravity theory (proposals include string theory, loop quantum gravity, and twistor theory) has secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that spacetime itself may have a quantum aspect to it.
Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles acquire their masses only through supersymmetry breaking effects and these particles, known as moduli, can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy), giving rise to a need to explain a nonzero cosmological constant, and possibly to other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violations, dark matter, and dark flow.
"Template:See also",
"Template:Frac",
"Template:Mvar",
"Template:More footnotes needed",
"Template:Val",
"Template:Portal",
"Template:Cite news",
"Template:Fundamental interactions",
"Template:Cite journal",
"Template:Main",
"Template:Math",
"Template:Specify",
"Template:Reflist",
"Template:Cite book",
"Template:Cite web",
"Template:Short description",
"Template:Citation",
"Template:Branches of physics",
"Template:Stellar core collapse",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Fundamental_interaction |
10,891 | Floppy disk | A floppy disk or floppy diskette (casually referred to as a floppy or a diskette) is a type of disk storage composed of a thin and flexible disk of a magnetic storage medium in a square or nearly square plastic enclosure lined with a fabric that removes dust particles from the spinning disk. Floppy disks store digital data which can be read and written when the disk is inserted into a floppy disk drive (FDD) connected to or inside a computer or other device.
The first floppy disks, invented and made by IBM, had a disk diameter of 8 inches (203.2 mm). Subsequently, the 5¼-inch and then the 3½-inch became a ubiquitous form of data storage and transfer into the first years of the 21st century. 3½-inch floppy disks can still be used with an external USB floppy disk drive. USB drives for 5¼-inch, 8-inch, and other-size floppy disks are rare to non-existent. Some individuals and organizations continue to use older equipment to read or transfer data from floppy disks.
Floppy disks were so common in late 20th-century culture that many electronic devices and software programs continue to use save icons that look like floppy disks well into the 21st century, as a form of skeuomorphic design. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, they have been superseded by data storage methods with much greater data storage capacity and data transfer speed, such as USB flash drives, memory cards, optical discs, and storage available through local computer networks and cloud storage.
The first commercial floppy disks, developed in the late 1960s, were 8 inches (203.2 mm) in diameter; they became commercially available in 1971 as a component of IBM products and both drives and disks were then sold separately starting in 1972 by Memorex and others. These disks and associated drives were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates, and Burroughs Corporation. The term "floppy disk" appeared in print as early as 1970, and although IBM announced its first media as the Type 1 Diskette in 1973, the industry continued to use the terms "floppy disk" or "floppy".
In 1976, Shugart Associates introduced the 5¼-inch FDD. By 1978, there were more than ten manufacturers producing such FDDs. There were competing floppy disk formats, with hard- and soft-sector versions and encoding schemes such as differential Manchester encoding (DM), modified frequency modulation (MFM), and group coded recording (GCR). The 5¼-inch format displaced the 8-inch one for most uses, and the hard-sectored disk format disappeared. The most common capacity of the 5¼-inch format in DOS-based PCs was 360 KB (368,640 bytes) for the Double-Sided Double-Density (DSDD) format using MFM encoding. In 1984, IBM introduced with its PC/AT the 1.2 MB (1,228,800 bytes) dual-sided 5¼-inch floppy disk, but it never became very popular. IBM started using the 720 KB double-density 3½-inch microfloppy disk on its Convertible laptop computer in 1986 and the 1.44 MB high-density version with the IBM Personal System/2 (PS/2) line in 1987. These disk drives could be added to older PC models. In 1988, Y-E Data introduced a drive for 2.88 MB Double-Sided Extended-Density (DSED) diskettes which was used by IBM in its top-of-the-line PS/2 and some RS/6000 models and in the second-generation NeXTcube and NeXTstation; however, this format had limited market success due to lack of standards and movement to 1.44 MB drives.
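The capacities quoted above follow from simple geometry: sides × tracks per side × sectors per track × bytes per sector. The Python sketch below assumes the typical IBM PC-format geometries with 512-byte sectors (other systems used different layouts) and reproduces the exact byte counts.

# Typical IBM PC floppy geometries: (sides, tracks per side, sectors per track),
# assuming 512 bytes per sector as on PC formats.
formats = {
    "5.25-inch 360 KB (DSDD)": (2, 40, 9),
    "5.25-inch 1.2 MB (HD)": (2, 80, 15),
    "3.5-inch 720 KB (DD)": (2, 80, 9),
    "3.5-inch 1.44 MB (HD)": (2, 80, 18),
    "3.5-inch 2.88 MB (ED)": (2, 80, 36),
}
BYTES_PER_SECTOR = 512

for name, (sides, tracks, sectors) in formats.items():
    total = sides * tracks * sectors * BYTES_PER_SECTOR
    print(f"{name}: {total:,} bytes")
# 360 KB -> 368,640 bytes and 1.2 MB -> 1,228,800 bytes, matching the figures above.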
Throughout the early 1980s, limits of the 5¼-inch format became clear. Originally designed to be more practical than the 8-inch format, it was becoming considered too large; as the quality of recording media grew, data could be stored in a smaller area. Several solutions were developed, with drives at 2-, 2½-, 3-, 3¼-, 3½- and 4-inches (and Sony's 90 mm × 94 mm (3.54 in × 3.70 in) disk) offered by various companies. They all had several advantages over the old format, including a rigid case with a sliding metal (or later, sometimes plastic) shutter over the head slot, which helped protect the delicate magnetic medium from dust and damage, and a sliding write protection tab, which was far more convenient than the adhesive tabs used with earlier disks. The large market share of the well-established 5¼-inch format made it difficult for these diverse mutually-incompatible new formats to gain significant market share. A variant on the Sony design, introduced in 1983 by many manufacturers, was then rapidly adopted. By 1988, the 3½-inch was outselling the 5¼-inch.
Generally, the term floppy disk persisted, even though later style floppy disks have a rigid case around an internal floppy disk.
By the end of the 1980s, 5¼-inch disks had been superseded by 3½-inch disks. During this time, PCs frequently came equipped with drives of both sizes. By the mid-1990s, 5¼-inch drives had virtually disappeared, as the 3½-inch disk became the predominant floppy disk. The advantages of the 3½-inch disk were its higher capacity, its smaller physical size, and its rigid case which provided better protection from dirt and other environmental risks.
Floppy disks became commonplace during the 1980s and 1990s in their use with personal computers to distribute software, transfer data, and create backups. Before hard disks became affordable to the general population, floppy disks were often used to store a computer's operating system (OS). Most home computers from that time have an elementary OS and BASIC stored in read-only memory (ROM), with the option of loading a more advanced OS from a floppy disk.
By the early 1990s, the increasing software size meant large packages like Windows or Adobe Photoshop required a dozen disks or more. In 1996, there were an estimated five billion standard floppy disks in use. Then, distribution of larger packages was gradually replaced by CD-ROMs, DVDs, and online distribution.
An attempt to enhance the existing 3½-inch designs was the SuperDisk in the late 1990s, using very narrow data tracks and a high precision head guidance mechanism with a capacity of 120 MB and backward-compatibility with standard 3½-inch floppies; a format war briefly occurred between SuperDisk and other high-density floppy-disk products, although ultimately recordable CDs/DVDs, solid-state flash storage, and eventually cloud-based online storage would render all these removable disk formats obsolete. External USB-based floppy disk drives are still available, and many modern systems provide firmware support for booting from such drives.
In the mid-1990s, mechanically incompatible higher-density floppy disks were introduced, like the Iomega Zip disk. Adoption was limited by the competition between proprietary formats and the need to buy expensive drives for computers where the disks would be used. In some cases, failure in market penetration was exacerbated by the release of higher-capacity versions of the drive and media being not backward-compatible with the original drives, dividing the users between new and old adopters. Consumers were wary of making costly investments into unproven and rapidly changing technologies, so none of the technologies became the established standard.
Apple introduced the iMac G3 in 1998 with a CD-ROM drive but no floppy drive; this made USB-connected floppy drives popular accessories, as the iMac came without any writable removable media device.
Recordable CDs were touted as an alternative, because of the greater capacity, compatibility with existing CD-ROM drives, and—with the advent of re-writeable CDs and packet writing—a similar reusability as floppy disks. However, CD-R/RWs remained mostly an archival medium, not a medium for exchanging data or editing files on the medium itself, because there was no common standard for packet writing which allowed for small updates. Other formats, such as magneto-optical discs, had the flexibility of floppy disks combined with greater capacity, but remained niche due to costs. High-capacity backward compatible floppy technologies became popular for a while and were sold as an option or even included in standard PCs, but in the long run, their use was limited to professionals and enthusiasts.
Flash-based USB thumb drives finally were a practical and popular replacement that supported traditional file systems and all common usage scenarios of floppy disks. Unlike other solutions, no new drive type or special software that might impede adoption was required: all that was necessary was an already common USB port.
By 2002, most manufacturers still provided floppy disk drives as standard equipment to meet user demand for file-transfer and an emergency boot device, as well as for the general secure feeling of having the familiar device. By this time, the retail cost of a floppy drive had fallen to around $20 (equivalent to $33 in 2022), so there was little financial incentive to omit the device from a system. Subsequently, enabled by the widespread support for USB flash drives and BIOS boot, manufacturers and retailers progressively reduced the availability of floppy disk drives as standard equipment. In February 2003, Dell, one of the leading personal computer vendors, announced that floppy drives would no longer be pre-installed on Dell Dimension home computers, although they were still available as a selectable option and purchasable as an aftermarket OEM add-on. By January 2007, only 2% of computers sold in stores contained built-in floppy disk drives.
Floppy disks are used for emergency boots in aging systems lacking support for other bootable media and for BIOS updates, since most BIOS and firmware programs can still be executed from bootable floppy disks. If BIOS updates fail or become corrupt, floppy drives can sometimes be used to perform a recovery. The music and theatre industries still use equipment requiring standard floppy disks (e.g. synthesizers, samplers, drum machines, sequencers, and lighting consoles). Industrial automation equipment such as programmable machinery and industrial robots may not have a USB interface; data and programs are then loaded from disks, damageable in industrial environments. This equipment may not be replaced due to cost or requirement for continuous availability; existing software emulation and virtualization do not solve this problem because a customized operating system is used that has no drivers for USB devices. Hardware floppy disk emulators can be made to interface floppy-disk controllers to a USB port that can be used for flash drives.
In May 2016, the United States Government Accountability Office released a report that covered the need to upgrade or replace legacy computer systems within federal agencies. According to this document, old IBM Series/1 minicomputers running on 8-inch floppy disks are still used to coordinate "the operational functions of the United States' nuclear forces". The government planned to update some of the technology by the end of the 2017 fiscal year.
Windows 10 and Windows 11 no longer come with drivers for floppy disk drives (both internal and external). However, these operating systems still support floppy drives through a separate device driver provided by Microsoft.
The British Airways Boeing 747-400 fleet, up to its retirement in 2020, used 3½-inch floppy disks to load avionics software.
Some workstations in corporate computing environments still retained floppy disks while disabling USB ports, both moves done to restrict the amount of data that could be copied by unscrupulous employees.
Sony, which had been in the floppy disk business since 1983, ended domestic sales of all six 3½-inch floppy disk models as of March 2011. This has been viewed by some as the end of the floppy disk. While production of new floppy disk media has ceased, sales and use of this media from existing inventories are expected to continue until at least 2026.
For more than two decades, the floppy disk was the primary external writable storage device used. Most computing environments before the 1990s were non-networked, and floppy disks were the primary means to transfer data between computers, a method known informally as sneakernet. Unlike hard disks, floppy disks are handled and seen; even a novice user can identify a floppy disk. Because of these factors, a picture of a 3½-inch floppy disk became an interface metaphor for saving data. The floppy disk symbol is still used by software on user-interface elements related to saving files (such as LibreOffice) even though physical floppy disks are largely obsolete.
The 8-inch and 5¼-inch floppy disks contain a magnetically coated round plastic medium with a large circular hole in the center for a drive's spindle. The medium is contained in a square plastic cover that has a small oblong opening in both sides to allow the drive's heads to read and write data and a large hole in the center to allow the magnetic medium to spin by rotating it from its middle hole.
Inside the cover are two layers of fabric with the magnetic medium sandwiched in the middle. The fabric is designed to reduce friction between the medium and the outer cover, and catch particles of debris abraded off the disk to keep them from accumulating on the heads. The cover is usually a one-part sheet, double-folded with flaps glued or spot-welded together.
A small notch on the side of the disk indicates whether it is writable, detected by a mechanical switch or phototransistor above it; if the notch is not present, the disk can be written. In the 8-inch disk the notch is covered to enable writing, while in the 5¼-inch disk the notch is open to enable writing. Tape may be used over the notch to change the mode of the disk. Punch devices were sold to convert read-only disks to writable ones and enable writing on the unused side of single-sided disks; such modified disks became known as flippy disks.
Another LED/photo-transistor pair located near the center of the disk detects the index hole once per rotation in the magnetic disk; it is used to detect the angular start of each track and whether or not the disk is rotating at the correct speed. Early 8‑inch and 5¼‑inch disks had physical holes for each sector and were termed hard sectored disks. Later soft-sectored disks have only one index hole, and sector position is determined by the disk controller or low-level software from patterns marking the start of a sector. Generally, the same drives are used to read and write both types of disks, with only the disks and controllers differing. Some operating systems using soft sectors, such as Apple DOS, do not use the index hole, and the drives designed for such systems often lack the corresponding sensor; this was mainly a hardware cost-saving measure.
The core of the 3½-inch disk is the same as the other two disks, but the front has only a label and a small opening for reading and writing data, protected by the shutter—a spring-loaded metal or plastic cover, pushed to the side on entry into the drive. Rather than having a hole in the center, it has a metal hub which mates to the spindle of the drive. Typical 3½-inch disk magnetic coating materials are:
Two holes at the bottom left and right indicate whether the disk is write-protected and whether it is high-density; these holes are spaced as far apart as the holes in punched A4 paper, allowing write-protected high-density floppies to be clipped into standard ring binders. The dimensions of the disk shell are not quite square: its width is slightly less than its depth, so that it is impossible to insert the disk into a drive slot sideways (i.e. rotated 90 degrees from the correct shutter-first orientation). A diagonal notch at top right ensures that the disk is inserted into the drive in the correct orientation—not upside down or label-end first—and an arrow at top left indicates direction of insertion. The drive usually has a button that, when pressed, ejects the disk with varying degrees of force, the variation owing to how much ejection force the shutter spring provides. In IBM PC compatibles, Commodores, Apple II/IIIs, and other non-Apple-Macintosh machines with standard floppy disk drives, a disk may be ejected manually at any time. The drive has a disk-change switch that detects when a disk is ejected or inserted. Failure of this mechanical switch is a common source of disk corruption if a disk is changed and the drive (and hence the operating system) fails to notice.
One of the chief usability problems of the floppy disk is its vulnerability; even inside a closed plastic housing, the disk medium is highly sensitive to dust, condensation and temperature extremes. As with all magnetic storage, it is vulnerable to magnetic fields. Blank disks have been distributed with an extensive set of warnings, cautioning the user not to expose them to dangerous conditions. Rough treatment or removing the disk from the drive while the magnetic media is still spinning is likely to cause damage to the disk, drive head, or stored data. On the other hand, the 3½‑inch floppy has been lauded for its mechanical usability by human–computer interaction expert Donald Norman:
A simple example of a good design is the 3½-inch magnetic diskette for computers, a small circle of floppy magnetic material encased in hard plastic. Earlier types of floppy disks did not have this plastic case, which protects the magnetic material from abuse and damage. A sliding metal cover protects the delicate magnetic surface when the diskette is not in use and automatically opens when the diskette is inserted into the computer. The diskette has a square shape: there are apparently eight possible ways to insert it into the machine, only one of which is correct. What happens if I do it wrong? I try inserting the disk sideways. Ah, the designer thought of that. A little study shows that the case really isn't square: it's rectangular, so you can't insert a longer side. I try backward. The diskette goes in only part of the way. Small protrusions, indentations, and cutouts prevent the diskette from being inserted backward or upside down: of the eight ways one might try to insert the diskette, only one is correct, and only that one will fit. An excellent design.
A spindle motor in the drive rotates the magnetic medium at a certain speed, while a stepper motor-operated mechanism moves the magnetic read/write heads radially along the surface of the disk. Both read and write operations require the media to be rotating and the head to contact the disk media, an action originally accomplished by a disk-load solenoid. Later drives held the heads out of contact until a front-panel lever was rotated (5¼-inch) or disk insertion was complete (3½-inch). To write data, current is sent through a coil in the head as the media rotates. The head's magnetic field aligns the magnetization of the particles directly below the head on the media. When the current is reversed the magnetization aligns in the opposite direction, encoding one bit of data. To read data, the magnetized particles in the media induce a tiny voltage in the head coil as they pass under it. This small signal is amplified and sent to the floppy disk controller, which converts the streams of pulses from the media into data, checks it for errors, and sends it to the host computer system.
A blank unformatted diskette has a coating of magnetic oxide with no magnetic order to the particles. During formatting, the magnetizations of the particles are aligned forming tracks, each broken up into sectors, enabling the controller to properly read and write data. The tracks are concentric rings around the center, with spaces between tracks where no data is written; gaps with padding bytes are provided between the sectors and at the end of the track to allow for slight speed variations in the disk drive, and to permit better interoperability with disk drives connected to other similar systems.
Each sector of data has a header that identifies the sector location on the disk. A cyclic redundancy check (CRC) is written into the sector headers and at the end of the user data so that the disk controller can detect potential errors.
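To make the role of the CRC concrete, the short Python sketch below computes a CRC-16 over a hypothetical sector header in software. The polynomial (0x1021) and initial value (0xFFFF) are the parameters commonly cited for floppy disk controllers and are assumptions here, as are the header field values; a real controller computes this check in hardware as the bits pass the head.

```python
def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bit-by-bit CRC-16, a software stand-in for the hardware CRC generator
    inside a floppy disk controller."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

# Hypothetical sector-header fields (cylinder, head, sector, size code).
header = bytes([0x00, 0x00, 0x01, 0x02])
stored = crc16(header)                     # written to the disk after the header

corrupted = bytearray(header)
corrupted[2] ^= 0x01                       # a single flipped bit on readback...
assert crc16(bytes(corrupted)) != stored   # ...shows up as a CRC mismatch
```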
Some errors are soft and can be resolved by automatically re-trying the read operation; other errors are permanent and the disk controller will signal a failure to the operating system if multiple attempts to read the data still fail.
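The soft/permanent distinction amounts to a bounded retry loop around the low-level read. The sketch below is an illustrative assumption rather than any real controller or operating-system interface; read_sector, the failure rate and the retry count are all made up for the example.

```python
import random

def read_sector(track: int, sector: int) -> bytes:
    """Hypothetical raw read that sometimes fails, standing in for a controller
    reporting a CRC error on a marginal sector."""
    if random.random() < 0.2:          # simulated soft (transient) error
        raise IOError("CRC error")
    return bytes(512)

def read_with_retries(track: int, sector: int, attempts: int = 3) -> bytes:
    """Retry soft errors; after several failures, report a permanent error,
    mirroring how a controller signals the operating system."""
    last = None
    for _ in range(attempts):
        try:
            return read_sector(track, sector)
        except IOError as err:
            last = err
    raise IOError(f"sector {track}/{sector}: unrecoverable") from last
```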
After a disk is inserted, a catch or lever at the front of the drive is manually lowered to prevent the disk from accidentally emerging, engage the spindle clamping hub, and in two-sided drives, engage the second read/write head with the media.
In some 5¼-inch drives, insertion of the disk compresses and locks an ejection spring which partially ejects the disk upon opening the catch or lever. This enables a smaller concave area for the thumb and fingers to grasp the disk during removal.
Newer 5¼-inch drives and all 3½-inch drives automatically engage the spindle and heads when a disk is inserted, doing the opposite with the press of the eject button.
On Apple Macintosh computers with built-in 3½-inch disk drives, the ejection button is replaced by software controlling an ejection motor, which ejects the disk only when the operating system no longer needs to access the drive. The user could drag the image of the floppy drive to the trash can on the desktop to eject the disk. In the case of a power failure or drive malfunction, a loaded disk can be removed manually by inserting a straightened paper clip into a small hole in the drive's front panel, just as one would do with a CD-ROM drive in a similar situation. The Sharp X68000 featured soft-eject 5¼-inch drives. Some late-generation IBM PS/2 machines also had soft-eject 3½-inch disk drives, for which some releases of DOS (PC DOS 5.02 and higher) offered an EJECT command.
Before a disk can be accessed, the drive needs to synchronize its head position with the disk tracks. In some drives, this is accomplished with a Track Zero Sensor, while for others it involves the drive head striking an immobile reference surface.
In either case, the head is moved toward the track zero position of the disk. When a drive with the sensor has reached track zero, the head stops moving immediately and is correctly aligned. For a drive without the sensor, the mechanism attempts to move the head the maximum possible number of positions needed to reach track zero, knowing that once this motion is complete, the head will be positioned over track zero.
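A sensorless recalibration can be pictured as stepping outward at least as many times as there are tracks, so the head ends up against the stop at track zero regardless of where it started. The track count and function below are illustrative assumptions, not any particular drive's firmware.

```python
MAX_TRACKS = 80   # assumed number of track positions on the drive

def step_out_one_track() -> None:
    """Placeholder for pulsing the stepper motor one position toward track zero;
    steps taken past the mechanical stop produce the characteristic rattle."""
    pass

def recalibrate_without_sensor() -> int:
    # Issue at least MAX_TRACKS outward steps: wherever the head started,
    # the surplus steps push it against the reference surface at track zero.
    for _ in range(MAX_TRACKS):
        step_out_one_track()
    return 0   # head position is now known
```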
Some drive mechanisms, such as the Apple II 5¼-inch drive, which lacks a track zero sensor, produce characteristic mechanical noises when trying to move the heads past the reference surface. This physical striking is responsible for the 5¼-inch drive clicking during the boot of an Apple II, and for the loud rattles of its DOS and ProDOS when disk errors occurred and track zero synchronization was attempted.
All 8-inch and some 5¼-inch drives use a mechanical method to locate sectors, known as either hard or soft sectoring; this is the purpose of the small hole in the jacket, off to the side of the spindle hole. A light beam sensor detects when a punched hole in the disk is visible through the hole in the jacket.
For a soft-sectored disk, there is only a single hole, which is used to locate the first sector of each track. Clock timing is then used to find the other sectors behind it, which requires precise speed regulation of the drive motor.
For a hard-sectored disk, there are many holes, one for each sector, plus an additional hole at a half-sector position that is used to indicate sector zero.
The Apple II computer system is notable in that it did not have an index hole sensor and ignored the presence of hard or soft sectoring. Instead, it used special repeating data synchronization patterns written to the disk between each sector, to assist the computer in finding and synchronizing with the data in each track.
The later 3½-inch drives of the mid-1980s did not use sector index holes, but instead also used synchronization patterns.
Most 3½-inch drives use a constant-speed drive motor and hold the same number of sectors across all tracks. This is sometimes referred to as Constant Angular Velocity (CAV). In order to fit more data onto a disk, some 3½-inch drives (notably the Macintosh External 400K and 800K drives) instead use Constant Linear Velocity (CLV), which uses a variable-speed drive motor that spins more slowly as the head moves away from the center of the disk, maintaining the same speed of the head(s) relative to the surface(s) of the disk. This allows more sectors to be written to the longer middle and outer tracks as the track length increases.
While the original IBM 8-inch disk was actually defined in inches, the other sizes are defined in the metric system, their usual inch-based names being only rough approximations.
Different sizes of floppy disks are mechanically incompatible, and disks can fit only one size of drive. Drive assemblies with both 3½-inch and 5¼-inch slots were available during the transition period between the sizes, but they contained two separate drive mechanisms. In addition, there are many subtle, usually software-driven incompatibilities between disks of the same size: 5¼-inch disks formatted for use with Apple II computers would be unreadable and treated as unformatted on a Commodore. As computer platforms began to form, attempts were made at interchangeability. For example, the "SuperDrive" included from the Macintosh SE to the Power Macintosh G3 could read, write and format IBM PC format 3½-inch disks, but few IBM-compatible computers had drives that did the reverse. 8-inch, 5¼-inch and 3½-inch drives were manufactured in a variety of sizes, most to fit standardized drive bays. Alongside the common disk sizes were non-classical sizes for specialized systems.
Floppy disks of the first standard are 8 inches in diameter, protected by a flexible plastic jacket. The format began as a read-only medium used by IBM as a way of loading microcode. Read/write floppy disks and their drives became available in 1972, but it was IBM's 1973 introduction of the 3740 data entry system that began the establishment of floppy disks, called by IBM the Diskette 1, as an industry standard for information interchange. A formatted diskette for this system stores 242,944 bytes. Early microcomputers used for engineering, business, or word processing often used one or more 8-inch disk drives for removable storage; the CP/M operating system was developed for microcomputers with 8-inch drives.
The family of 8-inch disks and drives increased over time and later versions could store up to 1.2 MB; many microcomputer applications did not need that much capacity on one disk, so a smaller size disk with lower-cost media and drives was feasible. The 5¼-inch drive succeeded the 8-inch size in many applications, and developed to about the same storage capacity as the original 8-inch size, using higher-density media and recording techniques.
The head gap of an 80‑track high-density (1.2 MB in the MFM format) 5¼‑inch drive (a.k.a. Mini diskette, Mini disk, or Minifloppy) is smaller than that of a 40‑track double-density (360 KB if double-sided) drive; the high-density drive can also format, read and write 40‑track disks provided the controller supports double stepping or has a switch to do so. 5¼-inch 80-track drives were also called hyper drives. A blank 40‑track disk formatted and written on an 80‑track drive can be taken to its native drive without problems, and a disk formatted on a 40‑track drive can be used on an 80‑track drive. Disks written on a 40‑track drive and then updated on an 80‑track drive become unreadable on any 40‑track drives due to track width incompatibility.
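Double stepping simply maps each logical track of a 40-track disk to two physical head positions on an 80-track drive. The sketch below restates that mapping under assumed track counts; it is not the behavior of any specific controller.

```python
def physical_position(logical_track: int, drive_tracks: int = 80, disk_tracks: int = 40) -> int:
    """Head position an 80-track drive must seek to when reading a 40-track disk."""
    step_factor = drive_tracks // disk_tracks   # 2: hence "double stepping"
    return logical_track * step_factor

assert physical_position(0) == 0
assert physical_position(17) == 34   # logical track 17 lies under physical position 34
```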
Single-sided disks were coated on both sides, despite the availability of more expensive double-sided disks. The reason usually given for the higher price was that double-sided disks were certified error-free on both sides of the media. Double-sided disks could be used in some drives for single-sided disks, as long as an index signal was not needed. This was done one side at a time, by turning them over (flippy disks); more expensive dual-head drives which could read both sides without turning the disk over were later produced, and eventually came into universal use.
In the early 1980s, many manufacturers introduced smaller floppy drives and media in various formats. A consortium of 21 companies eventually settled on a 3½-inch design known as the Micro diskette, Micro disk, or Micro floppy, similar to a Sony design but improved to support both single-sided and double-sided media, with formatted capacities generally of 360 KB and 720 KB respectively. Single-sided drives shipped in 1983, and double-sided in 1984. The double-sided, high-density 1.44 MB (actually 1440 KiB = 1.41 MiB) disk drive, which would become the most popular, first shipped in 1986. The first Macintosh computers used single-sided 3½-inch floppy disks, but with 400 KB formatted capacity. These were followed in 1986 by double-sided 800 KB floppies. The higher capacity was achieved at the same recording density by varying the disk-rotation speed with head position so that the linear speed of the disk was closer to constant. Later Macs could also read and write 1.44 MB HD disks in PC format with fixed rotation speed. Higher capacities were similarly achieved by Acorn's RISC OS (800 KB for DD, 1,600 KB for HD) and AmigaOS (880 KB for DD, 1,760 KB for HD).
All 3½-inch disks have a rectangular hole in one corner which, if obstructed, write-enables the disk. A sliding detented piece can be moved to block or reveal the part of the rectangular hole that is sensed by the drive. The HD 1.44 MB disks have a second, unobstructed hole in the opposite corner that identifies them as being of that capacity.
In IBM-compatible PCs, the three densities of 3½-inch floppy disks are backwards-compatible; higher-density drives can read, write and format lower-density media. It is also possible to format a disk at a lower density than that for which it was intended, but only if the disk is first thoroughly demagnetized with a bulk eraser, as the high-density format is magnetically stronger and will prevent the disk from working in lower-density modes.
Writing at densities other than those for which disks were intended, sometimes by altering or drilling holes, was possible but not supported by manufacturers. A hole on one side of a 3½-inch disk can be altered so as to make some disk drives and operating systems treat the disk as one of higher or lower density, for bidirectional compatibility or economic reasons. Some computers, such as the PS/2 and Acorn Archimedes, ignored these holes altogether.
Other smaller floppy sizes were proposed, especially for portable or pocket-sized devices that needed a smaller storage device.
None of these sizes achieved much market success.
Floppy disk size is often referred to in inches, even in countries that use the metric system and even though the sizes are defined in metric units. The ANSI specification of 3½-inch disks is entitled in part "90 mm (3.5-inch)", though 90 mm is closer to 3.54 inches. Formatted capacities are generally stated in terms of kilobytes and megabytes.
Data is generally written to floppy disks in sectors (angular blocks) and tracks (concentric rings at a constant radius). For example, the HD format of 3½-inch floppy disks uses 512 bytes per sector, 18 sectors per track, 80 tracks per side and two sides, for a total of 1,474,560 bytes per disk. Some disk controllers can vary these parameters at the user's request, increasing storage on the disk, although the resulting disks may not be readable on machines with other controllers. For example, Microsoft applications were often distributed on 3½-inch 1.68 MB DMF disks formatted with 21 sectors instead of 18; they could still be recognized by a standard controller. On the IBM PC, MSX and most other microcomputer platforms, disks were written using a constant angular velocity (CAV) format, with the disk spinning at a constant speed and the sectors holding the same amount of information on each track regardless of radial location.
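The 1,474,560-byte total is just the product of the geometry figures; the snippet below restates that arithmetic, along with the 21-sector DMF variant mentioned above.

```python
BYTES_PER_SECTOR = 512

def capacity(sectors_per_track: int, tracks_per_side: int = 80, sides: int = 2) -> int:
    return BYTES_PER_SECTOR * sectors_per_track * tracks_per_side * sides

print(capacity(18))   # 1,474,560 bytes: the standard HD ("1.44 MB") format
print(capacity(21))   # 1,720,320 bytes = 1,680 KiB: the DMF layout marketed as 1.68 MB
```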
Because the sectors have constant angular size, the 512 bytes in each sector are compressed more near the disk's center. A more space-efficient technique would be to increase the number of sectors per track toward the outer edge of the disk, from 18 to 30 for instance, thereby keeping nearly constant the amount of physical disk space used for storing each sector; an example is zone bit recording. Apple implemented this in early Macintosh computers by spinning the disk more slowly when the head was at the edge, while maintaining the data rate, allowing 400 KB of storage per side and an extra 80 KB on a double-sided disk. This higher capacity came with a disadvantage: the format used a unique drive mechanism and control circuitry, meaning that Mac disks could not be read on other computers. Apple eventually reverted to constant angular velocity on HD floppy disks with their later machines, still unique to Apple as they supported the older variable-speed formats.
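The 400 KB-per-side figure can be reproduced from the zoned layout usually described for these drives: 80 tracks per side in five speed zones of 16 tracks, carrying 12 down to 8 sectors of 512 bytes each. The zone table below is an assumption based on that common description, not data taken from Apple's firmware.

```python
# Assumed zone layout for the variable-speed Macintosh GCR format:
# (tracks in zone, sectors per track within that zone), outermost zone first.
ZONES = [(16, 12), (16, 11), (16, 10), (16, 9), (16, 8)]
BYTES_PER_SECTOR = 512

sectors_per_side = sum(tracks * sectors for tracks, sectors in ZONES)   # 800
bytes_per_side = sectors_per_side * BYTES_PER_SECTOR                    # 409,600

print(bytes_per_side // 1024)        # 400 KB per side (KB = 1,024 bytes)
print(2 * bytes_per_side // 1024)    # 800 KB for the double-sided format
```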
Disk formatting is usually done by a utility program supplied by the computer OS manufacturer; generally, it sets up a file storage directory system on the disk, and initializes its sectors and tracks. Areas of the disk unusable for storage due to flaws can be locked (marked as "bad sectors") so that the operating system does not attempt to use them. This was time-consuming, so many environments offered quick formatting, which skipped the error-checking process. When floppy disks were in common use, disks pre-formatted for popular computers were sold. The unformatted capacity of a floppy disk does not include the sector and track headings of a formatted disk; the difference in storage between them depends on the drive's application. Floppy disk drive and media manufacturers specify the unformatted capacity (for example, 2 MB for a standard 3½-inch HD floppy). It is implied that this should not be exceeded, since doing so will most likely result in performance problems. DMF was introduced permitting 1.68 MB to fit onto an otherwise standard 3½-inch disk; utilities then appeared allowing disks to be formatted as such.
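The difference between the 2 MB unformatted figure and the 1.44 MB formatted figure can be estimated from the raw recording budget per track. The rate and spindle speed below (500 kbit/s at 300 rpm for a 3½-inch HD disk) are commonly quoted nominal values used here as assumptions.

```python
DATA_RATE_BPS = 500_000      # assumed HD raw recording rate, bits per second
RPM = 300                    # assumed 3½-inch spindle speed
TRACKS = 80 * 2              # tracks per side times sides

raw_bytes_per_track = DATA_RATE_BPS * (60 / RPM) / 8    # 12,500 bytes of raw budget
unformatted = raw_bytes_per_track * TRACKS              # 2,000,000 bytes, the "2 MB" figure

formatted = 512 * 18 * TRACKS                           # 1,474,560 bytes of user data
overhead = unformatted - formatted                      # consumed by headers, gaps and sync
print(int(unformatted), formatted, int(overhead))
```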
Mixtures of decimal prefixes and binary sector sizes require care to properly calculate total capacity. Whereas semiconductor memory naturally favors powers of two (size doubles each time an address pin is added to the integrated circuit), the capacity of a disk drive is the product of sector size, sectors per track, tracks per side and sides (which in hard disk drives with multiple platters can be greater than 2). Although other sector sizes have been known in the past, formatted sector sizes are now almost always set to powers of two (256 bytes, 512 bytes, etc.), and, in some cases, disk capacity is calculated as multiples of the sector size rather than only in bytes, leading to a combination of decimal multiples of sectors and binary sector sizes. For example, 1.44 MB 3½-inch HD disks have the "M" prefix peculiar to their context, coming from their capacity of 2,880 512-byte sectors (1,440 KiB), consistent with neither a decimal megabyte nor a binary mebibyte (MiB). Hence, these disks hold 1.47 MB or 1.41 MiB. Usable data capacity is a function of the disk format used, which in turn is determined by the FDD controller and its settings. Differences between such formats can result in capacities ranging from approximately 1,300 to 1,760 KiB (1.80 MB) on a standard 3½-inch high-density floppy (and up to nearly 2 MB with utilities such as 2M/2MGUI). The highest-capacity techniques require much tighter matching of drive head geometry between drives, something that is not always possible and that can be unreliable. For example, the LS-240 drive supports a 32 MB capacity on standard 3½-inch HD disks, but this is a write-once technique, and requires its own drive.
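Spelling out the three readings of the same byte count makes the prefix confusion concrete:

```python
sectors, sector_size = 2880, 512
total_bytes = sectors * sector_size      # 1,474,560 bytes

print(total_bytes / 1_000_000)           # 1.47  -> decimal megabytes (MB)
print(total_bytes / (1024 * 1024))       # 1.41  -> binary mebibytes (MiB)
print(total_bytes / (1000 * 1024))       # 1.44  -> the mixed "MB" printed on the label
```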
The raw maximum transfer rate of 3½-inch ED floppy drives (2.88 MB) is nominally 1,000 kilobits/s, or approximately 83% that of single-speed CD‑ROM (71% of audio CD). This represents the speed of raw data bits moving under the read head; however, the effective speed is somewhat less due to space used for headers, gaps and other format fields and can be even further reduced by delays to seek between tracks.
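The quoted percentages follow from comparing the floppy's raw bit rate with nominal optical-disc rates. Taking 1× CD-ROM as 150 kB/s and audio CD as 1,411.2 kbit/s is an assumption made for this comparison, matching the figures above.

```python
floppy_kbits = 1_000                      # nominal raw rate of a 2.88 MB ED drive
cdrom_kbits = 150 * 8                     # 1x CD-ROM at 150 kB/s -> 1,200 kbit/s
audio_cd_kbits = 44_100 * 16 * 2 / 1000   # stereo, 16-bit, 44.1 kHz -> 1,411.2 kbit/s

print(round(100 * floppy_kbits / cdrom_kbits))     # ~83%
print(round(100 * floppy_kbits / audio_cd_kbits))  # ~71%
```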
{
"paragraph_id": 0,
"text": "A floppy disk or floppy diskette (casually referred to as a floppy or a diskette) is a type of disk storage composed of a thin and flexible disk of a magnetic storage medium in a square or nearly square plastic enclosure lined with a fabric that removes dust particles from the spinning disk. Floppy disks store digital data which can be read and written when the disk is inserted into a floppy disk drive (FDD) connected to or inside a computer or other device.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The first floppy disks, invented and made by IBM, had a disk diameter of 8 inches (203.2 mm). Subsequently, the 5¼-inch and then the 3½-inch became a ubiquitous form of data storage and transfer into the first years of the 21st century. 3½-inch floppy disks can still be used with an external USB floppy disk drive. USB drives for 5¼-inch, 8-inch, and other-size floppy disks are rare to non-existent. Some individuals and organizations continue to use older equipment to read or transfer data from floppy disks.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Floppy disks were so common in late 20th-century culture that many electronic and software programs continue to use save icons that look like floppy disks well into the 21st century, as a form of skeuomorphic design. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, they have been superseded by data storage methods with much greater data storage capacity and data transfer speed, such as USB flash drives, memory cards, optical discs, and storage available through local computer networks and cloud storage.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The first commercial floppy disks, developed in the late 1960s, were 8 inches (203.2 mm) in diameter; they became commercially available in 1971 as a component of IBM products and both drives and disks were then sold separately starting in 1972 by Memorex and others. These disks and associated drives were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates, and Burroughs Corporation. The term \"floppy disk\" appeared in print as early as 1970, and although IBM announced its first media as the Type 1 Diskette in 1973, the industry continued to use the terms \"floppy disk\" or \"floppy\".",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In 1976, Shugart Associates introduced the 5¼-inch FDD. By 1978, there were more than ten manufacturers producing such FDDs. There were competing floppy disk formats, with hard- and soft-sector versions and encoding schemes such as differential Manchester encoding (DM), modified frequency modulation (MFM), MFM and group coded recording (GCR). The 5¼-inch format displaced the 8-inch one for most uses, and the hard-sectored disk format disappeared. The most common capacity of the 5¼-inch format in DOS-based PCs was 360 KB (368,640 bytes) for the Double-Sided Double-Density (DSDD) format using MFM encoding. In 1984, IBM introduced with its PC/AT the 1.2 MB (1,228,800 bytes) dual-sided 5¼-inch floppy disk, but it never became very popular. IBM started using the 720 KB double density 3½-inch microfloppy disk on its Convertible laptop computer in 1986 and the 1.44 MB high-density version with the IBM Personal System/2 (PS/2) line in 1987. These disk drives could be added to older PC models. In 1988, Y-E Data introduced a drive for 2.88 MB Double-Sided Extended-Density (DSED) diskettes which was used by IBM in its top-of-the-line PS/2 and some RS/6000 models and in the second-generation NeXTcube and NeXTstation; however, this format had limited market success due to lack of standards and movement to 1.44 MB drives.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Throughout the early 1980s, limits of the 5¼-inch format became clear. Originally designed to be more practical than the 8-inch format, it was becoming considered too large; as the quality of recording media grew, data could be stored in a smaller area. Several solutions were developed, with drives at 2-, 2½-, 3-, 3¼-, 3½- and 4-inches (and Sony's 90 mm × 94 mm (3.54 in × 3.70 in) disk) offered by various companies. They all had several advantages over the old format, including a rigid case with a sliding metal (or later, sometimes plastic) shutter over the head slot, which helped protect the delicate magnetic medium from dust and damage, and a sliding write protection tab, which was far more convenient than the adhesive tabs used with earlier disks. The large market share of the well-established 5¼-inch format made it difficult for these diverse mutually-incompatible new formats to gain significant market share. A variant on the Sony design, introduced in 1983 by many manufacturers, was then rapidly adopted. By 1988, the 3½-inch was outselling the 5¼-inch.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Generally, the term floppy disk persisted, even though later style floppy disks have a rigid case around an internal floppy disk.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "By the end of the 1980s, 5¼-inch disks had been superseded by 3½-inch disks. During this time, PCs frequently came equipped with drives of both sizes. By the mid-1990s, 5¼-inch drives had virtually disappeared, as the 3½-inch disk became the predominant floppy disk. The advantages of the 3½-inch disk were its higher capacity, its smaller physical size, and its rigid case which provided better protection from dirt and other environmental risks.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Floppy disks became commonplace during the 1980s and 1990s in their use with personal computers to distribute software, transfer data, and create backups. Before hard disks became affordable to the general population, floppy disks were often used to store a computer's operating system (OS). Most home computers from that time have an elementary OS and BASIC stored in read-only memory (ROM), with the option of loading a more advanced OS from a floppy disk.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "By the early 1990s, the increasing software size meant large packages like Windows or Adobe Photoshop required a dozen disks or more. In 1996, there were an estimated five billion standard floppy disks in use. Then, distribution of larger packages was gradually replaced by CD-ROMs, DVDs, and online distribution.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "An attempt to enhance the existing 3½-inch designs was the SuperDisk in the late 1990s, using very narrow data tracks and a high precision head guidance mechanism with a capacity of 120 MB and backward-compatibility with standard 3½-inch floppies; a format war briefly occurred between SuperDisk and other high-density floppy-disk products, although ultimately recordable CDs/DVDs, solid-state flash storage, and eventually cloud-based online storage would render all these removable disk formats obsolete. External USB-based floppy disk drives are still available, and many modern systems provide firmware support for booting from such drives.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In the mid-1990s, mechanically incompatible higher-density floppy disks were introduced, like the Iomega Zip disk. Adoption was limited by the competition between proprietary formats and the need to buy expensive drives for computers where the disks would be used. In some cases, failure in market penetration was exacerbated by the release of higher-capacity versions of the drive and media being not backward-compatible with the original drives, dividing the users between new and old adopters. Consumers were wary of making costly investments into unproven and rapidly changing technologies, so none of the technologies became the established standard.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Apple introduced the iMac G3 in 1998 with a CD-ROM drive but no floppy drive; this made USB-connected floppy drives popular accessories, as the iMac came without any writable removable media device.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Recordable CDs were touted as an alternative, because of the greater capacity, compatibility with existing CD-ROM drives, and—with the advent of re-writeable CDs and packet writing—a similar reusability as floppy disks. However, CD-R/RWs remained mostly an archival medium, not a medium for exchanging data or editing files on the medium itself, because there was no common standard for packet writing which allowed for small updates. Other formats, such as magneto-optical discs, had the flexibility of floppy disks combined with greater capacity, but remained niche due to costs. High-capacity backward compatible floppy technologies became popular for a while and were sold as an option or even included in standard PCs, but in the long run, their use was limited to professionals and enthusiasts.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Flash-based USB-thumb drives finally were a practical and popular replacement, that supported traditional file systems and all common usage scenarios of floppy disks. As opposed to other solutions, no new drive type or special software was required that impeded adoption, since all that was necessary was an already common USB port.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "By 2002, most manufacturers still provided floppy disk drives as standard equipment to meet user demand for file-transfer and an emergency boot device, as well as for the general secure feeling of having the familiar device. By this time, the retail cost of a floppy drive had fallen to around $20 (equivalent to $33 in 2022), so there was little financial incentive to omit the device from a system. Subsequently, enabled by the widespread support for USB flash drives and BIOS boot, manufacturers and retailers progressively reduced the availability of floppy disk drives as standard equipment. In February 2003, Dell, one of the leading personal computer vendors, announced that floppy drives would no longer be pre-installed on Dell Dimension home computers, although they were still available as a selectable option and purchasable as an aftermarket OEM add-on. By January 2007, only 2% of computers sold in stores contained built-in floppy disk drives.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Floppy disks are used for emergency boots in aging systems lacking support for other bootable media and for BIOS updates, since most BIOS and firmware programs can still be executed from bootable floppy disks. If BIOS updates fail or become corrupt, floppy drives can sometimes be used to perform a recovery. The music and theatre industries still use equipment requiring standard floppy disks (e.g. synthesizers, samplers, drum machines, sequencers, and lighting consoles). Industrial automation equipment such as programmable machinery and industrial robots may not have a USB interface; data and programs are then loaded from disks, damageable in industrial environments. This equipment may not be replaced due to cost or requirement for continuous availability; existing software emulation and virtualization do not solve this problem because a customized operating system is used that has no drivers for USB devices. Hardware floppy disk emulators can be made to interface floppy-disk controllers to a USB port that can be used for flash drives.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In May 2016, the United States Government Accountability Office released a report that covered the need to upgrade or replace legacy computer systems within federal agencies. According to this document, old IBM Series/1 minicomputers running on 8-inch floppy disks are still used to coordinate \"the operational functions of the United States' nuclear forces\". The government planned to update some of the technology by the end of the 2017 fiscal year.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Windows 10 and Windows 11 no longer come with drivers for floppy disk drives (both internal and external). However, they will still support them with a separate device driver provided by Microsoft.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The British Airways Boeing 747-400 fleet, up to its retirement in 2020, used 3½-inch floppy disks to load avionics software.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Some workstations in corporate computing environments still retained floppy disks while disabling USB ports, both moves done to restrict the amount of data that could be copied by unscrupulous employees.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Sony, who had been in the floppy disk business since 1983, ended domestic sales of all six 3½-inch floppy disk models as of March 2011. This has been viewed by some as the end of the floppy disk. While production of new floppy disk media has ceased, sales and uses of this media from inventories is expected to continue until at least 2026.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "For more than two decades, the floppy disk was the primary external writable storage device used. Most computing environments before the 1990s were non-networked, and floppy disks were the primary means to transfer data between computers, a method known informally as sneakernet. Unlike hard disks, floppy disks are handled and seen; even a novice user can identify a floppy disk. Because of these factors, a picture of a 3½-inch floppy disk became an interface metaphor for saving data. The floppy disk symbol is still used by software on user-interface elements related to saving files (such as LibreOffice) even though physical floppy disks are largely obsolete.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The 8-inch and 5¼-inch floppy disks contain a magnetically coated round plastic medium with a large circular hole in the center for a drive's spindle. The medium is contained in a square plastic cover that has a small oblong opening in both sides to allow the drive's heads to read and write data and a large hole in the center to allow the magnetic medium to spin by rotating it from its middle hole.",
"title": "Design"
},
{
"paragraph_id": 24,
"text": "Inside the cover are two layers of fabric with the magnetic medium sandwiched in the middle. The fabric is designed to reduce friction between the medium and the outer cover, and catch particles of debris abraded off the disk to keep them from accumulating on the heads. The cover is usually a one-part sheet, double-folded with flaps glued or spot-welded together.",
"title": "Design"
},
{
"paragraph_id": 25,
"text": "A small notch on the side of the disk identifies that it is writable, detected by a mechanical switch or phototransistor above it; if it is not present, the disk can be written; in the 8-inch disk the notch is covered to enable writing while in the 5¼-inch disk the notch is open to enable writing. Tape may be used over the notch to change the mode of the disk. Punch devices were sold to convert read-only disks to writable ones and enable writing on the unused side of single sided disks; such modified disks became known as flippy disks.",
"title": "Design"
},
{
"paragraph_id": 26,
"text": "Another LED/photo-transistor pair located near the center of the disk detects the index hole once per rotation in the magnetic disk; it is used to detect the angular start of each track and whether or not the disk is rotating at the correct speed. Early 8‑inch and 5¼‑inch disks had physical holes for each sector and were termed hard sectored disks. Later soft-sectored disks have only one index hole, and sector position is determined by the disk controller or low-level software from patterns marking the start of a sector. Generally, the same drives are used to read and write both types of disks, with only the disks and controllers differing. Some operating systems using soft sectors, such as Apple DOS, do not use the index hole, and the drives designed for such systems often lack the corresponding sensor; this was mainly a hardware cost-saving measure.",
"title": "Design"
},
{
"paragraph_id": 27,
"text": "The core of the 3½-inch disk is the same as the other two disks, but the front has only a label and a small opening for reading and writing data, protected by the shutter—a spring-loaded metal or plastic cover, pushed to the side on entry into the drive. Rather than having a hole in the center, it has a metal hub which mates to the spindle of the drive. Typical 3½-inch disk magnetic coating materials are:",
"title": "Design"
},
{
"paragraph_id": 28,
"text": "Two holes at the bottom left and right indicate whether the disk is write-protected and whether it is high-density; these holes are spaced as far apart as the holes in punched A4 paper, allowing write-protected high-density floppies to be clipped into standard ring binders. The dimensions of the disk shell are not quite square: its width is slightly less than its depth, so that it is impossible to insert the disk into a drive slot sideways (i.e. rotated 90 degrees from the correct shutter-first orientation). A diagonal notch at top right ensures that the disk is inserted into the drive in the correct orientation—not upside down or label-end first—and an arrow at top left indicates direction of insertion. The drive usually has a button that, when pressed, ejects the disk with varying degrees of force, the discrepancy due to the ejection force provided by the spring of the shutter. In IBM PC compatibles, Commodores, Apple II/IIIs, and other non-Apple-Macintosh machines with standard floppy disk drives, a disk may be ejected manually at any time. The drive has a disk-change switch that detects when a disk is ejected or inserted. Failure of this mechanical switch is a common source of disk corruption if a disk is changed and the drive (and hence the operating system) fails to notice.",
"title": "Design"
},
{
"paragraph_id": 29,
"text": "One of the chief usability problems of the floppy disk is its vulnerability; even inside a closed plastic housing, the disk medium is highly sensitive to dust, condensation and temperature extremes. As with all magnetic storage, it is vulnerable to magnetic fields. Blank disks have been distributed with an extensive set of warnings, cautioning the user not to expose it to dangerous conditions. Rough treatment or removing the disk from the drive while the magnetic media is still spinning is likely to cause damage to the disk, drive head, or stored data. On the other hand, the 3½‑inch floppy has been lauded for its mechanical usability by human–computer interaction expert Donald Norman:",
"title": "Design"
},
{
"paragraph_id": 30,
"text": "A simple example of a good design is the 3½-inch magnetic diskette for computers, a small circle of floppy magnetic material encased in hard plastic. Earlier types of floppy disks did not have this plastic case, which protects the magnetic material from abuse and damage. A sliding metal cover protects the delicate magnetic surface when the diskette is not in use and automatically opens when the diskette is inserted into the computer. The diskette has a square shape: there are apparently eight possible ways to insert it into the machine, only one of which is correct. What happens if I do it wrong? I try inserting the disk sideways. Ah, the designer thought of that. A little study shows that the case really isn't square: it's rectangular, so you can't insert a longer side. I try backward. The diskette goes in only part of the way. Small protrusions, indentations, and cutouts prevent the diskette from being inserted backward or upside down: of the eight ways one might try to insert the diskette, only one is correct, and only that one will fit. An excellent design.",
"title": "Design"
},
{
"paragraph_id": 31,
"text": "A spindle motor in the drive rotates the magnetic medium at a certain speed, while a stepper motor-operated mechanism moves the magnetic read/write heads radially along the surface of the disk. Both read and write operations require the media to be rotating and the head to contact the disk media, an action originally accomplished by a disk-load solenoid. Later drives held the heads out of contact until a front-panel lever was rotated (5¼-inch) or disk insertion was complete (3½-inch). To write data, current is sent through a coil in the head as the media rotates. The head's magnetic field aligns the magnetization of the particles directly below the head on the media. When the current is reversed the magnetization aligns in the opposite direction, encoding one bit of data. To read data, the magnetization of the particles in the media induce a tiny voltage in the head coil as they pass under it. This small signal is amplified and sent to the floppy disk controller, which converts the streams of pulses from the media into data, checks it for errors, and sends it to the host computer system.",
"title": "Design"
},
{
"paragraph_id": 32,
"text": "A blank unformatted diskette has a coating of magnetic oxide with no magnetic order to the particles. During formatting, the magnetizations of the particles are aligned forming tracks, each broken up into sectors, enabling the controller to properly read and write data. The tracks are concentric rings around the center, with spaces between tracks where no data is written; gaps with padding bytes are provided between the sectors and at the end of the track to allow for slight speed variations in the disk drive, and to permit better interoperability with disk drives connected to other similar systems.",
"title": "Design"
},
{
"paragraph_id": 33,
"text": "Each sector of data has a header that identifies the sector location on the disk. A cyclic redundancy check (CRC) is written into the sector headers and at the end of the user data so that the disk controller can detect potential errors.",
"title": "Design"
},
{
"paragraph_id": 34,
"text": "Some errors are soft and can be resolved by automatically re-trying the read operation; other errors are permanent and the disk controller will signal a failure to the operating system if multiple attempts to read the data still fail.",
"title": "Design"
},
{
"paragraph_id": 35,
"text": "After a disk is inserted, a catch or lever at the front of the drive is manually lowered to prevent the disk from accidentally emerging, engage the spindle clamping hub, and in two-sided drives, engage the second read/write head with the media.",
"title": "Design"
},
{
"paragraph_id": 36,
"text": "In some 5¼-inch drives, insertion of the disk compresses and locks an ejection spring which partially ejects the disk upon opening the catch or lever. This enables a smaller concave area for the thumb and fingers to grasp the disk during removal.",
"title": "Design"
},
{
"paragraph_id": 37,
"text": "Newer 5¼-inch drives and all 3½-inch drives automatically engage the spindle and heads when a disk is inserted, doing the opposite with the press of the eject button.",
"title": "Design"
},
{
"paragraph_id": 38,
"text": "On Apple Macintosh computers with built-in 3½-inch disk drives, the ejection button is replaced by software controlling an ejection motor which only does so when the operating system no longer needs to access the drive. The user could drag the image of the floppy drive to the trash can on the desktop to eject the disk. In the case of a power failure or drive malfunction, a loaded disk can be removed manually by inserting a straightened paper clip into a small hole at the drive's front panel, just as one would do with a CD-ROM drive in a similar situation. The Sharp X68000 featured soft-eject 5¼-inch drives. Some late-generation IBM PS/2 machines had soft-eject 3½-inch disk drives as well for which some issues of DOS (i.e. PC DOS 5.02 and higher) offered an EJECT command.",
"title": "Design"
},
{
"paragraph_id": 39,
"text": "Before a disk can be accessed, the drive needs to synchronize its head position with the disk tracks. In some drives, this is accomplished with a Track Zero Sensor, while for others it involves the drive head striking an immobile reference surface.",
"title": "Design"
},
{
"paragraph_id": 40,
"text": "In either case, the head is moved so that it is approaching track zero position of the disk. When a drive with the sensor has reached track zero, the head stops moving immediately and is correctly aligned. For a drive without the sensor, the mechanism attempts to move the head the maximum possible number of positions needed to reach track zero, knowing that once this motion is complete, the head will be positioned over track zero.",
"title": "Design"
},
{
"paragraph_id": 41,
"text": "Some drive mechanisms such as the Apple II 5¼-inch drive without a track zero sensor, produce characteristic mechanical noises when trying to move the heads past the reference surface. This physical striking is responsible for the 5¼-inch drive clicking during the boot of an Apple II, and the loud rattles of its DOS and ProDOS when disk errors occurred and track zero synchronization was attempted.",
"title": "Design"
},
{
"paragraph_id": 42,
"text": "All 8-inch and some 5¼-inch drives used a mechanical method to locate sectors, known as either hard sectors or soft sectors, and is the purpose of the small hole in the jacket, off to the side of the spindle hole. A light beam sensor detects when a punched hole in the disk is visible through the hole in the jacket.",
"title": "Design"
},
{
"paragraph_id": 43,
"text": "For a soft-sectored disk, there is only a single hole, which is used to locate the first sector of each track. Clock timing is then used to find the other sectors behind it, which requires precise speed regulation of the drive motor.",
"title": "Design"
},
{
"paragraph_id": 44,
"text": "For a hard-sectored disk, there are many holes, one for each sector row, plus an additional hole in a half-sector position, that is used to indicate sector zero.",
"title": "Design"
},
{
"paragraph_id": 45,
"text": "The Apple II computer system is notable in that it did not have an index hole sensor and ignored the presence of hard or soft sectoring. Instead, it used special repeating data synchronization patterns written to the disk between each sector, to assist the computer in finding and synchronizing with the data in each track.",
"title": "Design"
},
{
"paragraph_id": 46,
"text": "The later 3½-inch drives of the mid-1980s did not use sector index holes, but instead also used synchronization patterns.",
"title": "Design"
},
{
"paragraph_id": 47,
"text": "Most 3½-inch drives used a constant speed drive motor and contain the same number of sectors across all tracks. This is sometimes referred to as Constant Angular Velocity (CAV). In order to fit more data onto a disk, some 3½-inch drives (notably the Macintosh External 400K and 800K drives) instead use Constant Linear Velocity (CLV), which uses a variable speed drive motor that spins more slowly as the head moves away from the center of the disk, maintaining the same speed of the head(s) relative to the surface(s) of the disk. This allows more sectors to be written to the longer middle and outer tracks as the track length increases.",
"title": "Design"
},
{
"paragraph_id": 48,
"text": "While the original IBM 8-inch disk was actually so defined, the other sizes are defined in the metric system, their usual names being but rough approximations.",
"title": "Sizes"
},
{
"paragraph_id": 49,
"text": "Different sizes of floppy disks are mechanically incompatible, and disks can fit only one size of drive. Drive assemblies with both 3½-inch and 5¼-inch slots were available during the transition period between the sizes, but they contained two separate drive mechanisms. In addition, there are many subtle, usually software-driven incompatibilities between the two. 5¼-inch disks formatted for use with Apple II computers would be unreadable and treated as unformatted on a Commodore. As computer platforms began to form, attempts were made at interchangeability. For example, the \"SuperDrive\" included from the Macintosh SE to the Power Macintosh G3 could read, write and format IBM PC format 3½-inch disks, but few IBM-compatible computers had drives that did the reverse. 8-inch, 5¼-inch and 3½-inch drives were manufactured in a variety of sizes, most to fit standardized drive bays. Alongside the common disk sizes were non-classical sizes for specialized systems.",
"title": "Sizes"
},
{
"paragraph_id": 50,
"text": "Floppy disks of the first standard are 8 inches in diameter, protected by a flexible plastic jacket. It was a read-only device used by IBM as a way of loading microcode. Read/write floppy disks and their drives became available in 1972, but it was IBM's 1973 introduction of the 3740 data entry system that began the establishment of floppy disks, called by IBM the Diskette 1, as an industry standard for information interchange. Formatted diskette for this system store 242,944 bytes. Early microcomputers used for engineering, business, or word processing often used one or more 8-inch disk drives for removable storage; the CP/M operating system was developed for microcomputers with 8-inch drives.",
"title": "Sizes"
},
{
"paragraph_id": 51,
"text": "The family of 8-inch disks and drives increased over time and later versions could store up to 1.2 MB; many microcomputer applications did not need that much capacity on one disk, so a smaller size disk with lower-cost media and drives was feasible. The 5¼-inch drive succeeded the 8-inch size in many applications, and developed to about the same storage capacity as the original 8-inch size, using higher-density media and recording techniques.",
"title": "Sizes"
},
{
"paragraph_id": 52,
"text": "The head gap of an 80‑track high-density (1.2 MB in the MFM format) 5¼‑inch drive (a.k.a. Mini diskette, Mini disk, or Minifloppy) is smaller than that of a 40‑track double-density (360 KB if double-sided) drive but can also format, read and write 40‑track disks provided the controller supports double stepping or has a switch to do so. 5¼-inch 80-track drives were also called hyper drives. A blank 40‑track disk formatted and written on an 80‑track drive can be taken to its native drive without problems, and a disk formatted on a 40‑track drive can be used on an 80‑track drive. Disks written on a 40‑track drive and then updated on an 80 track drive become unreadable on any 40‑track drives due to track width incompatibility.",
"title": "Sizes"
},
{
"paragraph_id": 53,
"text": "Single-sided disks were coated on both sides, despite the availability of more expensive double sided disks. The reason usually given for the higher price was that double sided disks were certified error-free on both sides of the media. Double-sided disks could be used in some drives for single-sided disks, as long as an index signal was not needed. This was done one side at a time, by turning them over (flippy disks); more expensive dual-head drives which could read both sides without turning over were later produced, and eventually became used universally.",
"title": "Sizes"
},
{
"paragraph_id": 54,
"text": "In the early 1980s, many manufacturers introduced smaller floppy drives and media in various formats. A consortium of 21 companies eventually settled on a 3½-inch design known as the Micro diskette, Micro disk, or Micro floppy, similar to a Sony design but improved to support both single-sided and double-sided media, with formatted capacities generally of 360 KB and 720 KB respectively. Single-sided drives shipped in 1983, and double-sided in 1984. The double-sided, high-density 1.44 MB (actually 1440 KiB = 1.41 MiB) disk drive, which would become the most popular, first shipped in 1986. The first Macintosh computers used single-sided 3½-inch floppy disks, but with 400 KB formatted capacity. These were followed in 1986 by double-sided 800 KB floppies. The higher capacity was achieved at the same recording density by varying the disk-rotation speed with head position so that the linear speed of the disk was closer to constant. Later Macs could also read and write 1.44 MB HD disks in PC format with fixed rotation speed. Higher capacities were similarly achieved by Acorn's RISC OS (800 KB for DD, 1,600 KB for HD) and AmigaOS (880 KB for DD, 1,760 KB for HD).",
"title": "Sizes"
},
{
"paragraph_id": 55,
"text": "All 3½-inch disks have a rectangular hole in one corner which, if obstructed, write-enables the disk. A sliding detented piece can be moved to block or reveal the part of the rectangular hole that is sensed by the drive. The HD 1.44 MB disks have a second, unobstructed hole in the opposite corner that identifies them as being of that capacity.",
"title": "Sizes"
},
{
"paragraph_id": 56,
"text": "In IBM-compatible PCs, the three densities of 3½-inch floppy disks are backwards-compatible; higher-density drives can read, write and format lower-density media. It is also possible to format a disk at a lower density than that for which it was intended, but only if the disk is first thoroughly demagnetized with a bulk eraser, as the high-density format is magnetically stronger and will prevent the disk from working in lower-density modes.",
"title": "Sizes"
},
{
"paragraph_id": 57,
"text": "Writing at different densities than those at which disks were intended, sometimes by altering or drilling holes, was possible but not supported by manufacturers. A hole on one side of a 3½-inch disk can be altered as to make some disk drives and operating systems treat the disk as one of higher or lower density, for bidirectional compatibility or economical reasons. Some computers, such as the PS/2 and Acorn Archimedes, ignored these holes altogether.",
"title": "Sizes"
},
{
"paragraph_id": 58,
"text": "Other smaller floppy sizes were proposed, especially for portable or pocket-sized devices that needed a smaller storage device.",
"title": "Sizes"
},
{
"paragraph_id": 59,
"text": "None of these sizes achieved much market success.",
"title": "Sizes"
},
{
"paragraph_id": 60,
"text": "Floppy disk size is often referred to in inches, even in countries using metric and though the size is defined in metric. The ANSI specification of 3½-inch disks is entitled in part \"90 mm (3.5-inch)\" though 90 mm is closer to 3.54 inches. Formatted capacities are generally set in terms of kilobytes and megabytes.",
"title": "Sizes"
},
{
"paragraph_id": 61,
"text": "Data is generally written to floppy disks in sectors (angular blocks) and tracks (concentric rings at a constant radius). For example, the HD format of 3½-inch floppy disks uses 512 bytes per sector, 18 sectors per track, 80 tracks per side and two sides, for a total of 1,474,560 bytes per disk. Some disk controllers can vary these parameters at the user's request, increasing storage on the disk, although they may not be able to be read on machines with other controllers. For example, Microsoft applications were often distributed on 3½-inch 1.68 MB DMF disks formatted with 21 sectors instead of 18; they could still be recognized by a standard controller. On the IBM PC, MSX and most other microcomputer platforms, disks were written using a constant angular velocity (CAV) format, with the disk spinning at a constant speed and the sectors holding the same amount of information on each track regardless of radial location.",
"title": "Sizes"
},
{
"paragraph_id": 62,
"text": "Because the sectors have constant angular size, the 512 bytes in each sector are compressed more near the disk's center. A more space-efficient technique would be to increase the number of sectors per track toward the outer edge of the disk, from 18 to 30 for instance, thereby keeping nearly constant the amount of physical disk space used for storing each sector; an example is zone bit recording. Apple implemented this in early Macintosh computers by spinning the disk more slowly when the head was at the edge, while maintaining the data rate, allowing 400 KB of storage per side and an extra 80 KB on a double-sided disk. This higher capacity came with a disadvantage: the format used a unique drive mechanism and control circuitry, meaning that Mac disks could not be read on other computers. Apple eventually reverted to constant angular velocity on HD floppy disks with their later machines, still unique to Apple as they supported the older variable-speed formats.",
"title": "Sizes"
},
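To make the space-efficiency argument concrete, here is an illustrative Python sketch comparing a constant 18-sectors-per-track layout with a zoned layout that ramps from 18 sectors on the innermost track to 30 on the outermost, the range mentioned in the paragraph. The linear ramp and zone boundaries are assumptions made purely for illustration and do not describe any real drive geometry:

```python
# Illustrative comparison only: constant sectors per track (CAV) versus a
# hypothetical zoned layout ramping linearly from 18 to 30 sectors per track.
# 512-byte sectors, 80 tracks per side and 2 sides are taken from this section;
# the ramp itself is an assumption for illustration.

BYTES_PER_SECTOR = 512
TRACKS_PER_SIDE = 80
SIDES = 2

constant = BYTES_PER_SECTOR * 18 * TRACKS_PER_SIDE * SIDES

zoned = 0
for track in range(TRACKS_PER_SIDE):
    # track 0 = innermost (18 sectors), track 79 = outermost (30 sectors)
    sectors = 18 + round(track * (30 - 18) / (TRACKS_PER_SIDE - 1))
    zoned += BYTES_PER_SECTOR * sectors * SIDES

print(constant)          # 1,474,560 bytes with a constant sectors-per-track layout
print(zoned)             # larger, because outer tracks hold more sectors
print(zoned / constant)  # rough capacity gain from zone-bit-style recording
```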
{
"paragraph_id": 63,
"text": "Disk formatting is usually done by a utility program supplied by the computer OS manufacturer; generally, it sets up a file storage directory system on the disk, and initializes its sectors and tracks. Areas of the disk unusable for storage due to flaws can be locked (marked as \"bad sectors\") so that the operating system does not attempt to use them. This was time-consuming so many environments had quick formatting which skipped the error checking process. When floppy disks were often used, disks pre-formatted for popular computers were sold. The unformatted capacity of a floppy disk does not include the sector and track headings of a formatted disk; the difference in storage between them depends on the drive's application. Floppy disk drive and media manufacturers specify the unformatted capacity (for example, 2 MB for a standard 3½-inch HD floppy). It is implied that this should not be exceeded, since doing so will most likely result in performance problems. DMF was introduced permitting 1.68 MB to fit onto an otherwise standard 3½-inch disk; utilities then appeared allowing disks to be formatted as such.",
"title": "Sizes"
},
{
"paragraph_id": 64,
"text": "Mixtures of decimal prefixes and binary sector sizes require care to properly calculate total capacity. Whereas semiconductor memory naturally favors powers of two (size doubles each time an address pin is added to the integrated circuit), the capacity of a disk drive is the product of sector size, sectors per track, tracks per side and sides (which in hard disk drives with multiple platters can be greater than 2). Although other sector sizes have been known in the past, formatted sector sizes are now almost always set to powers of two (256 bytes, 512 bytes, etc.), and, in some cases, disk capacity is calculated as multiples of the sector size rather than only in bytes, leading to a combination of decimal multiples of sectors and binary sector sizes. For example, 1.44 MB 3½-inch HD disks have the \"M\" prefix peculiar to their context, coming from their capacity of 2,880 512-byte sectors (1,440 KiB), consistent with neither a decimal megabyte nor a binary mebibyte (MiB). Hence, these disks hold 1.47 MB or 1.41 MiB. Usable data capacity is a function of the disk format used, which in turn is determined by the FDD controller and its settings. Differences between such formats can result in capacities ranging from approximately 1,300 to 1,760 KiB (1.80 MB) on a standard 3½-inch high-density floppy (and up to nearly 2 MB with utilities such as 2M/2MGUI). The highest capacity techniques require much tighter matching of drive head geometry between drives, something not always possible and unreliable. For example, the LS-240 drive supports a 32 MB capacity on standard 3½-inch HD disks, but this is a write-once technique, and requires its own drive.",
"title": "Sizes"
},
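The mixed decimal/binary "1.44 MB" label described above can be checked with a few lines of Python; the only inputs are the 2,880 sectors of 512 bytes each quoted in the paragraph:

```python
# Worked check of the "1.44 MB" figure discussed above.
# 2,880 sectors of 512 bytes each are the figures quoted in the paragraph.

sectors = 2880
bytes_total = sectors * 512

print(bytes_total)                    # 1,474,560 bytes
print(bytes_total / 1024)             # 1440.0 -> the "1.44" comes from 1,440 KiB
print(bytes_total / 1_000_000)        # ~1.47 decimal megabytes (MB)
print(bytes_total / (1024 * 1024))    # ~1.41 binary mebibytes (MiB)
```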
{
"paragraph_id": 65,
"text": "The raw maximum transfer rate of 3½-inch ED floppy drives (2.88 MB) is nominally 1,000 kilobits/s, or approximately 83% that of single-speed CD‑ROM (71% of audio CD). This represents the speed of raw data bits moving under the read head; however, the effective speed is somewhat less due to space used for headers, gaps and other format fields and can be even further reduced by delays to seek between tracks.",
"title": "Sizes"
}
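The percentage comparisons in the transfer-rate paragraph can be reproduced with a short sketch. The 1,000 kbit/s floppy figure is taken from the paragraph; the reference rates for single-speed CD-ROM (150 kB/s of user data) and audio CD (44,100 Hz × 16 bit × 2 channels) are assumptions used for the comparison, not figures stated in the text:

```python
# Rough check of the percentage comparisons quoted above. The floppy figure
# (1,000 kbit/s) is from the paragraph; the CD-ROM and audio-CD reference
# rates below are assumptions, not taken from the text.

floppy_kbps = 1000.0
cdrom_kbps = 150 * 8                      # ~1,200 kbit/s, single-speed CD-ROM user data
audio_cd_kbps = 44100 * 16 * 2 / 1000     # 1,411.2 kbit/s, uncompressed audio CD

print(floppy_kbps / cdrom_kbps)       # ~0.83, matching the "approximately 83%" figure
print(floppy_kbps / audio_cd_kbps)    # ~0.71, matching the "71% of audio CD" figure
```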
]
| A floppy disk or floppy diskette is a type of disk storage composed of a thin and flexible disk of a magnetic storage medium in a square or nearly square plastic enclosure lined with a fabric that removes dust particles from the spinning disk. Floppy disks store digital data which can be read and written when the disk is inserted into a floppy disk drive (FDD) connected to or inside a computer or other device. The first floppy disks, invented and made by IBM, had a disk diameter of 8 inches (203.2 mm). Subsequently, the 5¼-inch and then the 3½-inch became a ubiquitous form of data storage and transfer into the first years of the 21st century. 3½-inch floppy disks can still be used with an external USB floppy disk drive. USB drives for 5¼-inch, 8-inch, and other-size floppy disks are rare to non-existent. Some individuals and organizations continue to use older equipment to read or transfer data from floppy disks. Floppy disks were so common in late 20th-century culture that many electronic and software programs continue to use save icons that look like floppy disks well into the 21st century, as a form of skeuomorphic design. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, they have been superseded by data storage methods with much greater data storage capacity and data transfer speed, such as USB flash drives, memory cards, optical discs, and storage available through local computer networks and cloud storage. | 2001-07-26T22:04:00Z | 2023-12-14T23:23:53Z | [
"Template:Short description",
"Template:Main",
"Template:Memory types",
"Template:Dubious",
"Template:Anchor",
"Template:Cite magazine",
"Template:Cite patent",
"Template:Blockquote",
"Template:Nowrap",
"Template:Multiple image",
"Template:Clarify",
"Template:Cite news",
"Template:Cite report",
"Template:Cite journal",
"Template:Ordered list",
"Template:Failed verification",
"Template:Cite web",
"Template:Dead link",
"Template:Commons",
"Template:Magnetic storage media",
"Template:Convert",
"Template:Clear",
"Template:Refn",
"Template:N/a",
"Template:Portal",
"Template:Notelist",
"Template:Reflist",
"Template:ISBN",
"Template:Basic computer components",
"Template:Redirect",
"Template:Use dmy dates",
"Template:Inflation",
"Template:Cite book",
"Template:Webarchive",
"Template:Ecma International Standards",
"Template:Circa",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Floppy_disk |
10,893 | Fencing | Fencing is a combat sport that features sword fighting. The three disciplines of modern fencing are the foil, the épée, and the sabre (also saber); each discipline uses a different kind of blade, which shares the same name, and employs its own rules. Most competitive fencers specialise in one discipline. The modern sport gained prominence near the end of the 19th century and is based on the traditional skill set of swordsmanship. The Italian school altered the historical European martial art of classical fencing, and the French school later refined that system. Scoring points in a fencing competition is done by making contact with an opponent.
The 1904 Olympic Games featured a fourth discipline of fencing known as singlestick, but it was dropped after that year and is not a part of modern fencing. Competitive fencing was one of the first sports to be featured in the Olympics and, along with athletics, cycling, swimming, and gymnastics, has been featured in every modern Olympics.
Fencing is governed by the Fédération Internationale d'Escrime (FIE), headquartered in Lausanne, Switzerland. The FIE is composed of 155 national federations, each of which is recognised by its state Olympic Committee as the sole representative of Olympic-style fencing in that country.
The FIE maintains the current rules used by major international events, including world cups, world championships and the Olympic Games. The FIE handles proposals to change the rules at an annual congress.
University students compete internationally at the World University Games. The United States holds two national-level university tournaments (the NCAA championship and the USACFC National Championships). The BUCS holds fencing tournaments in the United Kingdom. Many universities in Ontario, Canada have fencing teams that participate in an annual inter-university competition called the OUA Finals.
National fencing organisations have set up programmes to encourage more students to fence. Examples include the Regional Youth Circuit program in the US and the Leon Paul Youth Development series in the UK.
The UK hosts two national competitions in which schools compete against each other directly: the Public Schools Fencing Championship, a competition only open to Independent Schools, and the Scottish Secondary Schools Championships, open to all secondary schools in Scotland. These championships include both team and individual events. Schools organise matches directly against one another, and school-age pupils can compete individually in the British Youth Championships.
In recent years, attempts have been made to introduce fencing to a wider and younger audience, by using foam and plastic swords, which require much less protective equipment. This makes it much less expensive to provide classes, and thus easier to take fencing to a wider range of schools than traditionally has been the case. There is even a competition series in Scotland – the Plastic-and-Foam Fencing FunLeague – specifically for Primary and early Secondary school-age children using this equipment.
Fencing traces its roots to the development of swordsmanship for duels and self-defence. The oldest surviving treatise on western fencing is the Royal Armouries Ms. I.33, also known as the Tower manuscript, written c. 1300 in present-day Germany, which discusses the usage of the arming sword together with the buckler. It was followed by a number of treatises, primarily from Germany and Italy, with the oldest surviving Italian treatise being Fior di Battaglia by Fiore dei Liberi, written c. 1400. However, because they were written for the context of a knightly duel with a primary focus on archaic weapons such as the arming sword, longsword, or poleaxe, these older treatises do not really stand in continuity with modern fencing.
From the 16th century onward, the Italian school of fencing would be dominated by the Bolognese or Dardi-School of fencing, named after its founder, Filippo Dardi, a Bolognese fencing master and Professor of Geometry at the University of Bologna. Unlike the previous traditions, the Bolognese school would primarily focus on the sidesword being either used alone or in combination with a buckler, a cape, a parrying dagger, or dual-wielded with another sidesword, though some Bolognese masters, such as Achille Marozo, would still cover the usage of the two-handed greatsword or spadone. The Bolognese school would eventually spread outside of Italy and lay the foundation for modern fencing, eclipsing both older Italian and German traditions. This was partially due to the German schools' focus on archaic weapons such as the longsword, but also due to a general decline in fencing within Germany.
The mechanics of modern fencing originated in the 18th century in the Italian school of fencing, which had grown out of the Renaissance tradition, and, under its influence, were improved by the French school of fencing. The Spanish school of fencing stagnated and was replaced by the Italian and French schools.
The shift towards fencing as a sport rather than as military training happened from the mid-18th century, and was led by Domenico Angelo, who established a fencing academy, Angelo's School of Arms, in Carlisle House, Soho, London in 1763. There, he taught the aristocracy the fashionable art of swordsmanship. His school was run by three generations of his family and dominated the art of European fencing for almost a century.
He established the essential rules of posture and footwork that still govern modern sport fencing, although his attacking and parrying methods were still much different from current practice. Although he intended to prepare his students for real combat, he was the first fencing master to emphasise the health and sporting benefits of fencing more than its use as a killing art, particularly in his influential book L'École des armes (The School of Fencing), published in 1763.
Basic conventions were collated and set down during the 1880s by the French fencing master Camille Prévost. It was during this time that many officially recognised fencing associations began to appear in different parts of the world: the Amateur Fencers League of America was founded in 1891, the Amateur Fencing Association of Great Britain in 1902, and the Fédération Nationale des Sociétés d’Escrime et Salles d’Armes de France in 1906.
The first regularised fencing competition was held at the inaugural Grand Military Tournament and Assault at Arms in 1880, held at the Royal Agricultural Hall, in Islington in June. The Tournament featured a series of competitions between army officers and soldiers. Each bout was fought for five hits and the foils were pointed with black to aid the judges. The Amateur Gymnastic & Fencing Association drew up an official set of fencing regulations in 1896.
Fencing was part of the Olympic Games in the summer of 1896. Sabre events have been held at every Summer Olympics; foil events have been held at every Summer Olympics except 1908; épée events have been held at every Summer Olympics except 1896, for reasons that are not known.
Starting with épée in 1933, side judges were replaced by the Laurent-Pagan electrical scoring apparatus, with an audible tone and a red or green light indicating when a touch landed. Foil was automated in 1956, sabre in 1988. The scoring box reduced the bias in judging, and permitted more accurate scoring of faster actions, lighter touches, and more touches to the back and flank than before.
Each of the three weapons in fencing has its own rules and strategies.
The foil is a light thrusting weapon with a maximum weight of 500 grams. The foil targets the torso, but not the arms or legs. The foil has a small circular hand guard that serves to protect the hand from direct stabs. As the hand is not a valid target in foil, this is primarily for safety. Touches are scored only with the tip; hits with the side of the blade do not register on the electronic scoring apparatus (and do not halt the action). Touches that land outside the target area (called an off-target touch and signalled by a distinct color on the scoring apparatus) stop the action, but are not scored. Only a single touch can be awarded to either fencer at the end of a phrase. If both fencers land touches within 300 ms of each other (± 25 ms tolerance), registering two lights on the machine, the referee uses the rules of "right of way" to determine which fencer is awarded the touch, or whether an off-target hit has priority over a valid hit, in which case no touch is awarded. If the referee is unable to determine which fencer has right of way, no touch is awarded.
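A minimal Python sketch of the lockout window described above: after the first touch arrives, only a second touch landing within 300 ms can also register. Right of way itself is decided by the referee and is not modelled here; the function name and time representation are illustrative only.

```python
# Minimal sketch of the foil lockout window described above: after the first
# touch, only touches landing within 300 ms can also register two lights.
# Right of way is applied by the referee and is not modelled here.

LOCKOUT_MS = 300

def touches_registered(time_a_ms: float, time_b_ms: float,
                       lockout_ms: float = LOCKOUT_MS) -> list[str]:
    """Return which fencers' lights come on, given touch times in milliseconds."""
    first = min(time_a_ms, time_b_ms)
    lights = []
    if time_a_ms - first <= lockout_ms:
        lights.append("A")
    if time_b_ms - first <= lockout_ms:
        lights.append("B")
    return lights

print(touches_registered(0, 120))   # ['A', 'B'] -> referee applies right of way
print(touches_registered(0, 450))   # ['A'] -> only the first touch registers
```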
The épée is a thrusting weapon like the foil, but heavier, with a maximum total weight of 775 grams. In épée, the entire body is a valid target. The hand guard on the épée is a large circle that extends towards the pommel, effectively covering the hand, which is a valid target in épée. Like foil, all hits must be with the tip and not the sides of the blade. Hits with the side of the blade do not register on the electronic scoring apparatus (and do not halt the action). As the entire body is a legal target, there is no concept of an off-target touch, except if the fencer accidentally strikes the floor, setting off the light and tone on the scoring apparatus. Unlike foil and sabre, épée does not use "right of way" and allows simultaneous touches to both fencers, known as "double touches". However, if the score is tied in a match at the last point and a double touch is scored, the point is null and void.
The sabre is a light cutting and thrusting weapon that targets the entire body above the waist, including the head and both the hands. Sabre is the newest weapon to be used. Like the foil, the maximum legal weight of a sabre is 500 grams. The hand guard on the sabre extends from hilt to the point at which the blade connects to the pommel. This guard is generally turned outwards during sport to protect the sword arm from touches. Hits with the entire blade or point are valid. As in foil, touches that land outside the target area are not scored. However, unlike foil, these off-target touches do not stop the action, and the fencing continues. In the case of both fencers landing a scoring touch, the referee determines which fencer receives the point for the action, again through the use of "right of way".
Most personal protective equipment for fencing is made of tough cotton or nylon. Kevlar was added to top level uniform pieces (jacket, breeches, underarm protector, lamé, and the bib of the mask) following the death of Vladimir Smirnov at the 1982 World Championships in Rome. However, Kevlar is degraded by both ultraviolet light and chlorine, which can complicate cleaning.
Other ballistic fabrics, such as Dyneema, have been developed that resist puncture, and which do not degrade the way that Kevlar does. FIE rules state that tournament wear must be made of fabric that resists a force of 800 newtons (180 lbf), and that the mask bib must resist twice that amount.
The complete fencing kit includes:
Traditionally, the fencer's uniform is white, and an instructor's uniform is black. This may be due to the occasional pre-electric practice of covering the point of the weapon in dye, soot, or coloured chalk in order to make it easier for the referee to determine the placing of the touches. As this is no longer a factor in the electric era, the FIE rules have been relaxed to allow coloured uniforms (save black). The guidelines also limit the permitted size and positioning of sponsorship logos.
Some pistol grips used by foil and épée fencers
A set of electric fencing equipment is required to participate in electric fencing. Electric equipment varies depending on the weapon with which it is used. The main component of a set of electric equipment is the body cord. The body cord serves as the connection between a fencer and a reel of wire that is part of a system for electrically detecting that the weapon has touched the opponent. There are two types: one for épée, and one for foil and sabre.
Épée body cords consist of two sets of three prongs each connected by a wire. One set plugs into the fencer's weapon, with the other connecting to the reel. Foil and sabre body cords have only two prongs (or a twist-lock bayonet connector) on the weapon side, with the third wire connecting instead to the fencer's lamé. The need in foil and sabre to distinguish between on and off-target touches requires a wired connection to the valid target area.
A body cord consists of three wires known as the A, B, and C lines. At the reel connector (and at both connectors for épée cords), the B pin is in the middle, the A pin is 1.5 cm to one side of B, and the C pin is 2 cm to the other side of B. This asymmetrical arrangement ensures that the cord cannot be plugged in the wrong way around.
In foil, the A line is connected to the lamé and the B line runs up a wire to the tip of the weapon. The B line is normally connected to the C line through the tip. When the tip is depressed, the circuit is broken and one of three things can happen:
In épée, the A and B lines run up separate wires to the tip (there is no lamé). When the tip is depressed, it connects the A and B lines, resulting in a valid touch. However, if the tip is touching the opponent's weapon (their C line) or the grounded strip, nothing happens when it is depressed, as the current is redirected to the C line. Grounded strips are particularly important in épée, as without one, a touch to the floor registers as a valid touch (rather than off-target as in foil).
In sabre, similarly to foil, the A line is connected to the lamé, but both the B and C lines are connected to the body of the weapon. Any contact between one's B/C line (either one, as they are always connected) and the opponent's A line (their lamé) results in a valid touch. There is no need for grounded strips in sabre, as hitting something other than the opponent's lamé does nothing.
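A minimal sketch of the épée and sabre detection logic described in the two paragraphs above. The data representation (plain strings naming what the tip or blade is touching) is purely illustrative and is not how any real scoring box works; real apparatus acts on electrical continuity between the A, B, and C lines.

```python
# Minimal sketch of the detection logic described above for épée and sabre.
# The string-based representation of contacts is illustrative only.

def epee_touch(tip_depressed: bool, touching: str) -> str:
    """Épée: a depressed tip is a valid touch unless the current is grounded out."""
    if not tip_depressed:
        return "no touch"
    if touching in ("opponent weapon", "grounded strip"):
        return "no touch"        # current redirected to the C line
    if touching == "floor":
        return "valid touch"     # without a grounded strip, the floor registers as valid
    return "valid touch"

def sabre_touch(contacting: str) -> str:
    """Sabre: any blade contact with the opponent's lamé (their A line) is valid."""
    return "valid touch" if contacting == "opponent lame" else "no touch"

print(epee_touch(True, "opponent torso"))    # valid touch
print(epee_touch(True, "grounded strip"))    # no touch
print(sabre_touch("opponent lame"))          # valid touch
print(sabre_touch("opponent guard"))         # no touch
```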
In a professional fencing competition, a complete set of electric equipment is needed.
A complete set of foil electric equipment includes:
The electric equipment of sabre is very similar to that of foil. In addition, equipment used in sabre includes:
Épée fencers lack a lamé, conductive bib, and head cord due to their target area. Also, their body cords are constructed differently as described above. However, they possess all of the other components of a foil fencer's equipment.
Techniques or movements in fencing can be divided into two categories: offensive and defensive. Some techniques can fall into both categories (e.g. the beat). Certain techniques are used offensively, with the purpose of landing a hit on one's opponent while holding the right of way (foil and sabre). Others are used defensively, to protect against a hit or obtain the right of way.
The attacks and defences may be performed in countless combinations of feet and hand actions. For example, fencer A attacks the arm of fencer B, drawing a high outside parry; fencer B then follows the parry with a high line riposte. Fencer A, expecting that, then makes his own parry by pivoting his blade under fencer B's weapon (from straight out to more or less straight down), putting fencer B's tip off target and fencer A now scoring against the low line by angulating the hand upwards.
Other variants include wheelchair fencing for those with disabilities, chair fencing, one-hit épée (one of the five events which constitute modern pentathlon) and the various types of non-Olympic competitive fencing. Chair fencing is similar to wheelchair fencing, but for the able bodied. The opponents set up opposing chairs and fence while seated; all the usual rules of fencing are applied. An example of the latter is the American Fencing League (distinct from the United States Fencing Association): the format of competitions is different and the right of way rules are interpreted in a different way. In a number of countries, school and university matches deviate slightly from the FIE format. A variant of the sport using toy lightsabers earned national attention when ESPN2 acquired the rights to a selection of matches and included it as part of its "ESPN8: The Ocho" programming block in August 2018.
One of the most notable films related to fencing is the 2015 Finnish-Estonian-German film The Fencer, directed by Klaus Härö, which is loosely based on the life of Endel Nelis, an accomplished Estonian fencer and coach. The film was nominated for the 73rd Golden Globe Awards in the Foreign Language Film Category.
In 2017, the first issue of the Fence comic book series, which follows a fictional team of young fencers, was published by the US-based Boom! Studios. | [
{
"paragraph_id": 0,
"text": "Fencing is a combat sport that features sword fighting. The three disciplines of modern fencing are the foil, the épée, and the sabre (also saber); each discipline uses a different kind of blade, which shares the same name, and employs its own rules. Most competitive fencers specialise in one discipline. The modern sport gained prominence near the end of the 19th century and is based on the traditional skill set of swordsmanship. The Italian school altered the historical European martial art of classical fencing, and the French school later refined that system. Scoring points in a fencing competition is done by making contact with an opponent.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The 1904 Olympics Games featured a fourth discipline of fencing known as singlestick, but it was dropped after that year and is not a part of modern fencing. Competitive fencing was one of the first sports to be featured in the Olympics and, along with athletics, cycling, swimming, and gymnastics, has been featured in every modern Olympics.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Fencing is governed by the Fédération Internationale d'Escrime (FIE), headquartered in Lausanne, Switzerland. The FIE is composed of 155 national federations, each of which is recognised by its state Olympic Committee as the sole representative of Olympic-style fencing in that country.",
"title": "Competitive fencing"
},
{
"paragraph_id": 3,
"text": "The FIE maintains the current rules used by major international events, including world cups, world championships and the Olympic Games. The FIE handles proposals to change the rules at an annual congress.",
"title": "Competitive fencing"
},
{
"paragraph_id": 4,
"text": "University students compete internationally at the World University Games. The United States holds two national-level university tournaments (the NCAA championship and the USACFC National Championships). The BUCS holds fencing tournaments in the United Kingdom. Many universities in Ontario, Canada have fencing teams that participate in an annual inter-university competition called the OUA Finals.",
"title": "Competitive fencing"
},
{
"paragraph_id": 5,
"text": "National fencing organisations have set up programmes to encourage more students to fence. Examples include the Regional Youth Circuit program in the US and the Leon Paul Youth Development series in the UK.",
"title": "Competitive fencing"
},
{
"paragraph_id": 6,
"text": "The UK hosts two national competitions in which schools compete against each other directly: the Public Schools Fencing Championship, a competition only open to Independent Schools, and the Scottish Secondary Schools Championships, open to all secondary schools in Scotland. It contains both teams and individual events and is highly anticipated. Schools organise matches directly against one another and school age pupils can compete individually in the British Youth Championships.",
"title": "Competitive fencing"
},
{
"paragraph_id": 7,
"text": "In recent years, attempts have been made to introduce fencing to a wider and younger audience, by using foam and plastic swords, which require much less protective equipment. This makes it much less expensive to provide classes, and thus easier to take fencing to a wider range of schools than traditionally has been the case. There is even a competition series in Scotland – the Plastic-and-Foam Fencing FunLeague – specifically for Primary and early Secondary school-age children using this equipment.",
"title": "Competitive fencing"
},
{
"paragraph_id": 8,
"text": "Fencing traces its roots to the development of swordsmanship for duels and self-defence. The oldest surviving treatise on western fencing is the Royal Armouries Ms. I.33, also known as the Tower manuscript, written c. 1300 in present-day Germany, which discusses the usage of the arming sword together with the buckler. It was followed by a number of treatises, primarily from Germany and Italy, with the oldest surviving Italian treatise being Fior di Battaglia by Fiore dei Liberi, written c. 1400. However, because they were written for the context of a knightly duel with a primary focus on archaic weapons such as the arming sword, longsword, or poleaxe, these older treatises do not really stand in continuity with modern fencing.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "From the 16th century onward, the Italian school of fencing would be dominated by the Bolognese or Dardi-School of fencing, named after its founder, Filippo Dardi, a Bolognese fencing master and Professor of Geometry at the University of Bologna. Unlike the previous traditions, the Bolognese school would primarily focus on the sidesword being either used alone or in combination with a buckler, a cape, a parrying dagger, or dual-wielded with another sidesword, though some Bolognese masters, such as Achille Marozo, would still cover the usage of the two-handed greatsword or spadone. The Bolognese school would eventually spread outside of Italy and lay the foundation for modern fencing, eclipsing both older Italian and German traditions. This was partially due to the German schools' focus on archaic weapons such as the longsword, but also due to a general decline in fencing within Germany.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The mechanics of modern fencing originated in the 18th century in an Italian school of fencing of the Renaissance, and under their influence, were improved by the French school of fencing. The Spanish school of fencing stagnated and was replaced by the Italian and French schools.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The shift towards fencing as a sport rather than as military training happened from the mid-18th century, and was led by Domenico Angelo, who established a fencing academy, Angelo's School of Arms, in Carlisle House, Soho, London in 1763. There, he taught the aristocracy the fashionable art of swordsmanship. His school was run by three generations of his family and dominated the art of European fencing for almost a century.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "He established the essential rules of posture and footwork that still govern modern sport fencing, although his attacking and parrying methods were still much different from current practice. Although he intended to prepare his students for real combat, he was the first fencing master to emphasise the health and sporting benefits of fencing more than its use as a killing art, particularly in his influential book L'École des armes (The School of Fencing), published in 1763.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Basic conventions were collated and set down during the 1880s by the French fencing master Camille Prévost. It was during this time that many officially recognised fencing associations began to appear in different parts of the world, such as the Amateur Fencers League of America was founded in 1891, the Amateur Fencing Association of Great Britain in 1902, and the Fédération Nationale des Sociétés d’Escrime et Salles d’Armes de France in 1906.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The first regularised fencing competition was held at the inaugural Grand Military Tournament and Assault at Arms in 1880, held at the Royal Agricultural Hall, in Islington in June. The Tournament featured a series of competitions between army officers and soldiers. Each bout was fought for five hits and the foils were pointed with black to aid the judges. The Amateur Gymnastic & Fencing Association drew up an official set of fencing regulations in 1896.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Fencing was part of the Olympic Games in the summer of 1896. Sabre events have been held at every Summer Olympics; foil events have been held at every Summer Olympics except 1908; épée events have been held at every Summer Olympics except in the summer of 1896 because of unknown reasons.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Starting with épée in 1933, side judges were replaced by the Laurent-Pagan electrical scoring apparatus, with an audible tone and a red or green light indicating when a touch landed. Foil was automated in 1956, sabre in 1988. The scoring box reduced the bias in judging, and permitted more accurate scoring of faster actions, lighter touches, and more touches to the back and flank than before.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Each of the three weapons in fencing has its own rules and strategies.",
"title": "Weapons"
},
{
"paragraph_id": 18,
"text": "The foil is a light thrusting weapon with a maximum weight of 500 grams. The foil targets the torso, but not the arms or legs. The foil has a small circular hand guard that serves to protect the hand from direct stabs. As the hand is not a valid target in foil, this is primarily for safety. Touches are scored only with the tip; hits with the side of the blade do not register on the electronic scoring apparatus (and do not halt the action). Touches that land outside the target area (called an off-target touch and signalled by a distinct color on the scoring apparatus) stop the action, but are not scored. Only a single touch can be awarded to either fencer at the end of a phrase. If both fencers land touches within 300 ms (± 25 ms tolerance)Material rules to register two lights on the machine, the referee uses the rules of \"right of way\" to determine which fencer is awarded the touch, or if an off-target hit has priority over a valid hit, in which case no touch is awarded. If the referee is unable to determine which fencer has right of way, no touch is awarded.",
"title": "Weapons"
},
{
"paragraph_id": 19,
"text": "The épée is a thrusting weapon like the foil, but heavier, with a maximum total weight of 775 grams. In épée, the entire body is a valid target. The hand guard on the épée is a large circle that extends towards the pommel, effectively covering the hand, which is a valid target in épée. Like foil, all hits must be with the tip and not the sides of the blade. Hits with the side of the blade do not register on the electronic scoring apparatus (and do not halt the action). As the entire body is a legal target, there is no concept of an off-target touch, except if the fencer accidentally strikes the floor, setting off the light and tone on the scoring apparatus. Unlike foil and sabre, épée does not use \"right of way\", simultaneous touches to both fencers, known as \"double touches.\" However, if the score is tied in a match at the last point and a double touch is scored, the point is null and void.",
"title": "Weapons"
},
{
"paragraph_id": 20,
"text": "The sabre is a light cutting and thrusting weapon that targets the entire body above the waist, including the head and both the hands. Sabre is the newest weapon to be used. Like the foil, the maximum legal weight of a sabre is 500 grams. The hand guard on the sabre extends from hilt to the point at which the blade connects to the pommel. This guard is generally turned outwards during sport to protect the sword arm from touches. Hits with the entire blade or point are valid. As in foil, touches that land outside the target area are not scored. However, unlike foil, these off-target touches do not stop the action, and the fencing continues. In the case of both fencers landing a scoring touch, the referee determines which fencer receives the point for the action, again through the use of \"right of way\".",
"title": "Weapons"
},
{
"paragraph_id": 21,
"text": "Most personal protective equipment for fencing is made of tough cotton or nylon. Kevlar was added to top level uniform pieces (jacket, breeches, underarm protector, lamé, and the bib of the mask) following the death of Vladimir Smirnov at the 1982 World Championships in Rome. However, Kevlar is degraded by both ultraviolet light and chlorine, which can complicate cleaning.",
"title": "Equipment"
},
{
"paragraph_id": 22,
"text": "Other ballistic fabrics, such as Dyneema, have been developed that resist puncture, and which do not degrade the way that Kevlar does. FIE rules state that tournament wear must be made of fabric that resists a force of 800 newtons (180 lbf), and that the mask bib must resist twice that amount.",
"title": "Equipment"
},
{
"paragraph_id": 23,
"text": "The complete fencing kit includes:",
"title": "Equipment"
},
{
"paragraph_id": 24,
"text": "Traditionally, the fencer's uniform is white, and an instructor's uniform is black. This may be due to the occasional pre-electric practice of covering the point of the weapon in dye, soot, or coloured chalk in order to make it easier for the referee to determine the placing of the touches. As this is no longer a factor in the electric era, the FIE rules have been relaxed to allow coloured uniforms (save black). The guidelines also limit the permitted size and positioning of sponsorship logos.",
"title": "Equipment"
},
{
"paragraph_id": 25,
"text": "Some pistol grips used by foil and épée fencers",
"title": "Equipment"
},
{
"paragraph_id": 26,
"text": "A set of electric fencing equipment is required to participate in electric fencing. Electric equipment in fencing varies depending on the weapon with which it is used in accordance. The main component of a set of electric equipment is the body cord. The body cord serves as the connection between a fencer and a reel of wire that is part of a system for electrically detecting that the weapon has touched the opponent. There are two types: one for épée, and one for foil and sabre.",
"title": "Equipment"
},
{
"paragraph_id": 27,
"text": "Épée body cords consist of two sets of three prongs each connected by a wire. One set plugs into the fencer's weapon, with the other connecting to the reel. Foil and sabre body cords have only two prongs (or a twist-lock bayonet connector) on the weapon side, with the third wire connecting instead to the fencer's lamé. The need in foil and sabre to distinguish between on and off-target touches requires a wired connection to the valid target area.",
"title": "Equipment"
},
{
"paragraph_id": 28,
"text": "A body cord consists of three wires known as the A, B, and C lines. At the reel connector (and both connectors for Épée cords) The B pin is in the middle, the A pin is 1.5 cm to one side of B, and the C pin is 2 cm to the other side of B. This asymmetrical arrangement ensures that the cord cannot be plugged in the wrong way around.",
"title": "Equipment"
},
{
"paragraph_id": 29,
"text": "In foil, the A line is connected to the lamé and the B line runs up a wire to the tip of the weapon. The B line is normally connected to the C line through the tip. When the tip is depressed, the circuit is broken and one of three things can happen:",
"title": "Equipment"
},
{
"paragraph_id": 30,
"text": "In Épée, the A and B lines run up separate wires to the tip (there is no lamé). When the tip is depressed, it connects the A and B lines, resulting in a valid touch. However, if the tip is touching the opponents weapon (their C line) or the grounded strip, nothing happens when it is depressed, as the current is redirected to the C line. Grounded strips are particularly important in Épée, as without one, a touch to the floor registers as a valid touch (rather than off-target as in Foil).",
"title": "Equipment"
},
{
"paragraph_id": 31,
"text": "In Sabre, similarly to Foil, the A line is connected to the lamé, but both the B and C lines are connected to the body of the weapon. Any contact between one's B/C line (either one, as they are always connected) and the opponent's A line (their lamé) results in a valid touch. There is no need for grounded strips in Sabre, as hitting something other than the opponent's lame does nothing.",
"title": "Equipment"
},
{
"paragraph_id": 32,
"text": "In a professional fencing competition, a complete set of electric equipment is needed.",
"title": "Equipment"
},
{
"paragraph_id": 33,
"text": "A complete set of foil electric equipment includes:",
"title": "Equipment"
},
{
"paragraph_id": 34,
"text": "The electric equipment of sabre is very similar to that of foil. In addition, equipment used in sabre includes:",
"title": "Equipment"
},
{
"paragraph_id": 35,
"text": "Épée fencers lack a lamé, conductive bib, and head cord due to their target area. Also, their body cords are constructed differently as described above. However, they possess all of the other components of a foil fencer's equipment.",
"title": "Equipment"
},
{
"paragraph_id": 36,
"text": "Techniques or movements in fencing can be divided into two categories: offensive and defensive. Some techniques can fall into both categories (e.g. the beat). Certain techniques are used offensively, with the purpose of landing a hit on one's opponent while holding the right of way (foil and sabre). Others are used defensively, to protect against a hit or obtain the right of way.",
"title": "Techniques"
},
{
"paragraph_id": 37,
"text": "The attacks and defences may be performed in countless combinations of feet and hand actions. For example, fencer A attacks the arm of fencer B, drawing a high outside parry; fencer B then follows the parry with a high line riposte. Fencer A, expecting that, then makes his own parry by pivoting his blade under fencer B's weapon (from straight out to more or less straight down), putting fencer B's tip off target and fencer A now scoring against the low line by angulating the hand upwards.",
"title": "Techniques"
},
{
"paragraph_id": 38,
"text": "Other variants include wheelchair fencing for those with disabilities, chair fencing, one-hit épée (one of the five events which constitute modern pentathlon) and the various types of non-Olympic competitive fencing. Chair fencing is similar to wheelchair fencing, but for the able bodied. The opponents set up opposing chairs and fence while seated; all the usual rules of fencing are applied. An example of the latter is the American Fencing League (distinct from the United States Fencing Association): the format of competitions is different and the right of way rules are interpreted in a different way. In a number of countries, school and university matches deviate slightly from the FIE format. A variant of the sport using toy lightsabers earned national attention when ESPN2 acquired the rights to a selection of matches and included it as part of its \"ESPN8: The Ocho\" programming block in August 2018.",
"title": "Other variants"
},
{
"paragraph_id": 39,
"text": "One of the most notable films related to fencing is the 2015 Finnish-Estonian-German film The Fencer, directed by Klaus Härö, which is loosely based on the life of Endel Nelis, an accomplished Estonian fencer and coach. The film was nominated for the 73rd Golden Globe Awards in the Foreign Language Film Category.",
"title": "In popular culture"
},
{
"paragraph_id": 40,
"text": "In 2017, the first issue of the Fence comic book series, which follows a fictional team of young fencers, was published by the US-based Boom! Studios.",
"title": "In popular culture"
}
]
| Fencing is a combat sport that features sword fighting. The three disciplines of modern fencing are the foil, the épée, and the sabre; each discipline uses a different kind of blade, which shares the same name, and employs its own rules. Most competitive fencers specialise in one discipline. The modern sport gained prominence near the end of the 19th century and is based on the traditional skill set of swordsmanship. The Italian school altered the historical European martial art of classical fencing, and the French school later refined that system. Scoring points in a fencing competition is done by making contact with an opponent. The 1904 Olympics Games featured a fourth discipline of fencing known as singlestick, but it was dropped after that year and is not a part of modern fencing. Competitive fencing was one of the first sports to be featured in the Olympics and, along with athletics, cycling, swimming, and gymnastics, has been featured in every modern Olympics. | 2001-09-12T14:54:13Z | 2023-11-15T06:00:02Z | [
"Template:Cite web",
"Template:National members of the International Fencing Federation",
"Template:Further",
"Template:Clarify",
"Template:Cite journal",
"Template:Fencing",
"Template:Pp",
"Template:See also",
"Template:Circa",
"Template:Portal",
"Template:Cite book",
"Template:Cite news",
"Template:Commons category",
"Template:Infobox martial art",
"Template:Wiktionary",
"Template:Short description",
"Template:Authority control",
"Template:Main",
"Template:Convert",
"Template:Summer Olympic sports",
"Template:EngvarB",
"Template:About",
"Template:Webarchive",
"Template:ISBN",
"Template:Curlie",
"Template:Infobox sport",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Fencing |
10,894 | The Free Software Definition | The Free Software Definition written by Richard Stallman and published by the Free Software Foundation (FSF), defines free software as being software that ensures that the users have freedom in using, studying, sharing and modifying that software. The term "free" is used in the sense of "free speech," not of "free of charge." The earliest-known publication of the definition was in the February 1986 edition of the now-discontinued GNU's Bulletin publication by the FSF. The canonical source for the document is in the philosophy section of the GNU Project website. As of April 2008, it is published in 39 languages. The FSF publishes a list of licences that meet this definition.
The definition published by the FSF in February 1986 had two points:
The word "free" in our name does not refer to price; it refers to freedom. First, the freedom to copy a program and redistribute it to your neighbors, so that they can use it as well as you. Second, the freedom to change a program, so that you can control it instead of it controlling you; for this, the source code must be made available to you.
In 1996, when the gnu.org website was launched, "free software" was defined referring to "three levels of freedom" by adding an explicit mention of the freedom to study the software (which could be read in the two-point definition as being part of the freedom to change the program). Stallman later avoided the word "levels", saying that all of the freedoms are needed, so it is misleading to think in terms of levels.
Finally, another freedom was added, to explicitly say that users should be able to run the program. The existing freedoms were already numbered one to three, but this freedom should come before the others, so it was added as "freedom zero".
The modern definition defines free software by whether or not the recipient has the following four freedoms:
Freedoms 1 and 3 require source code to be available because studying and modifying software without its source code is highly impractical.
In July 1997, Bruce Perens published the Debian Free Software Guidelines. A definition based on the DFSG was also used by the Open Source Initiative (OSI) under the name "The Open Source Definition".
Despite the philosophical differences between the free software movement and the open-source-software movement, the official definitions of free software by the FSF and of open-source software by the OSI basically refer to the same software licences, with a few minor exceptions. While stressing these philosophical differences, the Free Software Foundation comments:
The term "open source" software is used by some people to mean more or less the same category as free software. It is not exactly the same class of software: they accept some licences that we consider too restrictive, and there are free software licences they have not accepted. However, the differences in extension of the category are small: nearly all free software is open source, and nearly all open source software is free. | [
{
"paragraph_id": 0,
"text": "The Free Software Definition written by Richard Stallman and published by the Free Software Foundation (FSF), defines free software as being software that ensures that the users have freedom in using, studying, sharing and modifying that software. The term \"free\" is used in the sense of \"free speech,\" not of \"free of charge.\" The earliest-known publication of the definition was in the February 1986 edition of the now-discontinued GNU's Bulletin publication by the FSF. The canonical source for the document is in the philosophy section of the GNU Project website. As of April 2008, it is published in 39 languages. The FSF publishes a list of licences that meet this definition.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The definition published by the FSF in February 1986 had two points:",
"title": "The Four Essential Freedoms of Free Software"
},
{
"paragraph_id": 2,
"text": "The word \"free\" in our name does not refer to price; it refers to freedom. First, the freedom to copy a program and redistribute it to your neighbors, so that they can use it as well as you. Second, the freedom to change a program, so that you can control it instead of it controlling you; for this, the source code must be made available to you.",
"title": "The Four Essential Freedoms of Free Software"
},
{
"paragraph_id": 3,
"text": "In 1996, when the gnu.org website was launched, \"free software\" was defined referring to \"three levels of freedom\" by adding an explicit mention of the freedom to study the software (which could be read in the two-point definition as being part of the freedom to change the program). Stallman later avoided the word \"levels\", saying that all of the freedoms are needed, so it is misleading to think in terms of levels.",
"title": "The Four Essential Freedoms of Free Software"
},
{
"paragraph_id": 4,
"text": "Finally, another freedom was added, to explicitly say that users should be able to run the program. The existing freedoms were already numbered one to three, but this freedom should come before the others, so it was added as \"freedom zero\".",
"title": "The Four Essential Freedoms of Free Software"
},
{
"paragraph_id": 5,
"text": "The modern definition defines free software by whether or not the recipient has the following four freedoms:",
"title": "The Four Essential Freedoms of Free Software"
},
{
"paragraph_id": 6,
"text": "Freedoms 1 and 3 require source code to be available because studying and modifying software without its source code is highly impractical.",
"title": "The Four Essential Freedoms of Free Software"
},
{
"paragraph_id": 7,
"text": "In July 1997, Bruce Perens published the Debian Free Software Guidelines. A definition based on the DFSG was also used by the Open Source Initiative (OSI) under the name \"The Open Source Definition\".",
"title": "Later definitions"
},
{
"paragraph_id": 8,
"text": "Despite the philosophical differences between the free software movement and the open-source-software movement, the official definitions of free software by the FSF and of open-source software by the OSI basically refer to the same software licences, with a few minor exceptions. While stressing these philosophical differences, the Free Software Foundation comments:",
"title": "Comparison with The Open Source Definition"
},
{
"paragraph_id": 9,
"text": "The term \"open source\" software is used by some people to mean more or less the same category as free software. It is not exactly the same class of software: they accept some licences that we consider too restrictive, and there are free software licences they have not accepted. However, the differences in extension of the category are small: nearly all free software is open source, and nearly all open source software is free.",
"title": "Comparison with The Open Source Definition"
}
]
| The Free Software Definition written by Richard Stallman and published by the Free Software Foundation (FSF), defines free software as being software that ensures that the users have freedom in using, studying, sharing and modifying that software. The term "free" is used in the sense of "free speech," not of "free of charge." The earliest-known publication of the definition was in the February 1986 edition of the now-discontinued GNU's Bulletin publication by the FSF. The canonical source for the document is in the philosophy section of the GNU Project website. As of April 2008, it is published in 39 languages. The FSF publishes a list of licences that meet this definition. | 2001-07-26T23:26:47Z | 2023-12-30T22:50:58Z | [
"Template:Blockquote",
"Template:Quote",
"Template:Main",
"Template:Portal",
"Template:Reflist",
"Template:Cite web",
"Template:Dead link",
"Template:Short description",
"Template:As of",
"Template:See also",
"Template:FOSS"
]
| https://en.wikipedia.org/wiki/The_Free_Software_Definition |
10,896 | Felix Bloch | Felix Bloch (23 October 1905 – 10 September 1983) was a Swiss-American physicist and Nobel physics laureate who worked mainly in the U.S. He and Edward Mills Purcell were awarded the 1952 Nobel Prize for Physics for "their development of new ways and methods for nuclear magnetic precision measurements." In 1954–1955, he served for one year as the first director-general of CERN. Felix Bloch made fundamental theoretical contributions to the understanding of ferromagnetism and electron behavior in crystal lattices. He is also considered one of the developers of nuclear magnetic resonance.
Bloch was born in Zürich, Switzerland to Jewish parents Gustav and Agnes Bloch. Gustav Bloch, his father, was financially unable to attend University and worked as a wholesale grain dealer in Zürich. Gustav moved to Zürich from Moravia in 1890 to become a Swiss citizen. Their first child was a girl born in 1902 while Felix was born three years later.
Bloch entered public elementary school at the age of six and is said to have been teased, in part because he "spoke Swiss German with a somewhat different accent than most members of the class". He received support from his older sister during much of this time, but she died at the age of twelve, devastating Felix, who is said to have lived a "depressed and isolated life" in the following years. Bloch learned to play the piano by the age of eight and was drawn to arithmetic for its "clarity and beauty". Bloch graduated from elementary school at twelve and enrolled in the Cantonal Gymnasium in Zürich for secondary school in 1918. He was placed on a six-year curriculum here to prepare him for University. He continued his curriculum through 1924, even through his study of engineering and physics in other schools, though it was limited to mathematics and languages after the first three years. After these first three years at the Gymnasium, at age fifteen Bloch began to study at the Eidgenössische Technische Hochschule (ETHZ), also in Zürich. Although he initially studied engineering he soon changed to physics. During this time he attended lectures and seminars given by Peter Debye and Hermann Weyl at ETH Zürich and Erwin Schrödinger at the neighboring University of Zürich. A fellow student in these seminars was John von Neumann.
Bloch graduated in 1927, and was encouraged by Debye to go to Leipzig to study with Werner Heisenberg. Bloch became Heisenberg's first graduate student, and gained his doctorate in 1928. His doctoral thesis established the quantum theory of solids, using waves to describe electrons in periodic lattices.
On March 14, 1940, Bloch married Lore Clara Misch (1911–1996), a fellow physicist working on X-ray crystallography, whom he had met at an American Physical Society meeting. They had four children, twins George Jacob Bloch and Daniel Arthur Bloch (born January 15, 1941), son Frank Samuel Bloch (born January 16, 1945), and daughter Ruth Hedy Bloch (born September 15, 1949).
Bloch remained in European academia, working on superconductivity with Wolfgang Pauli in Zürich; with Hans Kramers and Adriaan Fokker in Holland; with Heisenberg on ferromagnetism, where he developed a description of boundaries between magnetic domains, now known as "Bloch walls", and theoretically proposed a concept of spin waves, excitations of magnetic structure; with Niels Bohr in Copenhagen, where he worked on a theoretical description of the stopping of charged particles traveling through matter; and with Enrico Fermi in Rome. In 1932, Bloch returned to Leipzig to assume a position as "Privatdozent" (lecturer). In 1933, immediately after Hitler came to power, he left Germany because he was Jewish, returning to Zürich, before traveling to Paris to lecture at the Institut Henri Poincaré.
In 1934, the chairman of Stanford Physics invited Bloch to join the faculty. Bloch accepted the offer and emigrated to the United States. In the fall of 1938, Bloch began working with the 37 inch cyclotron at the University of California, Berkeley to determine the magnetic moment of the neutron. Bloch went on to become the first professor for theoretical physics at Stanford. In 1939, he became a naturalized citizen of the United States.
During WWII, Bloch briefly worked on the atomic bomb project at Los Alamos. Disliking the military atmosphere of the laboratory and uninterested in the theoretical work there, Bloch left to join the radar project at Harvard University.
After the war, he concentrated on investigations into nuclear induction and nuclear magnetic resonance, which are the underlying principles of MRI. In 1946 he proposed the Bloch equations which determine the time evolution of nuclear magnetization. He was elected to the United States National Academy of Sciences in 1948. Along with Edward Purcell, Bloch was awarded the 1952 Nobel Prize in Physics for his work on nuclear magnetic induction.
When CERN was being set up in the early 1950s, its founders were searching for someone of stature and international prestige to head the fledgling international laboratory, and in 1954 Professor Bloch became CERN's first director-general, at the time when construction was getting under way on the present Meyrin site and plans for the first machines were being drawn up. After leaving CERN, he returned to Stanford University, where he in 1961 was made Max Stein Professor of Physics.
In 1964, he was elected a foreign member of the Royal Netherlands Academy of Arts and Sciences. He was also a member of the American Academy of Arts and Sciences and the American Philosophical Society.
Bloch died in Zürich in 1983.
Fugue

In classical music, a fugue (/fjuːɡ/) is a contrapuntal, polyphonic compositional technique in two or more voices, built on a subject (a musical theme) that is introduced at the beginning in imitation (repetition at different pitches), which recurs frequently throughout the course of the composition. It is not to be confused with a fuguing tune, which is a style of song popularized by and mostly limited to early American (i.e. shape note or "Sacred Harp") music and West Gallery music. A fugue usually has three main sections: an exposition, a development and a final entry that contains the return of the subject in the fugue's tonic key. Fugues can also have episodes (parts of the fugue where new material is heard, based on the subject), a stretto (when the fugue's subject "overlaps" itself in different voices), or a recapitulation. A popular compositional technique in the Baroque era, the fugue was fundamental in showing mastery of harmony and tonality as it presented counterpoint.
In the Middle Ages, the term was widely used to denote any works in canonic style; by the Renaissance, it had come to denote specifically imitative works. Since the 17th century, the term fugue has described what is commonly regarded as the most fully developed procedure of imitative counterpoint.
Most fugues open with a short main theme, the subject, which then sounds successively in each voice (after the first voice is finished stating the subject, a second voice repeats the subject at a different pitch, and other voices repeat in the same way); when each voice has completed the subject, the exposition is complete. This is often followed by a connecting passage, or episode, developed from previously heard material; further "entries" of the subject then are heard in related keys. Episodes (if applicable) and entries are usually alternated until the "final entry" of the subject, by which point the music has returned to the opening key, or tonic, which is often followed by closing material, the coda. In this sense, a fugue is a style of composition, rather than a fixed structure.
The form evolved during the 18th century from several earlier types of contrapuntal compositions, such as imitative ricercars, capriccios, canzonas, and fantasias. The famous fugue composer Johann Sebastian Bach (1685–1750) shaped his own works after those of Jan Pieterszoon Sweelinck (1562-1621), Johann Jakob Froberger (1616–1667), Johann Pachelbel (1653–1706), Girolamo Frescobaldi (1583–1643), Dieterich Buxtehude (c. 1637–1707) and others. With the decline of sophisticated styles at the end of the baroque period, the fugue's central role waned, eventually giving way as sonata form and the symphony orchestra rose to a dominant position. Nevertheless, composers continued to write and study fugues for various purposes; they appear in the works of Wolfgang Amadeus Mozart (1756–1791) and Ludwig van Beethoven (1770–1827), as well as modern composers such as Dmitri Shostakovich (1906–1975) and Paul Hindemith (1895-1963).
The English term fugue originated in the 16th century and is derived from the French word fugue or the Italian fuga. This in turn comes from Latin, also fuga, which is itself related to both fugere ("to flee") and fugare ("to chase"). The adjectival form is fugal. Variants include fughetta ("a small fugue") and fugato (a passage in fugal style within another work that is not a fugue).
A fugue begins with the exposition and is written according to certain predefined rules; in later portions the composer has more freedom, though a logical key structure is usually followed. Further entries of the subject will occur throughout the fugue, repeating the accompanying material at the same time. The various entries may or may not be separated by episodes.
What follows is an outline of a fairly typical fugal structure and an explanation of the processes involved in creating it.
A fugue begins with the exposition of its subject in one of the voices alone in the tonic key. After the statement of the subject, a second voice enters and states the subject transposed to another key (usually the dominant or subdominant); this statement is known as the answer. To make the music run smoothly, the answer may also have to be altered slightly. When the answer is an exact copy of the subject in the new key, with intervals identical to those of the first statement, it is classified as a real answer; if the intervals are altered to maintain the key, it is a tonal answer.
A tonal answer is usually called for when the subject begins with a prominent dominant note, or where there is a prominent dominant note very close to the beginning of the subject. To prevent an undermining of the music's sense of key, this note is transposed up a fourth to the tonic rather than up a fifth to the supertonic. Answers in the subdominant are also employed for the same reason.
While the answer is being stated, the voice in which the subject was previously heard continues with new material. If this new material is reused in later statements of the subject, it is called a countersubject; if this accompanying material is only heard once, it is simply referred to as free counterpoint.
The countersubject is written in invertible counterpoint at the octave or fifteenth. The distinction is made between the use of free counterpoint and regular countersubjects accompanying the fugue subject/answer, because in order for a countersubject to be heard accompanying the subject in more than one instance, it must be capable of sounding correctly above or below the subject, and must be conceived, therefore, in invertible (double) counterpoint.
In tonal music, invertible contrapuntal lines must be written according to certain rules because several intervallic combinations, while acceptable in one particular orientation, are no longer permissible when inverted. For example, when the note "G" sounds in one voice above the note "C" in lower voice, the interval of a fifth is formed, which is considered consonant and entirely acceptable. When this interval is inverted ("C" in the upper voice above "G" in the lower), it forms a fourth, considered a dissonance in tonal contrapuntal practice, and requires special treatment, or preparation and resolution, if it is to be used. The countersubject, if sounding at the same time as the answer, is transposed to the pitch of the answer. Each voice then responds with its own subject or answer, and further countersubjects or free counterpoint may be heard.
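A convenient rule of thumb, standard in counterpoint textbooks though not spelled out above, is that when simple intervals are inverted at the octave their numerical sizes sum to nine:

5 + 4 = 9 (a fifth inverts to a fourth), 3 + 6 = 9 (a third inverts to a sixth), 2 + 7 = 9 (a second inverts to a seventh),

which is why the consonant fifth in the example above becomes a fourth, treated as a dissonance, when the two voices are exchanged.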
When a tonal answer is used, it is customary for the exposition to alternate subjects (S) with answers (A); however, in some fugues this order is occasionally varied: e.g., see the SAAS arrangement of Fugue No. 1 in C Major, BWV 846, from J.S. Bach's Well-Tempered Clavier, Book 1. A brief codetta is often heard connecting the various statements of the subject and answer, allowing the music to run smoothly. The codetta, just like the other parts of the exposition, can be used throughout the rest of the fugue.
The first answer must occur as soon after the initial statement of the subject as possible; therefore the first codetta is often extremely short, or not needed. In the above example, this is the case: the subject finishes on the quarter note (or crotchet) B♭ of the third beat of the second bar, which harmonizes the opening G of the answer. The later codettas may be considerably longer, and often serve to (a) develop the material heard so far in the subject/answer and countersubject, and possibly introduce ideas heard in the second countersubject or free counterpoint that follows, and (b) delay, and therefore heighten the impact of, the reentry of the subject in another voice, as well as modulate back to the tonic.
The exposition usually concludes when all voices have given a statement of the subject or answer. In some fugues, the exposition will end with a redundant entry, or an extra presentation of the theme. Furthermore, in some fugues the entry of one of the voices may be reserved until later, for example in the pedals of an organ fugue (see J.S. Bach's Fugue in C major for Organ, BWV 547).
Further entries of the subject follow this initial exposition, either immediately (as, for example, in Fugue No. 1 in C major, BWV 846, of the Well-Tempered Clavier) or separated by episodes. Episodic material is always modulatory and is usually based upon some element heard in the exposition. Each episode has the primary function of preparing the next entry of the subject in a new key, and may also provide release from the strictness of form employed in the exposition and middle entries. André Gedalge states that the episode of the fugue is generally based on a series of imitations of the subject that have been fragmented.
Further entries of the subject, or middle entries, occur throughout the fugue. They must state the subject or answer at least once in its entirety, and may also be heard in combination with the countersubject(s) from the exposition, new countersubjects, free counterpoint, or any of these in combination. It is uncommon for the subject to enter alone in a single voice in the middle entries as in the exposition; rather, it is usually heard with at least one of the countersubjects and/or other free contrapuntal accompaniments.
Middle entries tend to occur at pitches other than the initial one. As described in the typical structure above, these are often closely related keys such as the relative major or minor, the dominant, and the subdominant, although the key structure of fugues varies greatly. In the fugues of J.S. Bach, the first middle entry occurs most often in the relative major or minor of the work's overall key, and is followed by an entry in the dominant of the relative major or minor when the fugue's subject requires a tonal answer. In the fugues of earlier composers (notably Buxtehude and Pachelbel), middle entries in keys other than the tonic and dominant tend to be the exception, and non-modulation the norm. One famous example of such a non-modulating fugue occurs in Buxtehude's Praeludium (Fugue and Chaconne) in C, BuxWV 137.
When there is no entrance of the subject and answer material, the composer can develop the subject by altering it, often by inversion. Such a passage is called an episode, although the term is sometimes used synonymously with middle entry and may also describe the exposition of completely new subjects, as in a double fugue for example (see below). In any of the entries within a fugue, the subject may be altered by inversion, retrograde (a less common form in which the entire subject is heard back-to-front), diminution (the reduction of the subject's rhythmic values by a certain factor), augmentation (the increase of the subject's rhythmic values by a certain factor), or any combination of them.
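These transformations are mechanical enough to be illustrated in a few lines of code. The sketch below is only an illustration: the toy subject and its encoding as (pitch, duration) pairs are invented for the example, not taken from the article.

```python
# Illustrative sketch of the subject transformations described above.
# A "subject" is modelled as a list of (pitch, duration) pairs:
# pitch in semitones above the tonic, duration in beats (hypothetical encoding).

subject = [(0, 1.0), (2, 0.5), (4, 0.5), (7, 2.0)]  # an invented four-note subject

def inversion(s):
    """Mirror each pitch about the starting note (upside-down subject)."""
    return [(-p, d) for p, d in s]

def retrograde(s):
    """Play the subject back-to-front."""
    return list(reversed(s))

def augmentation(s, factor=2):
    """Lengthen every rhythmic value by the given factor."""
    return [(p, d * factor) for p, d in s]

def diminution(s, factor=2):
    """Shorten every rhythmic value by the given factor."""
    return [(p, d / factor) for p, d in s]

print(inversion(subject))     # [(0, 1.0), (-2, 0.5), (-4, 0.5), (-7, 2.0)]
print(retrograde(subject))    # [(7, 2.0), (4, 0.5), (2, 0.5), (0, 1.0)]
print(augmentation(subject))  # [(0, 2.0), (2, 1.0), (4, 1.0), (7, 4.0)]
```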
The excerpt below, bars 7–12 of J.S. Bach's Fugue No. 2 in C minor, BWV 847, from the Well-Tempered Clavier, Book 1, illustrates the application of most of the characteristics described above. The fugue is for keyboard and in three voices, with regular countersubjects. This excerpt opens at the last entry of the exposition: the subject is sounding in the bass, the first countersubject in the treble, while the middle voice is stating a second version of the second countersubject, which concludes with the characteristic rhythm of the subject and is always used together with the first version of the second countersubject. Following this, an episode modulates from the tonic to the relative major by means of sequence, in the form of an accompanied canon at the fourth. Arrival in E♭ major is marked by a quasi-perfect cadence across the bar line, from the last quarter-note beat of the first bar to the first beat of the second bar in the second system, and by the first middle entry. Here, Bach has altered the second countersubject to accommodate the change of mode.
At any point in the fugue there may be "false entries" of the subject, which include the start of the subject but are not completed. False entries are often abbreviated to the head of the subject, and anticipate the "true" entry of the subject, heightening the impact of the subject proper.
The counter-exposition is a second exposition. However, there are only two entries, and the entries occur in reverse order. The counter-exposition in a fugue is separated from the exposition by an episode and is in the same key as the original exposition.
Sometimes counter-expositions or the middle entries take place in stretto, whereby one voice responds with the subject/answer before the first voice has completed its entry of the subject/answer, usually increasing the intensity of the music.
In a stretto, only one entry of the subject need be heard in its entirety. A stretto in which the subject/answer is heard in completion in all voices, however, is known as stretto maestrale or grand stretto. Strettos may also occur by inversion, augmentation and diminution. A fugue in which the opening exposition takes place in stretto form is known as a close fugue or stretto fugue (see, for example, the Gratias agimus tibi and Dona nobis pacem choruses from J.S. Bach's Mass in B minor).
The closing section of a fugue often includes one or two counter-expositions, and possibly a stretto, in the tonic; sometimes over a tonic or dominant pedal note. Any material that follows the final entry of the subject is considered to be the final coda and is normally cadential.
A simple fugue has only one subject, and does not utilize invertible counterpoint.
A double fugue has two subjects that are often developed simultaneously. Similarly, a triple fugue has three subjects. There are two kinds of double (triple) fugue: (a) a fugue in which the second (third) subject is (are) presented simultaneously with the subject in the exposition (e.g. as in Kyrie Eleison of Mozart's Requiem in D minor or the fugue of Bach's Passacaglia and Fugue in C minor, BWV 582), and (b) a fugue in which all subjects have their own expositions at some point, and they are not combined until later (see for example, the three-subject Fugue No. 14 in F♯ minor from Bach's Well-Tempered Clavier Book 2, or more famously, Bach's "St. Anne" Fugue in E♭ major, BWV 552, a triple fugue for organ.)
A counter-fugue is a fugue in which the first answer is presented as the subject in inversion (upside down), and the inverted subject continues to feature prominently throughout the fugue. Examples include Contrapunctus V through Contrapunctus VII, from Bach's The Art of Fugue.
Permutation fugue describes a type of composition (or technique of composition) in which elements of fugue and strict canon are combined. Each voice enters in succession with the subject, each entry alternating between tonic and dominant, and each voice, having stated the initial subject, continues by stating two or more themes (or countersubjects), which must be conceived in correct invertible counterpoint. (In other words, the subject and countersubjects must be capable of being played both above and below all the other themes without creating any unacceptable dissonances.) Each voice takes this pattern and states all the subjects/themes in the same order (and repeats the material when all the themes have been stated, sometimes after a rest).
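The rotation described here, in which each voice enters in turn and then cycles through the same themes in the same order, can be sketched schematically. The snippet below is only an illustration of that scheme; the voice names, theme labels and segment count are invented for the example, and the alternation of tonic and dominant entries is not modelled.

```python
# Schematic sketch of a permutation-fugue entry plan, as described above:
# each voice enters one segment later than the previous voice and then
# cycles through the themes (subject, then countersubjects) in a fixed order.

THEMES = ["subject", "countersubject 1", "countersubject 2"]  # hypothetical labels
VOICES = ["soprano", "alto", "tenor", "bass"]

def entry_plan(segments=6):
    """For each time segment, list what each voice is singing ('-' before it enters)."""
    plan = []
    for t in range(segments):
        row = {}
        for i, voice in enumerate(VOICES):
            if t < i:
                row[voice] = "-"                            # voice has not yet entered
            else:
                row[voice] = THEMES[(t - i) % len(THEMES)]  # cycle through the themes
        plan.append(row)
    return plan

for t, row in enumerate(entry_plan(), start=1):
    print(f"segment {t}: " + ", ".join(f"{v}: {m}" for v, m in row.items()))
```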
There is usually very little non-structural/thematic material. During the course of a permutation fugue, it is quite uncommon for every single possible voice combination (or "permutation") of the themes to be heard, simply because the number of possible permutations grows rapidly with the number of voices. Composers therefore exercise judgment as to the most musical permutations and the processes leading to them. One example of a permutation fugue can be seen in the eighth and final chorus of J.S. Bach's cantata, Himmelskönig, sei willkommen, BWV 182.
Permutation fugues differ from conventional fugues in that there are no connecting episodes and no statement of the themes in related keys. Thus, for example, the fugue of Bach's Passacaglia and Fugue in C minor, BWV 582, is not purely a permutation fugue, as it does have episodes between permutation expositions. Invertible counterpoint is essential to permutation fugues but is not found in simple fugues.
A fughetta is a short fugue that has the same basic characteristics as a full fugue. Often the contrapuntal writing is not strict and the setting is less formal. See, for example, variation 24 of Beethoven's Diabelli Variations, Op. 120.
The term fuga was used as far back as the Middle Ages, but was initially used to refer to any kind of imitative counterpoint, including canons, which are now thought of as distinct from fugues. Prior to the 16th century, the fugue was a genre rather than a compositional technique. It was not until the 16th century that fugal technique as it is understood today began to be seen in pieces, both instrumental and vocal. Fugal writing is found in works such as fantasias, ricercares and canzonas.
"Fugue" as a theoretical term first occurred in 1330 when Jacobus of Liege wrote about the fuga in his Speculum musicae. The fugue arose from the technique of "imitation", where the same musical material was repeated starting on a different note.
Gioseffo Zarlino, a composer, author, and theorist in the Renaissance, was one of the first to distinguish between the two types of imitative counterpoint: fugues and canons (which he called imitations). Originally, this was to aid improvisation, but by the 1550s, it was considered a technique of composition. The composer Giovanni Pierluigi da Palestrina (1525?–1594) wrote masses using modal counterpoint and imitation, and fugal writing became the basis for writing motets as well. Palestrina's imitative motets differed from fugues in that each phrase of the text had a different subject which was introduced and worked out separately, whereas a fugue continued working with the same subject or subjects throughout the entire length of the piece.
It was in the Baroque period that the writing of fugues became central to composition, in part as a demonstration of compositional expertise. Fugues were incorporated into a variety of musical forms. Jan Pieterszoon Sweelinck, Girolamo Frescobaldi, Johann Jakob Froberger and Dieterich Buxtehude all wrote fugues, and George Frideric Handel included them in many of his oratorios. Keyboard suites from this time often conclude with a fugal gigue. Domenico Scarlatti has only a few fugues among his corpus of over 500 harpsichord sonatas. The French overture featured a quick fugal section after a slow introduction. The second movement of a sonata da chiesa, as written by Arcangelo Corelli and others, was usually fugal.
The Baroque period also saw a rise in the importance of music theory. Some fugues during the Baroque period were pieces designed to teach contrapuntal technique to students. The most influential text was Johann Joseph Fux's Gradus Ad Parnassum ("Steps to Parnassus"), which appeared in 1725. This work laid out the terms of "species" of counterpoint, and offered a series of exercises to learn fugue writing. Fux's work was largely based on the practice of Palestrina's modal fugues. Mozart studied from this book, and it remained influential into the nineteenth century. Haydn, for example, taught counterpoint from his own summary of Fux and thought of it as the basis for formal structure.
Bach's most famous fugues are those for the harpsichord in The Well-Tempered Clavier, which many composers and theorists look at as the greatest model of fugue. The Well-Tempered Clavier comprises two volumes written in different times of Bach's life, each comprising 24 prelude and fugue pairs, one for each major and minor key. Bach is also known for his organ fugues, which are usually preceded by a prelude or toccata. The Art of Fugue, BWV 1080, is a collection of fugues (and four canons) on a single theme that is gradually transformed as the cycle progresses. Bach also wrote smaller single fugues and put fugal sections or movements into many of his more general works. J.S. Bach's influence extended forward through his son C.P.E. Bach and through the theorist Friedrich Wilhelm Marpurg (1718–1795) whose Abhandlung von der Fuge ("Treatise on the fugue", 1753) was largely based on J.S. Bach's work.
During the Classical era, the fugue was no longer a central or even fully natural mode of musical composition. Nevertheless, both Haydn and Mozart had periods of their careers in which they in some sense "rediscovered" fugal writing and used it frequently in their work.
Joseph Haydn was the leader of fugal composition and technique in the Classical era. Haydn's most famous fugues can be found in his "Sun" Quartets (op. 20, 1772), of which three have fugal finales. This was a practice that Haydn repeated only once later in his quartet-writing career, with the finale of his String Quartet, Op. 50 No. 4 (1787). Some of the earliest examples of Haydn's use of counterpoint, however, are in three symphonies (No. 3, No. 13, and No. 40) that date from 1762 to 1763. The earliest fugues, in both the symphonies and in the Baryton trios, exhibit the influence of Joseph Fux's treatise on counterpoint, Gradus ad Parnassum (1725), which Haydn studied carefully.
Haydn's second fugal period occurred after he heard, and was greatly inspired by, the oratorios of Handel during his visits to London (1791–1793, 1794–1795). Haydn then studied Handel's techniques and incorporated Handelian fugal writing into the choruses of his mature oratorios The Creation and The Seasons, as well as several of his later symphonies, including No. 88, No. 95, and No. 101; and the late string quartets, Opus 71 no. 3 and (especially) Opus 76 no. 6.
The young Wolfgang Amadeus Mozart studied counterpoint with Padre Martini in Bologna. Under the employment of Archbishop Colloredo, and the musical influence of his predecessors and colleagues such as Johann Ernst Eberlin, Anton Cajetan Adlgasser, Michael Haydn, and his own father, Leopold Mozart, at the Salzburg Cathedral, the young Mozart composed ambitious fugues and contrapuntal passages in Catholic choral works such as Mass in C minor, K. 139 "Waisenhaus" (1768), Mass in C major, K. 66 "Dominicus" (1769), Mass in C major, K. 167 "in honorem Sanctissimae Trinitatis" (1773), Mass in C major, K. 262 "Missa longa" (1775), Mass in C major, K. 337 "Solemnis" (1780), various litanies, and vespers. Leopold admonished his son openly in 1777 that he not forget to make public demonstration of his abilities in "fugue, canon, and contrapunctus". Later in life, the major impetus to fugal writing for Mozart was the influence of Baron Gottfried van Swieten in Vienna around 1782. Van Swieten, during diplomatic service in Berlin, had taken the opportunity to collect as many manuscripts by Bach and Handel as he could, and he invited Mozart to study his collection and encouraged him to transcribe various works for other combinations of instruments. Mozart was evidently fascinated by these works and wrote a set of five transcriptions for string quartet, K. 405 (1782), of fugues from Bach's Well-Tempered Clavier, introducing them with preludes of his own. In a letter to his sister Nannerl, dated in Vienna on 20 April 1782, Mozart acknowledged that he had not written anything in this form, but, moved by his wife's interest, he had composed one piece, which he sent with the letter. He begged her not to let anybody see the fugue and expressed the hope of writing five more and then presenting them to Baron van Swieten. Regarding the piece, he said "I have taken particular care to write andante maestoso upon it, so that it should not be played fast – for if a fugue is not played slowly the ear cannot clearly distinguish the new subject as it is introduced and the effect is missed". Mozart then set to writing fugues on his own, mimicking the Baroque style. These included a fugue in C minor, K. 426, for two pianos (1783). Later, Mozart incorporated fugal writing into his opera Die Zauberflöte and the finale of his Symphony No. 41.
The parts of the Requiem he completed also contain several fugues (most notably the Kyrie, and the three fugues in the Domine Jesu; he also left behind a sketch for an Amen fugue which, some believe, would have come at the end of the Sequentia).
Ludwig van Beethoven was familiar with fugal writing from childhood, as an important part of his training was playing from The Well-Tempered Clavier. During his early career in Vienna, Beethoven attracted notice for his performance of these fugues. There are fugal sections in Beethoven's early piano sonatas, and fugal writing is to be found in the second and fourth movements of the Eroica Symphony (1805). Beethoven incorporated fugues in his sonatas, and reshaped the episode's purpose and compositional technique for later generations of composers.
Nevertheless, fugues did not take on a truly central role in Beethoven's work until his late period. The finale of Beethoven's Hammerklavier Sonata contains a fugue, which was practically unperformed until the late 19th century, due to its tremendous technical difficulty and length. The last movement of his Cello Sonata, Op. 102 No. 2 is a fugue, and there are fugal passages in the last movements of his Piano Sonatas in A major, Op. 101 and A♭ major Op. 110. According to Charles Rosen, "With the finale of 110, Beethoven re-conceived the significance of the most traditional elements of fugue writing."
Fugal passages are also found in the Missa Solemnis and all movements of the Ninth Symphony, except the third. A massive, dissonant fugue forms the finale of his String Quartet, Op. 130 (1825); the latter was later published separately as Op. 133, the Große Fuge ("Great Fugue"). However, it is the fugue that opens Beethoven's String Quartet in C♯ minor, Op. 131 that several commentators regard as one of the composer's greatest achievements. Joseph Kerman (1966, p. 330) calls it "this most moving of all fugues". J. W. N. Sullivan (1927, p. 235) hears it as "the most superhuman piece of music that Beethoven has ever written." Philip Radcliffe (1965, p. 149) says "[a] bare description of its formal outline can give but little idea of the extraordinary profundity of this fugue ."
By the beginning of the Romantic era, fugue writing had become specifically attached to the norms and styles of the Baroque. Felix Mendelssohn wrote many fugues inspired by his study of the music of Johann Sebastian Bach.
Johannes Brahms' Variations and Fugue on a Theme by Handel, Op. 24, is a work for solo piano written in 1861. It consists of a set of twenty-five variations and a concluding fugue, all based on a theme from George Frideric Handel's Harpsichord Suite No. 1 in B♭ major, HWV 434.
Franz Liszt's Piano Sonata in B minor (1853) contains a powerful fugue, demanding incisive virtuosity from its player:
Richard Wagner included several fugues in his opera Die Meistersinger von Nürnberg. Giuseppe Verdi included a whimsical example at the end of his opera Falstaff and his setting of the Requiem Mass contained two (originally three) choral fugues. Anton Bruckner and Gustav Mahler also included them in their respective symphonies. The exposition of the finale of Bruckner's Symphony No. 5 begins with a fugal exposition. The exposition ends with a chorale, the melody of which is then used as a second fugal exposition at the beginning of the development. The recapitulation features both fugal subjects concurrently. The finale of Mahler's Symphony No. 5 features a "fugue-like" passage early in the movement, though this is not actually an example of a fugue.
Twentieth-century composers brought fugue back to its position of prominence, realizing its uses in full instrumental works, its importance in development and introductory sections, and the developmental capabilities of fugal composition.
The second movement of Maurice Ravel's piano suite Le Tombeau de Couperin (1917) is a fugue that Roy Howat (2000, p. 88) describes as having "a subtle glint of jazz". Béla Bartók's Music for Strings, Percussion and Celesta (1936) opens with a slow fugue that Pierre Boulez (1986, pp. 346–47) regards as "certainly the finest and most characteristic example of Bartók's subtle style... probably the most timeless of all Bartók's works – a fugue that unfolds like a fan to a point of maximum intensity and then closes, returning to the mysterious atmosphere of the opening." The second movement of Bartók's Sonata for Solo Violin is a fugue, and the first movement of his Sonata for Two Pianos and Percussion contains a fugato.
Schwanda the Bagpiper (Czech: Švanda dudák), written in 1926, an opera in two acts (five scenes), with music by Jaromír Weinberger, includes a Polka followed by a powerful Fugue based on the Polka theme.
Igor Stravinsky also incorporated fugues into his works, including the Symphony of Psalms and the Dumbarton Oaks concerto. Stravinsky recognized the compositional techniques of Bach, and in the second movement of his Symphony of Psalms (1930), he lays out a fugue that is much like that of the Baroque era. It employs a double fugue with two distinct subjects, the first beginning in C and the second in E♭. Techniques such as stretto, sequencing, and the use of subject incipits are frequently heard in the movement. Dmitri Shostakovich's 24 Preludes and Fugues is the composer's homage to Bach's two volumes of The Well-Tempered Clavier. The first movement of his Fourth Symphony contains, starting at rehearsal mark 63, a gigantic fugue in which the 20-bar subject (and tonal answer) consists entirely of semiquavers, played at the speed of quaver = 168.
Olivier Messiaen, writing about his Vingt regards sur l'enfant-Jésus (1944) wrote of the sixth piece of that collection, "Par Lui tout a été fait" ("By Him were all things made"):
It expresses the Creation of All Things: space, time, stars, planets – and the Countenance (or rather, the Thought) of God behind the flames and the seething – impossible even to speak of it, I have not attempted to describe it ... Instead, I have sheltered behind the form of the Fugue. Bach's Art of Fugue and the fugue from Beethoven's Opus 106 (the Hammerklavier sonata) have nothing to do with the academic fugue. Like those great models, this one is an anti-scholastic fugue.
György Ligeti wrote a five-part double fugue for his Requiem's second movement, the Kyrie, in which each part (SMATB) is subdivided in four-voice "bundles" that make a canon. The melodic material in this fugue is totally chromatic, with melismatic (running) parts overlaid onto skipping intervals, and use of polyrhythm (multiple simultaneous subdivisions of the measure), blurring everything both harmonically and rhythmically so as to create an aural aggregate, thus highlighting the theoretical/aesthetic question of the next section as to whether fugue is a form or a texture. According to Tom Service, in this work, Ligeti
takes the logic of the fugal idea and creates something that's meticulously built on precise contrapuntal principles of imitation and fugality, but he expands them into a different region of musical experience. Ligeti doesn't want us to hear individual entries of the subject or any subject, or to allow us access to the labyrinth through listening in to individual lines… He creates instead a vastly dense texture of voices in his choir and orchestra, a huge stratified slab of terrifying visionary power. Yet this is music that's made with a fine craft and detail of a Swiss clock maker. Ligeti's so-called 'micro-polyphony': the many voicedness of small intervals at small distances in time from one another is a kind of conjuring trick. At the micro level of the individual lines, and there are dozens and dozens of them in this music...there's an astonishing detail and finesse, but the overall macro effect is a huge overwhelming and singular experience.
Benjamin Britten used a fugue in the final part of The Young Person's Guide to the Orchestra (1946). The Henry Purcell theme is triumphantly cited at the end, making it a choral fugue.
Canadian pianist and musical thinker Glenn Gould composed So You Want to Write a Fugue?, a full-scale fugue set to a text that cleverly explicates its own musical form.
Fugues (or fughettas/fugatos) have been incorporated into genres outside Western classical music. Several examples exist within jazz, such as Bach goes to Town, composed by the Welsh composer Alec Templeton and recorded by Benny Goodman in 1938, and Concorde composed by John Lewis and recorded by the Modern Jazz Quartet in 1955.
In "Fugue for Tinhorns" from the Broadway musical Guys and Dolls, written by Frank Loesser, the characters Nicely-Nicely, Benny, and Rusty sing simultaneously about hot tips they each have in an upcoming horse race.
In "West Side Story", the dance sequence following the song "Cool" is structured as a fugue. Interestingly, Leonard Bernstein quotes Beethoven's monumental "Große Fuge" for string quartet and employs Arnold Schoenberg's twelve tone technique, all in the context of a jazz infused Broadway show stopper.
A few examples also exist within progressive rock, such as the central movement of "The Endless Enigma" by Emerson, Lake & Palmer and "On Reflection" by Gentle Giant.
On their EP of the same name, Vulfpeck has a composition called "Fugue State", which incorporates a fugue-like section between Theo Katzman (guitar), Joe Dart (bass), and Woody Goss (Wurlitzer keyboard).
The composer Matyas Seiber included an atonal or twelve-tone fugue, for flute, trumpet and string quartet, in his score for the 1953 film Graham Sutherland.
The film composer John Williams includes a fugue in his score for the 1990 film, Home Alone, at the point where Kevin, accidentally left at home by his family, and realizing he is about to be attacked by a pair of bumbling burglars, begins to plan his elaborate defenses. Another fugue occurs at a similar point in the 1992 sequel film, Home Alone 2: Lost in New York.
The jazz composer and film composer, Michel Legrand, includes a fugue as the climax of his score (a classical theme with variations, and fugue) for Joseph Losey's 1972 film The Go-Between, based on the 1953 novel by British novelist, L.P. Hartley, as well as several times in his score for Jacques Demy's 1970 film Peau d'âne.
A widespread view of the fugue is that it is not a musical form but rather a technique of composition.
The Austrian musicologist Erwin Ratz argues that the formal organization of a fugue involves not only the arrangement of its theme and episodes, but also its harmonic structure. In particular, the exposition and coda tend to emphasize the tonic key, whereas the episodes usually explore more distant tonalities. Ratz stressed, however, that this is the core, underlying form ("Urform") of the fugue, from which individual fugues may deviate.
Although certain related keys are more commonly explored in fugal development, the overall structure of a fugue does not limit its harmonic structure. For example, a fugue may not even explore the dominant, one of the keys most closely related to the tonic. Bach's Fugue in B♭ major from Book 1 of the Well-Tempered Clavier explores the relative minor, the supertonic and the subdominant. This is unlike later forms such as the sonata, which clearly prescribes which keys are explored (typically the tonic and dominant in an ABA form). Furthermore, many modern fugues dispense with traditional tonal harmonic scaffolding altogether and either use serial (pitch-oriented) rules or, as in the Kyrie/Christe of György Ligeti's Requiem or in works of Witold Lutosławski, use panchromatic, or even denser, harmonic spectra.
The fugue is the most complex of contrapuntal forms. In Ratz's words, "fugal technique significantly burdens the shaping of musical ideas, and it was given only to the greatest geniuses, such as Bach and Beethoven, to breathe life into such an unwieldy form and make it the bearer of the highest thoughts." In presenting Bach's fugues as among the greatest of contrapuntal works, Peter Kivy points out that "counterpoint itself, since time out of mind, has been associated in the thinking of musicians with the profound and the serious" and argues that "there seems to be some rational justification for their doing so."
This is related to the idea that restrictions create freedom for the composer, by directing their efforts. He also points out that fugal writing has its roots in improvisation, and was, during the Renaissance, practiced as an improvisatory art. Writing in 1555, Nicola Vicentino, for example, suggests that:
the composer, having completed the initial imitative entrances, take the passage which has served as accompaniment to the theme and make it the basis for new imitative treatment, so that "he will always have material with which to compose without having to stop and reflect". This formulation of the basic rule for fugal improvisation anticipates later sixteenth-century discussions which deal with the improvisational technique at the keyboard more extensively.
"title": "Types"
},
{
"paragraph_id": 29,
"text": "There is usually very little non-structural/thematic material. During the course of a permutation fugue, it is quite uncommon, actually, for every single possible voice-combination (or \"permutation\") of the themes to be heard. This limitation exists in consequence of sheer proportionality: the more voices in a fugue, the greater the number of possible permutations. In consequence, composers exercise editorial judgment as to the most musical of permutations and processes leading thereto. One example of permutation fugue can be seen in the eighth and final chorus of J.S. Bach's cantata, Himmelskönig, sei willkommen, BWV 182.",
"title": "Types"
},
{
"paragraph_id": 30,
"text": "Permutation fugues differ from conventional fugue in that there are no connecting episodes, nor statement of the themes in related keys. So for example, the fugue of Bach's Passacaglia and Fugue in C minor, BWV 582 is not purely a permutation fugue, as it does have episodes between permutation expositions. Invertible counterpoint is essential to permutation fugues but is not found in simple fugues.",
"title": "Types"
},
{
"paragraph_id": 31,
"text": "A fughetta is a short fugue that has the same characteristics as a fugue. Often the contrapuntal writing is not strict, and the setting less formal. See for example, variation 24 of Beethoven's Diabelli Variations Op. 120.",
"title": "Types"
},
{
"paragraph_id": 32,
"text": "The term fuga was used as far back as the Middle Ages, but was initially used to refer to any kind of imitative counterpoint, including canons, which are now thought of as distinct from fugues. Prior to the 16th century, fugue was originally a genre. It was not until the 16th century that fugal technique as it is understood today began to be seen in pieces, both instrumental and vocal. Fugal writing is found in works such as fantasias, ricercares and canzonas.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "\"Fugue\" as a theoretical term first occurred in 1330 when Jacobus of Liege wrote about the fuga in his Speculum musicae. The fugue arose from the technique of \"imitation\", where the same musical material was repeated starting on a different note.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "Gioseffo Zarlino, a composer, author, and theorist in the Renaissance, was one of the first to distinguish between the two types of imitative counterpoint: fugues and canons (which he called imitations). Originally, this was to aid improvisation, but by the 1550s, it was considered a technique of composition. The composer Giovanni Pierluigi da Palestrina (1525?–1594) wrote masses using modal counterpoint and imitation, and fugal writing became the basis for writing motets as well. Palestrina's imitative motets differed from fugues in that each phrase of the text had a different subject which was introduced and worked out separately, whereas a fugue continued working with the same subject or subjects throughout the entire length of the piece.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "It was in the Baroque period that the writing of fugues became central to composition, in part as a demonstration of compositional expertise. Fugues were incorporated into a variety of musical forms. Jan Pieterszoon Sweelinck, Girolamo Frescobaldi, Johann Jakob Froberger and Dieterich Buxtehude all wrote fugues, and George Frideric Handel included them in many of his oratorios. Keyboard suites from this time often conclude with a fugal gigue. Domenico Scarlatti has only a few fugues among his corpus of over 500 harpsichord sonatas. The French overture featured a quick fugal section after a slow introduction. The second movement of a sonata da chiesa, as written by Arcangelo Corelli and others, was usually fugal.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "The Baroque period also saw a rise in the importance of music theory. Some fugues during the Baroque period were pieces designed to teach contrapuntal technique to students. The most influential text was Johann Joseph Fux's Gradus Ad Parnassum (\"Steps to Parnassus\"), which appeared in 1725. This work laid out the terms of \"species\" of counterpoint, and offered a series of exercises to learn fugue writing. Fux's work was largely based on the practice of Palestrina's modal fugues. Mozart studied from this book, and it remained influential into the nineteenth century. Haydn, for example, taught counterpoint from his own summary of Fux and thought of it as the basis for formal structure.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "Bach's most famous fugues are those for the harpsichord in The Well-Tempered Clavier, which many composers and theorists look at as the greatest model of fugue. The Well-Tempered Clavier comprises two volumes written in different times of Bach's life, each comprising 24 prelude and fugue pairs, one for each major and minor key. Bach is also known for his organ fugues, which are usually preceded by a prelude or toccata. The Art of Fugue, BWV 1080, is a collection of fugues (and four canons) on a single theme that is gradually transformed as the cycle progresses. Bach also wrote smaller single fugues and put fugal sections or movements into many of his more general works. J.S. Bach's influence extended forward through his son C.P.E. Bach and through the theorist Friedrich Wilhelm Marpurg (1718–1795) whose Abhandlung von der Fuge (\"Treatise on the fugue\", 1753) was largely based on J.S. Bach's work.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "During the Classical era, the fugue was no longer a central or even fully natural mode of musical composition. Nevertheless, both Haydn and Mozart had periods of their careers in which they in some sense \"rediscovered\" fugal writing and used it frequently in their work.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Joseph Haydn was the leader of fugal composition and technique in the Classical era. Haydn's most famous fugues can be found in his \"Sun\" Quartets (op. 20, 1772), of which three have fugal finales. This was a practice that Haydn repeated only once later in his quartet-writing career, with the finale of his String Quartet, Op. 50 No. 4 (1787). Some of the earliest examples of Haydn's use of counterpoint, however, are in three symphonies (No. 3, No. 13, and No. 40) that date from 1762 to 1763. The earliest fugues, in both the symphonies and in the Baryton trios, exhibit the influence of Joseph Fux's treatise on counterpoint, Gradus ad Parnassum (1725), which Haydn studied carefully.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "Haydn's second fugal period occurred after he heard, and was greatly inspired by, the oratorios of Handel during his visits to London (1791–1793, 1794–1795). Haydn then studied Handel's techniques and incorporated Handelian fugal writing into the choruses of his mature oratorios The Creation and The Seasons, as well as several of his later symphonies, including No. 88, No. 95, and No. 101; and the late string quartets, Opus 71 no. 3 and (especially) Opus 76 no. 6.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "The young Wolfgang Amadeus Mozart studied counterpoint with Padre Martini in Bologna. Under the employment of Archbishop Colloredo, and the musical influence of his predecessors and colleagues such as Johann Ernst Eberlin, Anton Cajetan Adlgasser, Michael Haydn, and his own father, Leopold Mozart at the Salzburg Cathedral, the young Mozart composed ambitious fugues and contrapuntal passages in Catholic choral works such as Mass in C minor, K. 139 \"Waisenhaus\" (1768), Mass in C major, K. 66 \"Dominicus\" (1769), Mass in C major, K. 167 \"in honorem Sanctissimae Trinitatis\" (1773), Mass in C major, K. 262 \"Missa longa\" (1775), Mass in C major, K. 337 \"Solemnis\" (1780), various litanies, and vespers. Leopold admonished his son openly in 1777 that he not forget to make public demonstration of his abilities in \"fugue, canon, and contrapunctus\". Later in life, the major impetus to fugal writing for Mozart was the influence of Baron Gottfried van Swieten in Vienna around 1782. Van Swieten, during diplomatic service in Berlin, had taken the opportunity to collect as many manuscripts by Bach and Handel as he could, and he invited Mozart to study his collection and encouraged him to transcribe various works for other combinations of instruments. Mozart was evidently fascinated by these works and wrote a set of five transcriptions for string quartet, K. 405 (1782), of fugues from Bach's Well-Tempered Clavier, introducing them with preludes of his own. In a letter to his sister Nannerl Mozart, dated in Vienna on 20 April 1782, Mozart recognizes that he had not written anything in this form, but moved by his wife's interest he composed one piece, which is sent with the letter. He begs her not to let anybody see the fugue and manifests the hope to write five more and then present them to Baron van Swieten. Regarding the piece, he said \"I have taken particular care to write andante maestoso upon it, so that it should not be played fast – for if a fugue is not played slowly the ear cannot clearly distinguish the new subject as it is introduced and the effect is missed\". Mozart then set to writing fugues on his own, mimicking the Baroque style. These included a fugue in C minor, K. 426, for two pianos (1783). Later, Mozart incorporated fugal writing into his opera Die Zauberflöte and the finale of his Symphony No. 41.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "The parts of the Requiem he completed also contain several fugues (most notably the Kyrie, and the three fugues in the Domine Jesu; he also left behind a sketch for an Amen fugue which, some believe, would have come at the end of the Sequentia).",
"title": "History"
},
{
"paragraph_id": 43,
"text": "Ludwig van Beethoven was familiar with fugal writing from childhood, as an important part of his training was playing from The Well-Tempered Clavier. During his early career in Vienna, Beethoven attracted notice for his performance of these fugues. There are fugal sections in Beethoven's early piano sonatas, and fugal writing is to be found in the second and fourth movements of the Eroica Symphony (1805). Beethoven incorporated fugues in his sonatas, and reshaped the episode's purpose and compositional technique for later generations of composers.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "Nevertheless, fugues did not take on a truly central role in Beethoven's work until his late period. The finale of Beethoven's Hammerklavier Sonata contains a fugue, which was practically unperformed until the late 19th century, due to its tremendous technical difficulty and length. The last movement of his Cello Sonata, Op. 102 No. 2 is a fugue, and there are fugal passages in the last movements of his Piano Sonatas in A major, Op. 101 and A♭ major Op. 110. According to Charles Rosen, \"With the finale of 110, Beethoven re-conceived the significance of the most traditional elements of fugue writing.\"",
"title": "History"
},
{
"paragraph_id": 45,
"text": "Fugal passages are also found in the Missa Solemnis and all movements of the Ninth Symphony, except the third. A massive, dissonant fugue forms the finale of his String Quartet, Op. 130 (1825); the latter was later published separately as Op. 133, the Große Fuge (\"Great Fugue\"). However, it is the fugue that opens Beethoven's String Quartet in C♯ minor, Op. 131 that several commentators regard as one of the composer's greatest achievements. Joseph Kerman (1966, p. 330) calls it \"this most moving of all fugues\". J. W. N. Sullivan (1927, p. 235) hears it as \"the most superhuman piece of music that Beethoven has ever written.\" Philip Radcliffe (1965, p. 149) says \"[a] bare description of its formal outline can give but little idea of the extraordinary profundity of this fugue .\"",
"title": "History"
},
{
"paragraph_id": 46,
"text": "By the beginning of the Romantic era, fugue writing had become specifically attached to the norms and styles of the Baroque. Felix Mendelssohn wrote many fugues inspired by his study of the music of Johann Sebastian Bach.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "Johannes Brahms' Variations and Fugue on a Theme by Handel, Op. 24, is a work for solo piano written in 1861. It consists of a set of twenty-five variations and a concluding fugue, all based on a theme from George Frideric Handel's Harpsichord Suite No. 1 in B♭ major, HWV 434.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "Franz Liszt's Piano Sonata in B minor (1853) contains a powerful fugue, demanding incisive virtuosity from its player:",
"title": "History"
},
{
"paragraph_id": 49,
"text": "Richard Wagner included several fugues in his opera Die Meistersinger von Nürnberg. Giuseppe Verdi included a whimsical example at the end of his opera Falstaff and his setting of the Requiem Mass contained two (originally three) choral fugues. Anton Bruckner and Gustav Mahler also included them in their respective symphonies. The exposition of the finale of Bruckner's Symphony No. 5 begins with a fugal exposition. The exposition ends with a chorale, the melody of which is then used as a second fugal exposition at the beginning of the development. The recapitulation features both fugal subjects concurrently. The finale of Mahler's Symphony No. 5 features a \"fugue-like\" passage early in the movement, though this is not actually an example of a fugue.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "Twentieth-century composers brought fugue back to its position of prominence, realizing its uses in full instrumental works, its importance in development and introductory sections, and the developmental capabilities of fugal composition.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "The second movement of Maurice Ravel's piano suite Le Tombeau de Couperin (1917) is a fugue that Roy Howat (200, p. 88) describes as having \"a subtle glint of jazz\". Béla Bartók's Music for Strings, Percussion and Celesta (1936) opens with a slow fugue that Pierre Boulez (1986, pp. 346–47) regards as \"certainly the finest and most characteristic example of Bartók's subtle style... probably the most timeless of all Bartók's works – a fugue that unfolds like a fan to a point of maximum intensity and then closes, returning to the mysterious atmosphere of the opening.\" The second movement of Bartók's Sonata for Solo Violin is a fugue, and the first movement of his Sonata for Two Pianos and Percussion contains a fugato.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "Schwanda the Bagpiper (Czech: Švanda dudák), written in 1926, an opera in two acts (five scenes), with music by Jaromír Weinberger, includes a Polka followed by a powerful Fugue based on the Polka theme.",
"title": "History"
},
{
"paragraph_id": 53,
"text": "Igor Stravinsky also incorporated fugues into his works, including the Symphony of Psalms and the Dumbarton Oaks concerto. Stravinsky recognized the compositional techniques of Bach, and in the second movement of his Symphony of Psalms (1930), he lays out a fugue that is much like that of the Baroque era. It employs a double fugue with two distinct subjects, the first beginning in C and the second in E♭. Techniques such as stretto, sequencing, and the use of subject incipits are frequently heard in the movement. Dmitri Shostakovich's 24 Preludes and Fugues is the composer's homage to Bach's two volumes of The Well-Tempered Clavier. In the first movement of his Fourth Symphony, starting at rehearsal mark 63, is a gigantic fugue in which the 20-bar subject (and tonal answer) consist entirely of semiquavers, played at the speed of quaver = 168.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "Olivier Messiaen, writing about his Vingt regards sur l'enfant-Jésus (1944) wrote of the sixth piece of that collection, \"Par Lui tout a été fait\" (\"By Him were all things made\"):",
"title": "History"
},
{
"paragraph_id": 55,
"text": "It expresses the Creation of All Things: space, time, stars, planets – and the Countenance (or rather, the Thought) of God behind the flames and the seething – impossible even to speak of it, I have not attempted to describe it ... Instead, I have sheltered behind the form of the Fugue. Bach's Art of Fugue and the fugue from Beethoven's Opus 106 (the Hammerklavier sonata) have nothing to do with the academic fugue. Like those great models, this one is an anti-scholastic fugue.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "György Ligeti wrote a five-part double fugue for his Requiem's second movement, the Kyrie, in which each part (SMATB) is subdivided in four-voice \"bundles\" that make a canon. The melodic material in this fugue is totally chromatic, with melismatic (running) parts overlaid onto skipping intervals, and use of polyrhythm (multiple simultaneous subdivisions of the measure), blurring everything both harmonically and rhythmically so as to create an aural aggregate, thus highlighting the theoretical/aesthetic question of the next section as to whether fugue is a form or a texture. According to Tom Service, in this work, Ligeti",
"title": "History"
},
{
"paragraph_id": 57,
"text": "takes the logic of the fugal idea and creates something that's meticulously built on precise contrapuntal principles of imitation and fugality, but he expands them into a different region of musical experience. Ligeti doesn't want us to hear individual entries of the subject or any subject, or to allow us access to the labyrinth through listening in to individual lines… He creates instead a vastly dense texture of voices in his choir and orchestra, a huge stratified slab of terrifying visionary power. Yet this is music that's made with a fine craft and detail of a Swiss clock maker. Ligeti's so-called 'micro-polyphony': the many voicedness of small intervals at small distances in time from one another is a kind of conjuring trick. At the micro level of the individual lines, and there are dozens and dozens of them in this music...there's an astonishing detail and finesse, but the overall macro effect is a huge overwhelming and singular experience.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "Benjamin Britten used a fugue in the final part of The Young Person's Guide to the Orchestra (1946). The Henry Purcell theme is triumphantly cited at the end, making it a choral fugue.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "Canadian pianist and musical thinker Glenn Gould composed So You Want to Write a Fugue?, a full-scale fugue set to a text that cleverly explicates its own musical form.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "Fugues (or fughettas/fugatos) have been incorporated into genres outside Western classical music. Several examples exist within jazz, such as Bach goes to Town, composed by the Welsh composer Alec Templeton and recorded by Benny Goodman in 1938, and Concorde composed by John Lewis and recorded by the Modern Jazz Quartet in 1955.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "In \"Fugue for Tinhorns\" from the Broadway musical Guys and Dolls, written by Frank Loesser, the characters Nicely-Nicely, Benny, and Rusty sing simultaneously about hot tips they each have in an upcoming horse race.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "In \"West Side Story\", the dance sequence following the song \"Cool\" is structured as a fugue. Interestingly, Leonard Bernstein quotes Beethoven's monumental \"Große Fuge\" for string quartet and employs Arnold Schoenberg's twelve tone technique, all in the context of a jazz infused Broadway show stopper.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "A few examples also exist within progressive rock, such as the central movement of \"The Endless Enigma\" by Emerson, Lake & Palmer and \"On Reflection\" by Gentle Giant.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "On their EP of the same name, Vulfpeck has a composition called \"Fugue State\", which incorporates a fugue-like section between Theo Katzman (guitar), Joe Dart (bass), and Woody Goss (Wurlitzer keyboard).",
"title": "History"
},
{
"paragraph_id": 65,
"text": "The composer Matyas Seiber included an atonal or twelve-tone fugue, for flute trumpet and string quartet, in his score for the 1953 film Graham Sutherland",
"title": "History"
},
{
"paragraph_id": 66,
"text": "The film composer John Williams includes a fugue in his score for the 1990 film, Home Alone, at the point where Kevin, accidentally left at home by his family, and realizing he is about to be attacked by a pair of bumbling burglars, begins to plan his elaborate defenses. Another fugue occurs at a similar point in the 1992 sequel film, Home Alone 2: Lost in New York.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "The jazz composer and film composer, Michel Legrand, includes a fugue as the climax of his score (a classical theme with variations, and fugue) for Joseph Losey's 1972 film The Go-Between, based on the 1953 novel by British novelist, L.P. Hartley, as well as several times in his score for Jacques Demy's 1970 film Peau d'âne.",
"title": "History"
},
{
"paragraph_id": 68,
"text": "A widespread view of the fugue is that it is not a musical form but rather a technique of composition.",
"title": "Discussion"
},
{
"paragraph_id": 69,
"text": "The Austrian musicologist Erwin Ratz argues that the formal organization of a fugue involves not only the arrangement of its theme and episodes, but also its harmonic structure. In particular, the exposition and coda tend to emphasize the tonic key, whereas the episodes usually explore more distant tonalities. Ratz stressed, however, that this is the core, underlying form (\"Urform\") of the fugue, from which individual fugues may deviate.",
"title": "Discussion"
},
{
"paragraph_id": 70,
"text": "Although certain related keys are more commonly explored in fugal development, the overall structure of a fugue does not limit its harmonic structure. For example, a fugue may not even explore the dominant, one of the most closely related keys to the tonic. Bach's Fugue in B♭ major from Book 1 of the Well Tempered Clavier explores the relative minor, the supertonic and the subdominant. This is unlike later forms such as the sonata, which clearly prescribes which keys are explored (typically the tonic and dominant in an ABA form). Then, many modern fugues dispense with traditional tonal harmonic scaffolding altogether, and either use serial (pitch-oriented) rules, or (as the Kyrie/Christe in György Ligeti's Requiem, Witold Lutosławski works), use panchromatic, or even denser, harmonic spectra.",
"title": "Discussion"
},
{
"paragraph_id": 71,
"text": "The fugue is the most complex of contrapuntal forms. In Ratz's words, \"fugal technique significantly burdens the shaping of musical ideas, and it was given only to the greatest geniuses, such as Bach and Beethoven, to breathe life into such an unwieldy form and make it the bearer of the highest thoughts.\" In presenting Bach's fugues as among the greatest of contrapuntal works, Peter Kivy points out that \"counterpoint itself, since time out of mind, has been associated in the thinking of musicians with the profound and the serious\" and argues that \"there seems to be some rational justification for their doing so.\"",
"title": "Discussion"
},
{
"paragraph_id": 72,
"text": "This is related to the idea that restrictions create freedom for the composer, by directing their efforts. He also points out that fugal writing has its roots in improvisation, and was, during the Renaissance, practiced as an improvisatory art. Writing in 1555, Nicola Vicentino, for example, suggests that:",
"title": "Discussion"
},
{
"paragraph_id": 73,
"text": "the composer, having completed the initial imitative entrances, take the passage which has served as accompaniment to the theme and make it the basis for new imitative treatment, so that \"he will always have material with which to compose without having to stop and reflect\". This formulation of the basic rule for fugal improvisation anticipates later sixteenth-century discussions which deal with the improvisational technique at the keyboard more extensively.",
"title": "Discussion"
}
]
| In classical music, a fugue is a contrapuntal, polyphonic compositional technique in two or more voices, built on a subject that is introduced at the beginning in imitation, which recurs frequently throughout the course of the composition. It is not to be confused with a fuguing tune, which is a style of song popularized by and mostly limited to early American music and West Gallery music. A fugue usually has three main sections: an exposition, a development and a final entry that contains the return of the subject in the fugue's tonic key. Fugues can also have episodes—parts of the fugue where new material is heard, based on the subject—a stretto, when the fugue's subject "overlaps" itself in different voices, or a recapitulation. A popular compositional technique in the Baroque era, the fugue was fundamental in showing mastery of harmony and tonality as it presented counterpoint. In the Middle Ages, the term was widely used to denote any works in canonic style; by the Renaissance, it had come to denote specifically imitative works. Since the 17th century, the term fugue has described what is commonly regarded as the most fully developed procedure of imitative counterpoint. Most fugues open with a short main theme, the subject, which then sounds successively in each voice; when each voice has completed the subject, the exposition is complete. This is often followed by a connecting passage, or episode, developed from previously heard material; further "entries" of the subject then are heard in related keys. Episodes and entries are usually alternated until the "final entry" of the subject, by which point the music has returned to the opening key, or tonic, which is often followed by closing material, the coda. In this sense, a fugue is a style of composition, rather than a fixed structure. The form evolved during the 18th century from several earlier types of contrapuntal compositions, such as imitative ricercars, capriccios, canzonas, and fantasias. The famous fugue composer Johann Sebastian Bach (1685–1750) shaped his own works after those of Jan Pieterszoon Sweelinck (1562-1621), Johann Jakob Froberger (1616–1667), Johann Pachelbel (1653–1706), Girolamo Frescobaldi (1583–1643), Dieterich Buxtehude and others. With the decline of sophisticated styles at the end of the baroque period, the fugue's central role waned, eventually giving way as sonata form and the symphony orchestra rose to a dominant position. Nevertheless, composers continued to write and study fugues for various purposes; they appear in the works of Wolfgang Amadeus Mozart (1756–1791) and Ludwig van Beethoven (1770–1827), as well as modern composers such as Dmitri Shostakovich (1906–1975) and Paul Hindemith (1895-1963). | 2001-08-16T19:31:48Z | 2023-11-08T01:56:52Z | [
"Template:Citation needed",
"Template:Counterpoint & polyphony",
"Template:Main",
"Template:Failed verification",
"Template:Blockquote",
"Template:Cite book",
"Template:Harvnb",
"Template:Use dmy dates",
"Template:Lang",
"Template:Quote",
"Template:Full citation needed",
"Template:YouTube",
"Template:IPAc-en",
"Template:EB1911 poster",
"Template:Music",
"Template:Cite web",
"Template:Dead link",
"Template:Wiktionary",
"Template:Cite AmCyc",
"Template:Clarify",
"Template:Webarchive",
"Template:Authority control",
"Template:Further",
"Template:ISBN",
"Template:Portal bar",
"Template:Cite Grove",
"Template:Short description",
"Template:Other uses",
"Template:TOC limit",
"Template:Who",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Fugue |
10,898 | Dissociative fugue | Dissociative fugue (/fjuːɡ/), formerly called a fugue state or psychogenic fugue, is a rare psychiatric phenomenon characterized by reversible amnesia for one's identity in conjunction with unexpected wandering or travel. This is sometimes accompanied by the establishment of a new identity and the inability to recall personal information prior to the presentation of symptoms. Dissociative fugue is a mental and behavioral disorder that is classified variously as a dissociative disorder, a conversion disorder, and a somatic symptom disorder. It is a facet of dissociative amnesia, according to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5).
After recovery from a fugue state, previous memories usually return intact, and further treatment is unnecessary. An episode of fugue is not characterized as attributable to a psychiatric disorder if it can be related to the ingestion of psychotropic substances, to physical trauma, to a general medical condition or to dissociative identity disorder, delirium, or dementia. Fugues are precipitated by a series of long-term traumatic episodes. It is most commonly associated with childhood victims of sexual abuse who learn to dissociate memory of the abuse (dissociative amnesia).
Symptoms of a dissociative fugue include mild confusion and, once the fugue ends, possible depression, grief, shame, and discomfort. People have also experienced post-fugue anger. Another symptom of the fugue state can be loss of one's identity.
Before dissociative fugue can be diagnosed, either dissociative amnesia or dissociative identity disorder must be diagnosed. The only difference between dissociative amnesia or dissociative identity disorder and dissociative fugue is that the person affected by the latter travels or wanders. This traveling or wandering is typically accompanied by amnesia for the person’s identity or physical surroundings.
Sometimes dissociative fugue cannot be diagnosed until people return to their pre-fugue identity and are distressed to find themselves in unfamiliar circumstances, sometimes with awareness of "lost time". The diagnosis is usually made retroactively when a doctor reviews the history and collects information that documents the circumstances before people left home, the travel itself, and the establishment of an alternative life.
Functional amnesia can also be situation-specific, varying from all forms and variations of traumas or generally violent experiences, with the person experiencing severe memory loss for a particular trauma. Committing homicide, experiencing or committing a violent crime such as rape or torture, experiencing combat violence, attempting suicide, and being in automobile accidents and natural disasters have all induced cases of situation-specific amnesia. In these unusual cases, care must be exercised in interpreting cases of psychogenic amnesia when there are compelling motives to feign memory deficits for legal or financial reasons. However, although some fraction of psychogenic amnesia cases can be explained in this fashion, it is generally acknowledged that true cases are not uncommon. Both global and situationally specific amnesia are often distinguished from the organic amnesic syndrome, in that the capacity to store new memories and experiences remains intact. Given the very delicate and oftentimes dramatic nature of memory loss in such cases, there usually is a concerted effort to help the person recover their identity and history. This will allow the subject to be recovered sometimes spontaneously when particular cues are encountered.
The cause of the fugue state is related to dissociative amnesia (Code 300.12 of the DSM-IV codes), which has several other subtypes: selective amnesia, generalized amnesia, continuous amnesia, and systematized amnesia, in addition to the subtype "dissociative fugue".
Unlike retrograde amnesia (which is popularly referred to simply as "amnesia", the state where someone forgets events before brain damage), dissociative amnesia is not due to the direct physiological effects of a substance (e.g., a drug of abuse, a medication, DSM-IV Codes 291.1 & 292.83) or a neurological or other general medical condition (e.g., amnestic disorder due to a head trauma, DSM-IV Code 294.0). It is a complex neuropsychological process.
As the person experiencing a dissociative fugue may have recently experienced the reappearance of an event or person representing an earlier life trauma, the emergence of an armoring or defensive personality seems to be, for some, a logical apprehension of the situation.
Therefore, the terminology "fugue state" may carry a slight linguistic distinction from "dissociative fugue", the former implying a greater degree of "motion". For the purposes of this article, then, a "fugue state" occurs while one is "acting out" a "dissociative fugue".
The DSM-IV defines "dissociative fugue" as:
The Merck Manual defines "dissociative fugue" as:
In support of this definition, the Merck Manual further defines dissociative amnesia as:
The DSM-IV-TR states that the fugue may have a duration from days to months, and recovery is usually rapid. However, some cases may be refractory. An individual usually has only one episode. | [
{
"paragraph_id": 0,
"text": "Dissociative fugue (/fjuːɡ/), formerly called a fugue state or psychogenic fugue, is a rare psychiatric phenomenon characterized by reversible amnesia for one's identity in conjunction with unexpected wandering or travel. This is sometimes accompanied by the establishment of a new identity and the inability to recall personal information prior to the presentation of symptoms. Dissociative fugue is a mental and behavioral disorder that is classified variously as a dissociative disorder, a conversion disorder, and a somatic symptom disorder. It is a facet of dissociative amnesia, according to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5).",
"title": ""
},
{
"paragraph_id": 1,
"text": "After recovery from a fugue state, previous memories usually return intact, and further treatment is unnecessary. An episode of fugue is not characterized as attributable to a psychiatric disorder if it can be related to the ingestion of psychotropic substances, to physical trauma, to a general medical condition or to dissociative identity disorder, delirium, or dementia. Fugues are precipitated by a series of long-term traumatic episodes. It is most commonly associated with childhood victims of sexual abuse who learn to dissociate memory of the abuse (dissociative amnesia).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Symptoms of a dissociative fugue include mild confusion and once the fugue ends, possible depression, grief, shame, and discomfort. People have also experienced a post-fugue anger. Another symptom of the fugue state can consist of loss of one's identity.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 3,
"text": "Before dissociative fugue can be diagnosed, either dissociative amnesia or dissociative identity disorder must be diagnosed. The only difference between dissociative amnesia or dissociative identity disorder and dissociative fugue, is that the person affected by the latter travels or wanders. This traveling or wandering is typically associated with the amnesia induced identity or of the person’s physical surroundings.",
"title": "Diagnosis"
},
{
"paragraph_id": 4,
"text": "Sometimes dissociative fugue cannot be diagnosed until people return to their pre-fugue identity and are distressed to find themselves in unfamiliar circumstances, sometimes with awareness of \"lost time\". The diagnosis is usually made retroactively when a doctor reviews the history and collects information that documents the circumstances before people left home, the travel itself, and the establishment of an alternative life.",
"title": "Diagnosis"
},
{
"paragraph_id": 5,
"text": "Functional amnesia can also be situation-specific, varying from all forms and variations of traumas or generally violent experiences, with the person experiencing severe memory loss for a particular trauma. Committing homicide, experiencing or committing a violent crime such as rape or torture, experiencing combat violence, attempting suicide, and being in automobile accidents and natural disasters have all induced cases of situation-specific amnesia. In these unusual cases, care must be exercised in interpreting cases of psychogenic amnesia when there are compelling motives to feign memory deficits for legal or financial reasons. However, although some fraction of psychogenic amnesia cases can be explained in this fashion, it is generally acknowledged that true cases are not uncommon. Both global and situationally specific amnesia are often distinguished from the organic amnesic syndrome, in that the capacity to store new memories and experiences remains intact. Given the very delicate and oftentimes dramatic nature of memory loss in such cases, there usually is a concerted effort to help the person recover their identity and history. This will allow the subject to be recovered sometimes spontaneously when particular cues are encountered.",
"title": "Diagnosis"
},
{
"paragraph_id": 6,
"text": "The cause of the fugue state is related to dissociative amnesia, (Code 300.12 of the DSM-IV codes) which has several other subtypes: selective amnesia, generalized amnesia, continuous amnesia, and systematized amnesia, in addition to the subtype \"dissociative fugue\".",
"title": "Diagnosis"
},
{
"paragraph_id": 7,
"text": "Unlike retrograde amnesia (which is popularly referred to simply as \"amnesia\", the state where someone forgets events before brain damage), dissociative amnesia is not due to the direct physiological effects of a substance (e.g., a drug of abuse, a medication, DSM-IV Codes 291.1 & 292.83) or a neurological or other general medical condition (e.g., amnestic disorder due to a head trauma, DSM-IV Code 294.0). It is a complex neuropsychological process.",
"title": "Diagnosis"
},
{
"paragraph_id": 8,
"text": "As the person experiencing a dissociative fugue may have recently experienced the reappearance of an event or person representing an earlier life trauma, the emergence of an armoring or defensive personality seems to be for some, a logical apprehension of the situation.",
"title": "Diagnosis"
},
{
"paragraph_id": 9,
"text": "Therefore, the terminology \"fugue state\" may carry a slight linguistic distinction from \"dissociative fugue\", the former implying a greater degree of \"motion\". For the purposes of this article, then, a \"fugue state\" occurs while one is \"acting out\" a \"dissociative fugue\".",
"title": "Diagnosis"
},
{
"paragraph_id": 10,
"text": "The DSM-IV defines \"dissociative fugue\" as:",
"title": "Diagnosis"
},
{
"paragraph_id": 11,
"text": "The Merck Manual defines \"dissociative fugue\" as:",
"title": "Diagnosis"
},
{
"paragraph_id": 12,
"text": "In support of this definition, the Merck Manual further defines dissociative amnesia as:",
"title": "Diagnosis"
},
{
"paragraph_id": 13,
"text": "The DSM-IV-TR states that the fugue may have a duration from days to months, and recovery is usually rapid. However, some cases may be refractory. An individual usually has only one episode.",
"title": "Prognosis"
}
]
| Dissociative fugue, formerly called a fugue state or psychogenic fugue, is a rare psychiatric phenomenon characterized by reversible amnesia for one's identity in conjunction with unexpected wandering or travel. This is sometimes accompanied by the establishment of a new identity and the inability to recall personal information prior to the presentation of symptoms. Dissociative fugue is a mental and behavioral disorder that is classified variously as a dissociative disorder, a conversion disorder, and a somatic symptom disorder. It is a facet of dissociative amnesia, according to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). After recovery from a fugue state, previous memories usually return intact, and further treatment is unnecessary. An episode of fugue is not characterized as attributable to a psychiatric disorder if it can be related to the ingestion of psychotropic substances, to physical trauma, to a general medical condition or to dissociative identity disorder, delirium, or dementia. Fugues are precipitated by a series of long-term traumatic episodes. It is most commonly associated with childhood victims of sexual abuse who learn to dissociate memory of the abuse. | 2002-02-25T15:51:15Z | 2023-12-13T09:06:27Z | [
"Template:Reflist",
"Template:Cite journal",
"Template:Infobox medical condition (new)",
"Template:As of",
"Template:Cite web",
"Template:Webarchive",
"Template:Cite magazine",
"Template:Mental and behavioral disorders",
"Template:Authority control",
"Template:Citation needed",
"Template:For",
"Template:IPAc-en",
"Template:More citations needed",
"Template:Cite book",
"Template:Cite news",
"Template:Short description",
"Template:Hair space",
"Template:Portal",
"Template:Medical resources",
"Template:Clarify"
]
| https://en.wikipedia.org/wiki/Dissociative_fugue |
10,902 | Force | In physics, a force is an influence that can cause an object to change its velocity, i.e., to accelerate, meaning a change in speed or direction, unless counterbalanced by other forces. The concept of force makes the everyday notion of pushing or pulling mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol F.
Force plays a central role in classical mechanics, figuring in all three of Newton's laws of motion, which specify that the force on an object with an unchanging mass is equal to the product of the object's mass and the acceleration that it undergoes. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part often applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In equilibrium these stresses cause no acceleration of the body as the forces balance one another. If these are not in equilibrium they can cause deformation of solid materials, or flow in fluids.
In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force, but the understanding of force provided by classical mechanics remains entirely satisfactory for many practical purposes.
Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved for over two hundred years.
By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction.
Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes who was especially famous for formulating a treatment of buoyant forces inherent in fluids.
Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion.
Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics.
In the early 17th century, before Newton's Principia, the term "force" (Latin: vis) was applied to many physical and non-physical phenomena, e.g., for an acceleration of a point. The product of a point mass and the square of its velocity was named vis viva (live force) by Leibniz. The modern concept of force corresponds to Newton's vis motrix (accelerating force).
Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, Philosophiæ Naturalis Principia Mathematica. In this work Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches.
Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion. An observer moving in tandem with an object will see it as being at rest. So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so.
According to the first law, motion at constant speed in a straight line does not need a cause. It is change in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion. Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.
A modern statement of Newton's second law is a vector equation: $\vec{F} = \frac{d\vec{p}}{dt},$ where $\vec{p}$ is the momentum of the system, and $\vec{F}$ is the net (vector sum) force. If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time.
In common engineering applications the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum, $\vec{F} = \frac{d\vec{p}}{dt} = \frac{d(m\vec{v})}{dt},$ where $m$ is the mass and $\vec{v}$ is the velocity.
If Newton's second law is applied to a system of constant mass, $m$ may be moved outside the derivative operator. The equation then becomes $\vec{F} = m\frac{d\vec{v}}{dt}.$
By substituting the definition of acceleration, the algebraic version of Newton's second law is derived: $\vec{F} = m\vec{a}.$
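As a minimal numerical illustration of the constant-mass form above (not part of the original article; the mass, force, initial conditions, and time step are assumed values), the following Python sketch computes the acceleration from the net force and advances velocity and position through one explicit Euler step.
# Hedged sketch: F = m*a for a constant-mass body, with arbitrary illustrative values.
mass = 2.0                         # kg (assumed)
net_force = 10.0                   # N, along a single axis (assumed)
acceleration = net_force / mass    # a = F / m  ->  5.0 m/s^2
# One explicit Euler time step: update velocity, then position.
dt = 0.1                           # s (assumed)
velocity = 3.0                     # m/s, initial value (assumed)
position = 0.0                     # m, initial value (assumed)
velocity += acceleration * dt      # 3.5 m/s
position += velocity * dt          # 0.35 m
print(acceleration, velocity, position)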
Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if $\vec{F}_{1,2}$ is the force of body 1 on body 2 and $\vec{F}_{2,1}$ that of body 2 on body 1, then $\vec{F}_{1,2} = -\vec{F}_{2,1}.$
This law is sometimes referred to as the action-reaction law, with $\vec{F}_{1,2}$ called the action and $-\vec{F}_{2,1}$ the reaction.
Newton's Third Law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body.
In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero: $\vec{F}_{1,2} + \vec{F}_{2,1} = 0.$
More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system.
Combining Newton's Second and Third Laws, it is possible to show that the linear momentum of any closed system is conserved. In a system of two particles, if $\vec{p}_1$ is the momentum of object 1 and $\vec{p}_2$ the momentum of object 2, then $\frac{d\vec{p}_1}{dt} + \frac{d\vec{p}_2}{dt} = \vec{F}_{2,1} + \vec{F}_{1,2} = 0,$ where the last equality follows from the third law.
Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.
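A short, hedged Python check of this conservation statement (the masses, velocities, force, and time step below are made-up values, not data from the article): two bodies exert equal and opposite forces on each other for one time step, and the total momentum before and after is compared.
# Two bodies interacting only with each other form a closed system (illustrative values).
m1, v1 = 1.0, 2.0       # kg, m/s (assumed)
m2, v2 = 3.0, -1.0      # kg, m/s (assumed)
f_on_2 = -4.0           # N, force of body 1 on body 2 (assumed)
f_on_1 = -f_on_2        # Newton's third law: equal magnitude, opposite direction
dt = 0.01               # s (assumed)
p_before = m1 * v1 + m2 * v2
v1 += (f_on_1 / m1) * dt
v2 += (f_on_2 / m2) * dt
p_after = m1 * v1 + m2 * v2
print(abs(p_after - p_before) < 1e-9)   # True: the internal forces cancel in the total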
Some textbooks use Newton's second law as a definition of force. However, for the equation $\vec{F} = m\vec{a}$ for a constant mass $m$ to then have any predictive content, it must be combined with further information. Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference. The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways, which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll.
Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous.
Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action.
Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force.
As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two.
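A short sketch of the parallelogram rule and of resolution into orthogonal components, using NumPy arrays for the vectors; the particular forces are invented for illustration.

    import numpy as np

    # Two forces acting on the same point particle, in newtons.
    f1 = np.array([3.0, 0.0])          # 3 N pointing east
    f2 = np.array([0.0, 4.0])          # 4 N pointing north

    resultant = f1 + f2                # the parallelogram rule is just vector addition
    magnitude = np.linalg.norm(resultant)
    direction = np.degrees(np.arctan2(resultant[1], resultant[0]))

    print(resultant)   # [3. 4.]  (the original components are recovered by resolution)
    print(magnitude)   # 5.0 N
    print(direction)   # ~53.13 degrees north of east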
When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium. Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in static equilibrium with respect to a frame of reference if it is at rest and not accelerating, whereas a body in dynamic equilibrium is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa.
Static equilibrium was understood well before the invention of classical mechanics. Objects that are at rest have zero net force acting on them.
The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration.
Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the surface. For a situation with no movement, the static friction force exactly balances the applied force, resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object.
A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force", which equals the object's weight. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his Three Laws of Motion.
Dynamic equilibrium was first described by Galileo, who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" could not exist. Galileo concluded that motion at a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity.
Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.
Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, each solid object is considered a rigid body.
What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as g → {\displaystyle {\vec {g}}} and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of m {\displaystyle m} will experience a force:
For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward.
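A minimal sketch of the weight calculation F = m g near the Earth's surface, assuming the standard value g ≈ 9.81 m/s² quoted above; the 70 kg mass is an arbitrary example.

    G_SURFACE = 9.81          # m/s^2, magnitude of g near the Earth's surface

    def weight(mass_kg):
        """Gravitational force (N) on a mass at the Earth's surface."""
        return mass_kg * G_SURFACE

    # A 70 kg person standing on the ground: the ground supplies an equal
    # and opposite normal force, so the net force (and acceleration) is zero.
    w = weight(70.0)
    normal_force = -w
    print(w, w + normal_force)   # about 686.7 N downward, 0.0 net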
Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion.
Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass ( m ⊕ {\displaystyle m_{\oplus }} ) and the radius ( R ⊕ {\displaystyle R_{\oplus }} ) of the Earth to the gravitational acceleration:
where the vector direction is given by r ^ {\displaystyle {\hat {r}}} , the unit vector directed outward from the center of the Earth.
In this equation, a dimensional constant G {\displaystyle G} is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of G {\displaystyle G} using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing G {\displaystyle G} could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass m 1 {\displaystyle m_{1}} due to the gravitational pull of mass m 2 {\displaystyle m_{2}} is
where r {\displaystyle r} is the distance between the two objects' centers of mass and r ^ {\displaystyle {\hat {r}}} is the unit vector pointed in the direction away from the center of the first object toward the center of the second object.
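The following sketch evaluates Newton's law of gravitation numerically; the values of G and of the Earth's mass and radius are standard reference figures, used here only to recover the familiar surface value of g from G M / R².

    G = 6.674e-11        # m^3 kg^-1 s^-2, Newtonian constant of gravitation
    M_EARTH = 5.972e24   # kg
    R_EARTH = 6.371e6    # m

    def gravitational_force(m1, m2, r):
        """Magnitude (N) of the attraction between two point masses a distance r apart."""
        return G * m1 * m2 / r**2

    # Force on a 1 kg test mass at the Earth's surface, numerically equal to g.
    print(gravitational_force(1.0, M_EARTH, R_EARTH))   # ~9.8 N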
This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed.
The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges. The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement.
Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force. Thus the electric field anywhere in space is defined as
where q {\displaystyle q} is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge q {\displaystyle q} due to electric and magnetic fields:
where F → {\displaystyle {\vec {F}}} is the electromagnetic force, E → {\displaystyle {\vec {E}}} is the electric field at the body's location, B → {\displaystyle {\vec {B}}} is the magnetic field, and v → {\displaystyle {\vec {v}}} is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.
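A minimal sketch of the Lorentz force law in three dimensions, with NumPy supplying the cross product; the field values, charge, and velocity below are arbitrary illustrative numbers.

    import numpy as np

    def lorentz_force(q, E, B, v):
        """F = q (E + v x B), all quantities in SI units."""
        return q * (E + np.cross(v, B))

    q = 1.602e-19                        # C, elementary charge
    E = np.array([0.0, 0.0, 1.0e3])      # V/m
    B = np.array([0.0, 0.5, 0.0])        # T
    v = np.array([1.0e5, 0.0, 0.0])      # m/s

    print(lorentz_force(q, E, B, v))     # electric plus magnetic contribution, in newtons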
The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum.
When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects. The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface.
Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction.
The static friction force ( F s f {\displaystyle F_{\mathrm {sf} }} ) will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction ( μ s f {\displaystyle \mu _{\mathrm {sf} }} ) multiplied by the normal force ( F N {\displaystyle F_{\text{N}}} ). In other words, the magnitude of the static friction force satisfies the inequality:
The kinetic friction force ( F k f {\displaystyle F_{\mathrm {kf} }} ) is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals:
where μ k f {\displaystyle \mu _{\mathrm {kf} }} is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction.
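A small sketch of this friction model; the coefficients and forces are made-up example values, and the function simply decides whether the object remains stuck or slides.

    MU_STATIC = 0.6      # example coefficient of static friction
    MU_KINETIC = 0.4     # example coefficient of kinetic friction (typically smaller)

    def friction_response(applied_force, normal_force):
        """Return (moves?, friction force opposing the applied force)."""
        max_static = MU_STATIC * normal_force
        if abs(applied_force) <= max_static:
            # Static friction exactly cancels the applied force; no motion.
            return False, -applied_force
        # Once sliding starts, kinetic friction has a roughly constant magnitude.
        kinetic = MU_KINETIC * normal_force
        return True, -kinetic if applied_force > 0 else kinetic

    print(friction_response(30.0, 100.0))   # (False, -30.0): still stuck
    print(friction_response(80.0, 100.0))   # (True, -40.0): sliding, kinetic friction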
Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.
A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If Δ x {\displaystyle \Delta x} is the displacement, the force exerted by an ideal spring equals:
where k {\displaystyle k} is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load.
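A minimal sketch of Hooke's law F = -k Δx; the spring constant is an arbitrary example value.

    def spring_force(k, displacement):
        """Hooke's law: restoring force (N) for a displacement from equilibrium (m)."""
        return -k * displacement

    k = 200.0                      # N/m, example spring constant
    print(spring_force(k, 0.05))   # -10.0 N: a stretched spring pulls back toward equilibrium
    print(spring_force(k, -0.05))  # +10.0 N: a compressed spring pushes back out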
For an object in uniform circular motion, the net force acting on the object equals:
where m {\displaystyle m} is the mass of the object, v {\displaystyle v} is the velocity of the object, r {\displaystyle r} is the distance to the center of the circular path, and r ^ {\displaystyle {\hat {r}}} is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.
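A small sketch evaluating the magnitude m v² / r of the net inward force for uniform circular motion; the numbers model, say, a 1,000 kg car rounding a 50 m curve at 20 m/s (illustrative values only).

    def centripetal_force(mass, speed, radius):
        """Magnitude (N) of the net inward force for uniform circular motion."""
        return mass * speed**2 / radius

    # Example: 1000 kg car, 20 m/s, 50 m radius curve.
    print(centripetal_force(1000.0, 20.0, 50.0))   # 8000.0 N directed toward the center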
Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows:
where V {\displaystyle V} is the volume of the object in the fluid and P {\displaystyle P} is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.
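As one concrete consequence of these pressure differences, the following sketch estimates the buoyant force on a fully submerged object via Archimedes' principle; the fluid density, volume, and value of g are illustrative.

    RHO_WATER = 1000.0   # kg/m^3, density of water (illustrative fluid)
    G = 9.81             # m/s^2

    def buoyant_force(volume_m3, fluid_density=RHO_WATER):
        """Upward force (N) on a fully submerged body, arising from the pressure gradient."""
        return fluid_density * volume_m3 * G

    print(buoyant_force(0.002))   # ~19.6 N on a 2-litre submerged volume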
A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction:
where:
More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as
where A {\displaystyle A} is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including also tensile stresses and compressions.
There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating. Because these forces are not genuine they are also referred to as "pseudo forces".
In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry.
Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force F → {\displaystyle {\vec {F}}} is defined relative to an arbitrary reference point as the cross product:
where r → {\displaystyle {\vec {r}}} is the position vector of the force application point relative to the reference point.
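A short sketch of the torque definition as a cross product, again with NumPy and invented example vectors.

    import numpy as np

    def torque(r, F):
        """Torque (N*m) of a force F applied at position r relative to the reference point."""
        return np.cross(r, F)

    r = np.array([0.5, 0.0, 0.0])    # m, lever arm along x
    F = np.array([0.0, 10.0, 0.0])   # N, force along y

    print(torque(r, F))              # [0. 0. 5.]: 5 N*m about the z axis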
Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body:
where
This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.
Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque:
where L → {\displaystyle {\vec {L}}} is the angular momentum of the particle.
Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques.
Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse:
which by Newton's Second Law must be equivalent to the change in momentum (yielding the impulse-momentum theorem).
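A minimal sketch of the impulse integral for a force applied over a short interval, approximated as a sum; the force profile and duration are invented for illustration.

    # Impulse J = integral of F dt; for a constant-mass body it equals the change
    # in momentum m * (v_final - v_initial).
    def impulse(force_of_t, t_start, t_end, steps=10_000):
        dt = (t_end - t_start) / steps
        return sum(force_of_t(t_start + (i + 0.5) * dt) * dt for i in range(steps))

    # Example: a constant 50 N force applied for 0.2 s.
    J = impulse(lambda t: 50.0, 0.0, 0.2)
    print(J)             # 10.0 N*s, so a 2 kg body would gain 5 m/s of speed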
Similarly, integrating with respect to position gives a definition for the work done by a force:
which is equivalent to changes in kinetic energy (yielding the work-energy theorem).
Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change d x → {\displaystyle d{\vec {x}}} in a time interval dt:
so
with v → = d x → / d t {\displaystyle {\vec {v}}=\mathrm {d} {\vec {x}}/\mathrm {d} t} the velocity.
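A sketch of these kinematic integrals: the work integral is approximated numerically for a position-dependent force along a straight path, and the instantaneous power is then F·v; the force profile, distance, and velocity are illustrative.

    # Work W = integral of F dx, approximated by a midpoint Riemann sum.
    def work_done(force_of_x, x_start, x_end, steps=10_000):
        dx = (x_end - x_start) / steps
        return sum(force_of_x(x_start + (i + 0.5) * dx) * dx for i in range(steps))

    # Example: a position-dependent force F(x) = 3x newtons over 0..2 m.
    W = work_done(lambda x: 3.0 * x, 0.0, 2.0)
    print(W)                  # ~6.0 J, matching the analytic result (3/2) x^2 at x = 2

    # Instantaneous power P = F . v for a constant 10 N force and a 2 m/s velocity.
    print(10.0 * 2.0)         # 20.0 W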
Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field U ( r → ) {\displaystyle U({\vec {r}})} is defined as that field whose gradient is equal and opposite to the force produced at every point:
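A one-dimensional sketch of recovering a force from a potential by numerical differentiation, F = -dU/dx, using a spring potential U = ½ k x² as the assumed example.

    def force_from_potential(U, x, h=1e-6):
        """Central-difference estimate of F(x) = -dU/dx."""
        return -(U(x + h) - U(x - h)) / (2 * h)

    def U_spring(x):
        """Potential energy (J) of an ideal spring with k = 200 N/m (example value)."""
        return 0.5 * 200.0 * x**2

    print(force_from_potential(U_spring, 0.05))   # ~-10.0 N, agreeing with Hooke's law -k*x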
Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.
A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.
Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector r → {\displaystyle {\vec {r}}} emanating from spherically symmetric potentials. Examples of this follow:
For gravity:
where G {\displaystyle G} is the gravitational constant, and m n {\displaystyle m_{n}} is the mass of object n.
For electrostatic forces:
where ε 0 {\displaystyle \varepsilon _{0}} is electric permittivity of free space, and q n {\displaystyle q_{n}} is the electric charge of object n.
For spring forces:
where k {\displaystyle k} is the spring constant.
For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due to a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces is the net result of the gradients of microscopic potentials.
The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.
The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s⁻². The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s⁻². A newton is thus equal to 100,000 dynes.
The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s⁻². The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot-pound-second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared.
The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond) is the force exerted by standard gravity on one kilogram of mass. The kilogram-force leads to an alternate, but rarely used, unit of mass: the metric slug (sometimes mug or hyl) is the mass that accelerates at 1 m·s⁻² when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system and is generally deprecated, although it is still sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque.
At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms. As discussed below, relativity alters the definition of momentum and quantum mechanics reuses the concept of "force" in microscopic contexts where Newton's laws do not apply directly.
In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's Second Law,
remains valid because it is a mathematical definition. But for momentum to be conserved at relativistic relative velocity, v {\displaystyle v} , momentum must be redefined as:
where m 0 {\displaystyle m_{0}} is the rest mass and c {\displaystyle c} the speed of light.
The expression relating force and acceleration for a particle with constant non-zero rest mass m {\displaystyle m} moving in the x {\displaystyle x} direction at velocity v {\displaystyle v} is:
where
is called the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, an ever greater force must be applied to produce the same acceleration at extreme velocities. The relative velocity cannot reach c {\displaystyle c} . If v {\displaystyle v} is very small compared to c {\displaystyle c} , then γ {\displaystyle \gamma } is very close to 1 and
is a close approximation. Even for use in relativity, one can restore the form of
through the use of four-vectors. This relation is correct in relativity when F μ {\displaystyle F^{\mu }} is the four-force, m {\displaystyle m} is the invariant mass, and A μ {\displaystyle A^{\mu }} is the four-acceleration.
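A minimal sketch of the relativistic momentum and the Lorentz factor, showing how γ (and hence the force needed for a given change in velocity) grows as v approaches c; the speeds are illustrative fractions of c and the rest mass is set to 1 kg.

    C = 2.998e8   # m/s, speed of light

    def lorentz_factor(v):
        """gamma = 1 / sqrt(1 - v^2 / c^2)."""
        return 1.0 / (1.0 - (v / C) ** 2) ** 0.5

    def relativistic_momentum(rest_mass, v):
        """p = gamma * m0 * v."""
        return lorentz_factor(v) * rest_mass * v

    for fraction in (0.01, 0.5, 0.9, 0.99):
        v = fraction * C
        print(fraction, lorentz_factor(v), relativistic_momentum(1.0, v))
    # gamma is ~1 at everyday speeds but diverges as v -> c,
    # so ever more force is needed for the same change in velocity.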
The general theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below.
Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.
In quantum mechanics, interactions are typically described in terms of energy rather than force. The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance.
Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force gives atoms, molecules, liquids, and solids stability.
In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".
While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force.
All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.
The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation.
Newton's law of gravitation is an example of action at a distance: one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is instantaneous. According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact.
Since then, general relativity has been acknowledged as the theory that best explains gravity. In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path. Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with the radius of curvature of the order of few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force".
Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (or QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force.
There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons.
The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts directly upon elementary particles. A residual is observed between hadrons (notably, the nucleons in atomic nuclei), known as the nuclear force. Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.
Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or "vertices": charged current, involving the electrically charged W⁺ and W⁻ bosons, and neutral current, involving electrically neutral Z bosons. The most familiar effect of weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity. This is a type of charged-current interaction. The word "weak" derives from the fact that the field strength is some 10¹³ times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately 10¹⁵ K. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.
"title": "Combining forces"
},
{
"paragraph_id": 36,
"text": "Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.",
"title": "Combining forces"
},
{
"paragraph_id": 37,
"text": "Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight. For example, each solid object is considered a rigid body.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 38,
"text": "What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as g → {\\displaystyle {\\vec {g}}} and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of m {\\displaystyle m} will experience a force:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 39,
"text": "For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 40,
"text": "Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 41,
"text": "Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass ( m ⊕ {\\displaystyle m_{\\oplus }} ) and the radius ( R ⊕ {\\displaystyle R_{\\oplus }} ) of the Earth to the gravitational acceleration:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 42,
"text": "where the vector direction is given by r ^ {\\displaystyle {\\hat {r}}} , is the unit vector directed outward from the center of the Earth.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 43,
"text": "In this equation, a dimensional constant G {\\displaystyle G} is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of G {\\displaystyle G} using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing G {\\displaystyle G} could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass m 1 {\\displaystyle m_{1}} due to the gravitational pull of mass m 2 {\\displaystyle m_{2}} is",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 44,
"text": "where r {\\displaystyle r} is the distance between the two objects' centers of mass and r ^ {\\displaystyle {\\hat {r}}} is the unit vector pointed in the direction away from the center of the first object toward the center of the second object.",
"title": "Examples of forces in classical mechanics "
},
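As a rough numerical check of the inverse-square law quoted in the two paragraphs above, the sketch below (an illustrative addition, not from the source article) computes the surface acceleration g from assumed standard reference values of G, the Earth's mass and the Earth's radius; these constants are not given in the excerpt.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (assumed standard value)
M_EARTH = 5.972e24   # mass of the Earth, kg (assumed standard value)
R_EARTH = 6.371e6    # mean radius of the Earth, m (assumed standard value)

def gravitational_force(m1, m2, r):
    """Magnitude of the Newtonian attraction between two masses a distance r apart."""
    return G * m1 * m2 / r**2

g = G * M_EARTH / R_EARTH**2               # surface acceleration, g = G M / R^2
print(round(g, 2))                          # about 9.82 m/s^2, close to the quoted 9.81

print(round(gravitational_force(70.0, M_EARTH, R_EARTH), 1))  # weight of a 70 kg person, ~687 N
```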
{
"paragraph_id": 45,
"text": "This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 46,
"text": "The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges. The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 47,
"text": "Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical \"test charge\" anywhere in space and then using Coulomb's Law to determine the electrostatic force. Thus the electric field anywhere in space is defined as",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 48,
"text": "where q {\\displaystyle q} is the magnitude of the hypothetical test charge. Similarly, the idea of the magnetic field was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge q {\\displaystyle q} due to electric and magnetic fields:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 49,
"text": "where F → {\\displaystyle {\\vec {F}}} is the electromagnetic force, E → {\\displaystyle {\\vec {E}}} is the electric field at the body's location, B → {\\displaystyle {\\vec {B}}} is the magnetic field, and v → {\\displaystyle {\\vec {v}}} is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.",
"title": "Examples of forces in classical mechanics "
},
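The Lorentz force law summarized above is straightforward to evaluate directly. The following sketch is illustrative only (not part of the extracted article); the elementary-charge value and the field and velocity vectors are arbitrary example inputs.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, E, v, B):
    """F = q (E + v x B), with E, v, B given as 3-tuples in SI units."""
    v_cross_B = cross(v, B)
    return tuple(q * (E[i] + v_cross_B[i]) for i in range(3))

q = 1.602e-19                      # C, elementary charge (assumed reference value)
E = (0.0, 0.0, 0.0)                # no electric field in this example
v = (1.0e5, 0.0, 0.0)              # velocity along +x, m/s
B = (0.0, 0.0, 0.01)               # magnetic field along +z, T
print(lorentz_force(q, E, v, B))   # force along -y, perpendicular to both v and B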
{
"paragraph_id": 50,
"text": "The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. These \"Maxwell's equations\" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be \"self-generating\" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 51,
"text": "When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects. The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 52,
"text": "Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 53,
"text": "The static friction force ( F s f {\\displaystyle F_{\\mathrm {sf} }} ) will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction ( μ s f {\\displaystyle \\mu _{\\mathrm {sf} }} ) multiplied by the normal force ( F N {\\displaystyle F_{\\text{N}}} ). In other words, the magnitude of the static friction force satisfies the inequality:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 54,
"text": "The kinetic friction force ( F k f {\\displaystyle F_{\\mathrm {kf} }} ) is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 55,
"text": "where μ k f {\\displaystyle \\mu _{\\mathrm {kf} }} is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction.",
"title": "Examples of forces in classical mechanics "
},
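The static/kinetic friction model described in the three paragraphs above can be sketched as a small function. This is an illustrative addition; the coefficients and the 10 kg block are arbitrary example values, not taken from the source.

```python
def friction_force(applied, normal, mu_static=0.6, mu_kinetic=0.4):
    """Return (friction_magnitude, sliding) for a block pushed parallel to a surface.

    Static friction matches the applied force up to mu_static * normal; beyond that
    limit the block slides and kinetic friction mu_kinetic * normal applies.
    """
    if applied <= mu_static * normal:
        return applied, False           # static regime: friction exactly balances the push
    return mu_kinetic * normal, True    # kinetic regime: roughly independent of the push

normal = 10.0 * 9.81                    # normal force on a 10 kg block on a level surface, N
for push in (20.0, 58.0, 60.0, 80.0):
    print(push, friction_force(push, normal))
```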
{
"paragraph_id": 56,
"text": "Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 57,
"text": "A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If Δ x {\\displaystyle \\Delta x} is the displacement, the force exerted by an ideal spring equals:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 58,
"text": "where k {\\displaystyle k} is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load.",
"title": "Examples of forces in classical mechanics "
},
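A one-line encoding of Hooke's law as stated above, added for illustration; the spring constant and displacements are arbitrary example values.

```python
def spring_force(k, dx):
    """Hooke's law: restoring force for a displacement dx from the equilibrium length."""
    return -k * dx

k = 250.0                         # spring constant, N/m (example value)
for dx in (-0.02, 0.0, 0.05):     # compressions and extensions, m
    print(dx, spring_force(k, dx))   # the sign shows the force opposing the displacement
```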
{
"paragraph_id": 59,
"text": "For an object in uniform circular motion, the net force acting on the object equals:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 60,
"text": "where m {\\displaystyle m} is the mass of the object, v {\\displaystyle v} is the velocity of the object and r {\\displaystyle r} is the distance to the center of the circular path and r ^ {\\displaystyle {\\hat {r}}} is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.",
"title": "Examples of forces in classical mechanics "
},
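The centripetal-force magnitude m v^2 / r described above, evaluated for an arbitrary example (a 1200 kg car rounding a 50 m curve at 15 m/s); an illustrative sketch, not from the source article.

```python
def centripetal_force(m, v, r):
    """Magnitude of the net inward force required for uniform circular motion."""
    return m * v**2 / r

print(centripetal_force(1200.0, 15.0, 50.0))   # 5400.0 N, directed toward the center of the curve
```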
{
"paragraph_id": 61,
"text": "Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 62,
"text": "where V {\\displaystyle V} is the volume of the object in the fluid and P {\\displaystyle P} is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 63,
"text": "A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called \"Stokes' drag\" the force is approximately proportional to the velocity, but opposite in direction:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 64,
"text": "where:",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 65,
"text": "More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 66,
"text": "where A {\\displaystyle A} is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including also tensile stresses and compressions.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 67,
"text": "There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating. Because these forces are not genuine they are also referred to as \"pseudo forces\".",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 68,
"text": "In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry.",
"title": "Examples of forces in classical mechanics "
},
{
"paragraph_id": 69,
"text": "Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force F → {\\displaystyle {\\vec {F}}} is defined relative to an arbitrary reference point as the cross product:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 70,
"text": "where r → {\\displaystyle {\\vec {r}}} is the position vector of the force application point relative to the reference point.",
"title": "Concepts derived from force"
},
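Torque as the cross product r x F, as defined in the two paragraphs above; an illustrative sketch with an arbitrary lever arm and force, not part of the source article.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def torque(r, F):
    """Torque of a force F applied at position r relative to the reference point."""
    return cross(r, F)

# A 10 N force along +y applied 0.3 m along +x from the pivot (example values).
print(torque((0.3, 0.0, 0.0), (0.0, 10.0, 0.0)))   # (0.0, 0.0, 3.0) N·m about the +z axis
```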
{
"paragraph_id": 71,
"text": "Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 72,
"text": "where",
"title": "Concepts derived from force"
},
{
"paragraph_id": 73,
"text": "This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 74,
"text": "Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 75,
"text": "where L → {\\displaystyle {\\vec {L}}} is the angular momentum of the particle.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 76,
"text": "Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 77,
"text": "Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 78,
"text": "which by Newton's Second Law must be equivalent to the change in momentum (yielding the Impulse momentum theorem).",
"title": "Concepts derived from force"
},
{
"paragraph_id": 79,
"text": "Similarly, integrating with respect to position gives a definition for the work done by a force:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 80,
"text": "which is equivalent to changes in kinetic energy (yielding the work energy theorem).",
"title": "Concepts derived from force"
},
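For a constant force, the impulse-momentum and work-energy relations mentioned in the paragraphs above reduce to simple products. The sketch below is a small numerical consistency check with arbitrary example values, added for illustration only.

```python
m = 2.0                     # mass, kg (example value)
F = 6.0                     # constant applied force, N (example value)
t = 4.0                     # duration of the push, s

a = F / m                   # acceleration from Newton's second law
v = a * t                   # final speed starting from rest
d = 0.5 * a * t**2          # distance covered starting from rest

print(F * t, m * v)             # impulse vs. change in momentum: 24.0 24.0
print(F * d, 0.5 * m * v**2)    # work vs. change in kinetic energy: 144.0 144.0
```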
{
"paragraph_id": 81,
"text": "Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change d x → {\\displaystyle d{\\vec {x}}} in a time interval dt:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 82,
"text": "so",
"title": "Concepts derived from force"
},
{
"paragraph_id": 83,
"text": "with v → = d x → / d t {\\displaystyle {\\vec {v}}=\\mathrm {d} {\\vec {x}}/\\mathrm {d} t} the velocity.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 84,
"text": "Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field U ( r → ) {\\displaystyle U({\\vec {r}})} is defined as that field whose gradient is equal and opposite to the force produced at every point:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 85,
"text": "Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 86,
"text": "A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 87,
"text": "Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector r → {\\displaystyle {\\vec {r}}} emanating from spherically symmetric potentials. Examples of this follow:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 88,
"text": "For gravity:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 89,
"text": "where G {\\displaystyle G} is the gravitational constant, and m n {\\displaystyle m_{n}} is the mass of object n.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 90,
"text": "For electrostatic forces:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 91,
"text": "where ε 0 {\\displaystyle \\varepsilon _{0}} is electric permittivity of free space, and q n {\\displaystyle q_{n}} is the electric charge of object n.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 92,
"text": "For spring forces:",
"title": "Concepts derived from force"
},
{
"paragraph_id": 93,
"text": "where k {\\displaystyle k} is the spring constant.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 94,
"text": "For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces are the net results of the gradients of microscopic potentials.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 95,
"text": "The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.",
"title": "Concepts derived from force"
},
{
"paragraph_id": 96,
"text": "The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s.The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s. A newton is thus equal to 100,000 dynes.",
"title": "Units"
},
{
"paragraph_id": 97,
"text": "The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s. The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared.",
"title": "Units"
},
{
"paragraph_id": 98,
"text": "The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond), is the force exerted by standard gravity on one kilogram of mass. The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system, and is generally deprecated, sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque.",
"title": "Units"
},
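The unit definitions in the three paragraphs above imply fixed conversion factors. The sketch below is illustrative only; it uses the standard gravity value quoted in the text, and the avoirdupois pound of 0.45359237 kg is an assumed reference value not stated in the excerpt.

```python
G_STANDARD = 9.80665            # standard gravity, m/s^2 (quoted above)
POUND_MASS_KG = 0.45359237      # avoirdupois pound, kg (assumed reference value)

newton_in_dynes = 1.0e5                         # 1 N = 10^5 g·cm/s^2
pound_force_in_newtons = POUND_MASS_KG * G_STANDARD
kilogram_force_in_newtons = 1.0 * G_STANDARD

print(newton_in_dynes)                          # 100000.0 dyn per newton
print(round(pound_force_in_newtons, 5))         # about 4.44822 N per lbf
print(kilogram_force_in_newtons)                # 9.80665 N per kgf
```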
{
"paragraph_id": 99,
"text": "At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms. As discussed below, relativity alters the definition of momentum and quantum mechanics reuses the concept of \"force\" in microscopic contexts where Newton's laws do not apply directly.",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 100,
"text": "In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's Second Law,",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 101,
"text": "remains valid because it is a mathematical definition. But for momentum to be conserved at relativistic relative velocity, v {\\displaystyle v} , momentum must be redefined as:",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 102,
"text": "where m 0 {\\displaystyle m_{0}} is the rest mass and c {\\displaystyle c} the speed of light.",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 103,
"text": "The expression relating force and acceleration for a particle with constant non-zero rest mass m {\\displaystyle m} moving in the x {\\displaystyle x} direction at velocity v {\\displaystyle v} is:",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 104,
"text": "where",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 105,
"text": "is called the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, the greater and greater force must be applied to produce the same acceleration at extreme velocity. The relative velocity cannot reach c {\\displaystyle c} . If v {\\displaystyle v} is very small compared to c {\\displaystyle c} , then γ {\\displaystyle \\gamma } is very close to 1 and",
"title": "Revisions of the force concept"
},
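The steep growth of the Lorentz factor described above can be tabulated directly. This sketch is an illustrative addition, not from the source article; the electron-sized rest mass is an arbitrary example value.

```python
import math

C = 299_792_458.0    # speed of light, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2) for a speed v below c."""
    return 1.0 / math.sqrt(1.0 - (v / C)**2)

def relativistic_momentum(m0, v):
    """p = gamma * m0 * v for rest mass m0."""
    return lorentz_factor(v) * m0 * v

m0 = 9.109e-31       # kg, roughly an electron rest mass (example value)
for frac in (0.01, 0.5, 0.9, 0.99):
    v = frac * C
    print(f"v = {frac:.2f} c   gamma = {lorentz_factor(v):7.3f}   "
          f"p = {relativistic_momentum(m0, v):.3e} kg·m/s")
```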
{
"paragraph_id": 106,
"text": "is a close approximation. Even for use in relativity, one can restore the form of",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 107,
"text": "through the use of four-vectors. This relation is correct in relativity when F μ {\\displaystyle F^{\\mu }} is the four-force, m {\\displaystyle m} is the invariant mass, and A μ {\\displaystyle A^{\\mu }} is the four-acceleration.",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 108,
"text": "The general theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below.",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 109,
"text": "Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 110,
"text": "In quantum mechanics, interactions are typically described in terms of energy rather than force. The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance.",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 111,
"text": "Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force give atoms, molecules, liquids, and solids stability.",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 112,
"text": "In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be \"fundamental interactions\".",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 113,
"text": "While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force.",
"title": "Revisions of the force concept"
},
{
"paragraph_id": 114,
"text": "All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.",
"title": "Fundamental interactions"
},
{
"paragraph_id": 115,
"text": "The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation.",
"title": "Fundamental interactions"
},
{
"paragraph_id": 116,
"text": "Newton's law of gravitation is an example of action at a distance: one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is instantaneous. According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact.",
"title": "Fundamental interactions"
},
{
"paragraph_id": 117,
"text": "Since then, general relativity has been acknowledged as the theory that best explains gravity. In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path. Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with the radius of curvature of the order of few light-years). The time derivative of the changing momentum of the object is what we label as \"gravitational force\".",
"title": "Fundamental interactions"
},
{
"paragraph_id": 118,
"text": "Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (or QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force.",
"title": "Fundamental interactions"
},
{
"paragraph_id": 119,
"text": "There are two \"nuclear forces\", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons.",
"title": "Fundamental interactions"
},
{
"paragraph_id": 120,
"text": "The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts directly upon elementary particles. A residual is observed between hadrons (notably, the nucleons in atomic nuclei), known as the nuclear force. Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.",
"title": "Fundamental interactions"
},
{
"paragraph_id": 121,
"text": "Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or \"vertices\" — charged current, involving the electrically charged W and W bosons, and neutral current, involving electrically neutral Z bosons. The most familiar effect of weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity. This is a type of charged-current interaction. The word \"weak\" derives from the fact that the field strength is some 10 times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at a temperatures in excess of approximately 10 K. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.",
"title": "Fundamental interactions"
},
{
"paragraph_id": 122,
"text": "",
"title": "External links"
}
]
| In physics, a force is an influence that can cause an object to change its velocity, i.e., to accelerate, meaning a change in speed or direction, unless counterbalanced by other forces. The concept of force makes the everyday notion of pushing or pulling mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol F. Force plays a central role in classical mechanics, figuring in all three of Newton's laws of motion, which specify that the force on an object with an unchanging mass is equal to the product of the object's mass and the acceleration that it undergoes. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part often applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In equilibrium these stresses cause no acceleration of the body as the forces balance one another. If these are not in equilibrium they can cause deformation of solid materials, or flow in fluids. In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force, but the understanding of force provided by classical mechanics remains entirely satisfactory for many practical purposes. | 2001-11-02T14:24:00Z | 2023-12-31T00:32:22Z | [
"Template:Fundamental interactions",
"Template:Rp",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite news",
"Template:Commons category",
"Template:See also",
"Template:Main",
"Template:Val",
"Template:LCCN",
"Template:Good article",
"Template:Infobox physical quantity",
"Template:Classical mechanics",
"Template:Cite book",
"Template:Authority control",
"Template:Other uses",
"Template:Redirect",
"Template:Units of force",
"Template:Cite OED",
"Template:Reflist",
"Template:Wiktionary",
"Template:Short description",
"Template:Hatnote group",
"Template:Math",
"Template:Lang-la",
"Template:Lang",
"Template:Portal",
"Template:Annotated link",
"Template:Classical mechanics SI units"
]
| https://en.wikipedia.org/wiki/Force |
10,905 | Family law | Family law (also called matrimonial law or the law of domestic relations) is an area of the law that deals with family matters and domestic relations.
Subjects that commonly fall under a nation's body of family law include:
This list is not exhaustive and varies depending on jurisdiction.
Issues may arise in family law where there is a question as to the laws of the jurisdiction that apply to the marriage relationship or to custody and divorce, and whether a divorce or child custody order is recognized under the laws of another jurisdiction. For child custody, many nations have joined the Hague Convention on the Civil Aspects of International Child Abduction in order to grant recognition to other member states' custody orders and avoid issues of parental kidnapping. | [
{
"paragraph_id": 0,
"text": "Family law (also called matrimonial law or the law of domestic relations) is an area of the law that deals with family matters and domestic relations.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Subjects that commonly fall under a nation's body of family law include:",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "This list is not exhaustive and varies depending on jurisdiction.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "Issues may arise in family law where there is a question as to the laws of the jurisdiction that apply to the marriage relationship or to custody and divorce, and whether a divorce or child custody order is recognized under the laws of another jurisdiction. For child custody, many nations have joined the Hague Convention on the Civil Aspects of International Child Abduction in order to grant recognition to other member states' custody orders and avoid issues of parental kidnapping.",
"title": "Conflict of laws"
}
]
| Family law is an area of the law that deals with family matters and domestic relations. | 2002-02-25T15:51:15Z | 2023-11-06T08:44:51Z | [
"Template:Short description",
"Template:Columns-list",
"Template:Reflist",
"Template:Cite web",
"Template:Family rights",
"Template:Authority control",
"Template:Other uses",
"Template:Family law",
"Template:Main",
"Template:Cite journal",
"Template:Cite report",
"Template:Cite book",
"Template:Law"
]
| https://en.wikipedia.org/wiki/Family_law |
10,909 | Foonly | Foonly Inc. was an American computer company formed by Dave Poole in 1976, that produced a series of DEC PDP-10 compatible mainframe computers, named Foonly F1 to Foonly F5.
The first and most famous Foonly machine, the F1, was the computer used by Triple-I to create some of the computer-generated imagery in the 1982 film Tron.
At the beginning of the 1970s, the Stanford Artificial Intelligence Laboratory (SAIL) began to study the building of a new computer to replace their DEC PDP-10 KA10 with a far more powerful machine, with funding from the Defense Advanced Research Projects Agency (DARPA). This project was named "Super-Foonly", and was developed by a team led by Phil Petit, Jack Holloway, and Dave Poole. The name itself came from FOO NLI, an error message emitted by a PDP-10 assembler at SAIL meaning "FOO is Not a Legal Identifier". In 1974, DARPA cut the funding, and a large part of the team went to DEC to develop the PDP-10 model KL10, based on the Super-Foonly project.
But Dave Poole, with Phil Petit and Jack Holloway, preferred to found the Foonly Company in 1976, to try to build a series of computers based on the Super-Foonly project.
During the early 1980s, after the release of their first and only F1, Foonly built and sold some F2, F4 and F5 low-cost DEC PDP-10 compatible machines.
In 1983, after the cancellation of the Jupiter project, Foonly tried to propose a new Foonly F1, but it was eclipsed by the SC Group company and their Mars project, and the company never quite recovered, shutting down in 1989.
The Foonly F1 was the first and most powerful Foonly computer, but also the only one of its kind ever built. It was based on the Super-Foonly project designs, aimed to be the fastest DEC PDP-10 compatible, but using emitter-coupled logic (ECL) gates rather than transistor–transistor logic (TTL), and without the extended instruction set. It was developed with the help of Triple-I, its first customer, and began operations in 1978.
The computer consisted of four cabinets:
It was able to reach 4.5 MIPS.
The F1 is mostly famous for having been the computer behind some of the Computer-generated imagery of the Disney 1982 Tron movie, and also Looker (1981).
After that, the computer was bought by the Canadian Omnibus Computer Graphics company, and was used on some movies, such as television logos for CBC, CTV, and Global Television Network channels, opening titles for the show Hockey Night in Canada, Star Trek III: The Search for Spock (1984), Flight of the Navigator (1986), Captain Power and the Soldiers of the Future television series (1987), and MarilynMonrobot.
Unlike the F1, the other models (F2, F4, F4B, F5) were built with the slower TTL rather than ECL circuits, and housed in a single cabinet, rather than four.
Rather than use DEC's Massbus (or other DEC bus), Foonly developed F-bus, which can work with DEC and non-DEC peripherals.
Foonly described the F2 as "a powerful mainframe at a minicomputer price," "with an average execution speed about 25% of that of the DECSYSTEM-2060."
Standard equipment:
The Foonly machines, which could run the TENEX operating system, came with a derivative thereof, FOONEX.
Tymshare attempted marketing the Foonly line, using the name "Tymshare XX Series Computer Family", of which the "Tymshare System XXVI" was the main focus.
Other companies that produced PDP-10 compatible computers: | [
{
"paragraph_id": 0,
"text": "Foonly Inc. was an American computer company formed by Dave Poole in 1976, that produced a series of DEC PDP-10 compatible mainframe computers, named Foonly F1 to Foonly F5.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The first and most famous Foonly machine, the F1, was the computer used by Triple-I to create some of the computer-generated imagery in the 1982 film Tron.",
"title": ""
},
{
"paragraph_id": 2,
"text": "At the beginning of the 1970s, the Stanford Artificial Intelligence Laboratory (SAIL) began to study the building of a new computer to replace their DEC PDP-10 KA10, by a far more powerful machine, with a funding from Defense Advanced Research Projects Agency (DARPA). This project was named \"Super-Foonly\", and was developed by a team led by Phil Petit, Jack Holloway, and Dave Poole. The name itself came from FOO NLI, an error message emitted by a PDP-10 assembler at SAIL meaning \"FOO is Not a Legal Identifier\". In 1974, DARPA cut the funding, and a large part of the team went to DEC to develop the PDP-10 model KL10, based on the Super-Foonly project.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "But Dave Poole, with Phil Petit and Jack Holloway, preferred to found the Foonly Company in 1976, to try to build a series of computers based on the Super-Foonly project.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "During the early 1980s, after the releasing of their first and only F1, Foonly built and sold some F2, F4 and F5 low cost DEC PDP-10 compatible machines.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In 1983, after the cancellation of the Jupiter project, Foonly tried to propose a new Foonly F1, but it was eclipsed by the SC Group company and their Mars project, and the company never quite recovered, shutting down in 1989.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The Foonly F1 was the first and most powerful Foonly computer, but also the only one being built of its kind. It was based on the Super-Foonly project designs, aimed to be the fastest DEC PDP-10 compatible, but using emitter-coupled logic (ECL) gates rather than transistor–transistor logic (TTL), and without the extended instruction set. It was developed with the help of Triple-I, its first customer, and began operations in 1978.",
"title": "Computers"
},
{
"paragraph_id": 7,
"text": "The computer consisted of four cabinets:",
"title": "Computers"
},
{
"paragraph_id": 8,
"text": "It was able to reach 4.5 MIPS.",
"title": "Computers"
},
{
"paragraph_id": 9,
"text": "The F1 is mostly famous for having been the computer behind some of the Computer-generated imagery of the Disney 1982 Tron movie, and also Looker (1981).",
"title": "Computers"
},
{
"paragraph_id": 10,
"text": "After that, the computer was bought by the Canadian Omnibus Computer Graphics company, and was used on some movies, such as television logos for CBC, CTV, and Global Television Network channels, opening titles for the show Hockey Night in Canada, Star Trek III: The Search for Spock (1984), Flight of the Navigator (1986), Captain Power and the Soldiers of the Future television series (1987), and MarilynMonrobot.",
"title": "Computers"
},
{
"paragraph_id": 11,
"text": "Unlike the F1, the other models (F2, F4, F4B, F5) were built with the slower TTL rather than ECL circuits, and housed in a single cabinet, rather than four.",
"title": "Other models"
},
{
"paragraph_id": 12,
"text": "Rather than use DEC's Massbus (or other DEC bus), Foonly developed F-bus, which can work with DEC and non-DEC peripherals.",
"title": "Other models"
},
{
"paragraph_id": 13,
"text": "Foonly described the F2 as \"a powerful mainframe at a minicomputer price,\" \"with an average execution speed about 25% of that of the DECSYSTEM-2060.\"",
"title": "Other models"
},
{
"paragraph_id": 14,
"text": "Standard equipment:",
"title": "Peripherals"
},
{
"paragraph_id": 15,
"text": "The Foonly machines, which could run the TENEX operating system, came with a derivative thereof, FOONEX.",
"title": "Software"
},
{
"paragraph_id": 16,
"text": "Tymshare attempted marketing the Foonly line, using the name \"Tymshare XX Series Computer Family\" of which the Tymshare System XXVI\" was the main focus.",
"title": "Tymshare"
},
{
"paragraph_id": 17,
"text": "Other companies that produced PDP-10 compatible computers:",
"title": "See also"
}
]
| Foonly Inc. was an American computer company formed by Dave Poole in 1976, that produced a series of DEC PDP-10 compatible mainframe computers, named Foonly F1 to Foonly F5. The first and most famous Foonly machine, the F1, was the computer used by Triple-I to create some of the computer-generated imagery in the 1982 film Tron. | 2002-02-25T15:43:11Z | 2023-11-10T02:26:22Z | [
"Template:Reflist",
"Template:Cite web",
"Template:Cite magazine",
"Template:Cite book",
"Template:Tron",
"Template:Short description",
"Template:Infobox company",
"Template:Infobox supercomputer"
]
| https://en.wikipedia.org/wiki/Foonly |
10,911 | Functional group | In organic chemistry, a functional group is a substituent or moiety in a molecule that causes the molecule's characteristic chemical reactions. The same functional group will undergo the same or similar chemical reactions regardless of the rest of the molecule's composition. This enables systematic prediction of chemical reactions and behavior of chemical compounds and the design of chemical synthesis. The reactivity of a functional group can be modified by other functional groups nearby. Functional group interconversion can be used in retrosynthetic analysis to plan organic synthesis.
A functional group is a group of atoms in a molecule with distinctive chemical properties, regardless of the other atoms in the molecule. The atoms in a functional group are linked to each other and to the rest of the molecule by covalent bonds. For repeating units of polymers, functional groups attach to their nonpolar core of carbon atoms and thus add chemical character to carbon chains. Functional groups can also be charged, e.g. in carboxylate salts (–COO−), which turns the molecule into a polyatomic ion or a complex ion. Functional groups binding to a central atom in a coordination complex are called ligands. Complexation and solvation are also caused by specific interactions of functional groups. In the common rule of thumb "like dissolves like", it is the shared or mutually well-interacting functional groups which give rise to solubility. For example, sugar dissolves in water because both share the hydroxyl functional group (–OH) and hydroxyls interact strongly with each other. In addition, when functional groups are more electronegative than the atoms to which they attach, the functional groups become polar, and otherwise nonpolar molecules containing such polar groups become soluble in aqueous environments.
Combining the names of functional groups with the names of the parent alkanes generates what is termed a systematic nomenclature for naming organic compounds. In traditional nomenclature, the first carbon atom after the carbon that attaches to the functional group is called the alpha carbon; the second, beta carbon, the third, gamma carbon, etc. If there is another functional group at a carbon, it may be named with the Greek letter, e.g., the gamma-amine in gamma-aminobutyric acid is on the third carbon of the carbon chain attached to the carboxylic acid group. IUPAC conventions call for numeric labeling of the position, e.g. 4-aminobutanoic acid. In traditional names various qualifiers are used to label isomers, for example, isopropanol (IUPAC name: propan-2-ol) is an isomer of n-propanol (propan-1-ol). The term moiety has some overlap with the term "functional group". However, a moiety is an entire "half" of a molecule, which can be not only a single functional group, but also a larger unit consisting of multiple functional groups. For example, an "aryl moiety" may be any group containing an aromatic ring, regardless of how many functional groups the said aryl has.
The following is a list of common functional groups. In the formulas, the symbols R and R' usually denote an attached hydrogen, or a hydrocarbon side chain of any length, but may sometimes refer to any group of atoms.
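Because the groups in the following list are defined purely by their connectivity, they can also be recognized programmatically. The sketch below is a minimal illustration using SMARTS substructure patterns; the choice of the open-source RDKit toolkit and the specific patterns are assumptions made for demonstration, not something the text above prescribes.

```python
# Minimal sketch: recognizing a few common functional groups with SMARTS patterns.
# Assumes the open-source RDKit toolkit is installed (pip install rdkit); the
# patterns below are simplified examples, not a complete group-perception scheme.
from rdkit import Chem

EXAMPLE_GROUPS = {
    "hydroxyl (-OH)":       "[OX2H]",
    "carbonyl (C=O)":       "[CX3]=[OX1]",
    "carboxylic acid":      "C(=O)[OX2H1]",
    "primary amine (-NH2)": "[NX3;H2]",
}

def find_groups(smiles: str) -> list[str]:
    """Return the names of the example groups present in a molecule given as SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    return [name for name, smarts in EXAMPLE_GROUPS.items()
            if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

# gamma-aminobutyric acid (4-aminobutanoic acid) carries an amine and a carboxylic acid,
# and the acid's -OH and C=O are also matched by the simple hydroxyl and carbonyl patterns.
print(find_groups("NCCCC(=O)O"))
```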
Hydrocarbons are a class of molecule that is defined by functional groups called hydrocarbyls that contain only carbon and hydrogen, but vary in the number and order of double bonds. Each one differs in type (and scope) of reactivity.
There are also a large number of branched or ring alkanes that have specific names, e.g., tert-butyl, bornyl, cyclohexyl, etc. Hydrocarbons may form charged structures: positively charged carbocations or negatively charged carbanions. Carbocations are often named with the suffix -ium. Examples are tropylium and triphenylmethyl cations and the cyclopentadienyl anion.
Haloalkanes are a class of molecule that is defined by a carbon–halogen bond. This bond can be relatively weak (in the case of an iodoalkane) or quite stable (as in the case of a fluoroalkane). In general, with the exception of fluorinated compounds, haloalkanes readily undergo nucleophilic substitution reactions or elimination reactions. The substitution on the carbon, the acidity of an adjacent proton, the solvent conditions, etc. all can influence the outcome of the reactivity.
Compounds that contain C-O bonds each possess differing reactivity based upon the location and hybridization of the C-O bond, owing to the electron-withdrawing effect of sp²-hybridized oxygen (carbonyl groups) and the donating effects of sp³-hybridized oxygen (alcohol groups).
Compounds that contain nitrogen in this category may contain C-O bonds, such as in the case of amides.
Compounds that contain sulfur exhibit unique chemistry due to sulfur's ability to form more bonds than oxygen, its lighter analogue on the periodic table. Substitutive nomenclature (marked as prefix in table) is preferred over functional class nomenclature (marked as suffix in table) for sulfides, disulfides, sulfoxides and sulfones.
Compounds that contain phosphorus exhibit unique chemistry due to the ability of phosphorus to form more bonds than nitrogen, its lighter analogue on the periodic table.
Compounds containing boron exhibit unique chemistry due to their having partially filled octets and therefore acting as Lewis acids.
Fluorine is too electronegative to be bonded to magnesium; it becomes an ionic salt instead.
These names are used to refer to the moieties themselves or to radical species, and also to form the names of halides and substituents in larger molecules.
When the parent hydrocarbon is saturated, the suffix ("-yl", "-ylidene", or "-ylidyne") replaces "-ane" (e.g. "ethane" becomes "ethyl"); when it is unsaturated, the suffix replaces only the final "-e" (e.g. "ethyne" becomes "ethynyl").
When used to refer to moieties, multiple single bonds differ from a single multiple bond. For example, a methylene bridge (methanediyl) has two single bonds, whereas a methylene group (methylidene) has one double bond. Suffixes can be combined, as in methylidyne (triple bond) vs. methylylidene (single bond and double bond) vs. methanetriyl (three single bonds).
There are some retained names, such as methylene for methanediyl, 1,x-phenylene for phenyl-1,x-diyl (where x is 2, 3, or 4), carbyne for methylidyne, and trityl for triphenylmethyl. | [
{
"paragraph_id": 0,
"text": "In organic chemistry, a functional group is a substituent or moiety in a molecule that causes the molecule's characteristic chemical reactions. The same functional group will undergo the same or similar chemical reactions regardless of the rest of the molecule's composition. This enables systematic prediction of chemical reactions and behavior of chemical compounds and the design of chemical synthesis. The reactivity of a functional group can be modified by other functional groups nearby. Functional group interconversion can be used in retrosynthetic analysis to plan organic synthesis.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A functional group is a group of atoms in a molecule with distinctive chemical properties, regardless of the other atoms in the molecule. The atoms in a functional group are linked to each other and to the rest of the molecule by covalent bonds. For repeating units of polymers, functional groups attach to their nonpolar core of carbon atoms and thus add chemical character to carbon chains. Functional groups can also be charged, e.g. in carboxylate salts (–COO), which turns the molecule into a polyatomic ion or a complex ion. Functional groups binding to a central atom in a coordination complex are called ligands. Complexation and solvation are also caused by specific interactions of functional groups. In the common rule of thumb \"like dissolves like\", it is the shared or mutually well-interacting functional groups which give rise to solubility. For example, sugar dissolves in water because both share the hydroxyl functional group (–OH) and hydroxyls interact strongly with each other. Plus, when functional groups are more electronegative than atoms they attach to, the functional groups will become polar, and the otherwise nonpolar molecules containing these functional groups become polar and so become soluble in some aqueous environment.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Combining the names of functional groups with the names of the parent alkanes generates what is termed a systematic nomenclature for naming organic compounds. In traditional nomenclature, the first carbon atom after the carbon that attaches to the functional group is called the alpha carbon; the second, beta carbon, the third, gamma carbon, etc. If there is another functional group at a carbon, it may be named with the Greek letter, e.g., the gamma-amine in gamma-aminobutyric acid is on the third carbon of the carbon chain attached to the carboxylic acid group. IUPAC conventions call for numeric labeling of the position, e.g. 4-aminobutanoic acid. In traditional names various qualifiers are used to label isomers, for example, isopropanol (IUPAC name: propan-2-ol) is an isomer of n-propanol (propan-1-ol). The term moiety has some overlap with the term \"functional group\". However, a moiety is an entire \"half\" of a molecule, which can be not only a single functional group, but also a larger unit consisting of multiple functional groups. For example, an \"aryl moiety\" may be any group containing an aromatic ring, regardless of how many functional groups the said aryl has.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The following is a list of common functional groups. In the formulas, the symbols R and R' usually denote an attached hydrogen, or a hydrocarbon side chain of any length, but may sometimes refer to any group of atoms.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 4,
"text": "Hydrocarbons are a class of molecule that is defined by functional groups called hydrocarbyls that contain only carbon and hydrogen, but vary in the number and order of double bonds. Each one differs in type (and scope) of reactivity.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 5,
"text": "There are also a large number of branched or ring alkanes that have specific names, e.g., tert-butyl, bornyl, cyclohexyl, etc. Hydrocarbons may form charged structures: positively charged carbocations or negative carbanions. Carbocations are often named -um. Examples are tropylium and triphenylmethyl cations and the cyclopentadienyl anion.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 6,
"text": "Haloalkanes are a class of molecule that is defined by a carbon–halogen bond. This bond can be relatively weak (in the case of an iodoalkane) or quite stable (as in the case of a fluoroalkane). In general, with the exception of fluorinated compounds, haloalkanes readily undergo nucleophilic substitution reactions or elimination reactions. The substitution on the carbon, the acidity of an adjacent proton, the solvent conditions, etc. all can influence the outcome of the reactivity.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 7,
"text": "",
"title": "Table of common functional groups"
},
{
"paragraph_id": 8,
"text": "Compounds that contain C-O bonds each possess differing reactivity based upon the location and hybridization of the C-O bond, owing to the electron-withdrawing effect of sp-hybridized oxygen (carbonyl groups) and the donating effects of sp-hybridized oxygen (alcohol groups).",
"title": "Table of common functional groups"
},
{
"paragraph_id": 9,
"text": "Compounds that contain nitrogen in this category may contain C-O bonds, such as in the case of amides.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 10,
"text": "Compounds that contain sulfur exhibit unique chemistry due to sulfur's ability to form more bonds than oxygen, its lighter analogue on the periodic table. Substitutive nomenclature (marked as prefix in table) is preferred over functional class nomenclature (marked as suffix in table) for sulfides, disulfides, sulfoxides and sulfones.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 11,
"text": "Compounds that contain phosphorus exhibit unique chemistry due to the ability of phosphorus to form more bonds than nitrogen, its lighter analogue on the periodic table.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 12,
"text": "Compounds containing boron exhibit unique chemistry due to their having partially filled octets and therefore acting as Lewis acids.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 13,
"text": "Fluorine is too electronegative to be bonded to magnesium; it becomes an ionic salt instead.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 14,
"text": "These names are used to refer to the moieties themselves or to radical species, and also to form the names of halides and substituents in larger molecules.",
"title": "Table of common functional groups"
},
{
"paragraph_id": 15,
"text": "When the parent hydrocarbon is unsaturated, the suffix (\"-yl\", \"-ylidene\", or \"-ylidyne\") replaces \"-ane\" (e.g. \"ethane\" becomes \"ethyl\"); otherwise, the suffix replaces only the final \"-e\" (e.g. \"ethyne\" becomes \"ethynyl\").",
"title": "Table of common functional groups"
},
{
"paragraph_id": 16,
"text": "When used to refer to moieties, multiple single bonds differ from a single multiple bond. For example, a methylene bridge (methanediyl) has two single bonds, whereas a methylene group (methylidene) has one double bond. Suffixes can be combined, as in methylidyne (triple bond) vs. methylylidene (single bond and double bond) vs. methanetriyl (three double bonds).",
"title": "Table of common functional groups"
},
{
"paragraph_id": 17,
"text": "There are some retained names, such as methylene for methanediyl, 1,x-phenylene for phenyl-1,x-diyl (where x is 2, 3, or 4), carbyne for methylidyne, and trityl for triphenylmethyl.",
"title": "Table of common functional groups"
}
]
| In organic chemistry, a functional group is a substituent or moiety in a molecule that causes the molecule's characteristic chemical reactions. The same functional group will undergo the same or similar chemical reactions regardless of the rest of the molecule's composition. This enables systematic prediction of chemical reactions and behavior of chemical compounds and the design of chemical synthesis. The reactivity of a functional group can be modified by other functional groups nearby. Functional group interconversion can be used in retrosynthetic analysis to plan organic synthesis. A functional group is a group of atoms in a molecule with distinctive chemical properties, regardless of the other atoms in the molecule. The atoms in a functional group are linked to each other and to the rest of the molecule by covalent bonds. For repeating units of polymers, functional groups attach to their nonpolar core of carbon atoms and thus add chemical character to carbon chains. Functional groups can also be charged, e.g. in carboxylate salts (–COO−), which turns the molecule into a polyatomic ion or a complex ion. Functional groups binding to a central atom in a coordination complex are called ligands. Complexation and solvation are also caused by specific interactions of functional groups. In the common rule of thumb "like dissolves like", it is the shared or mutually well-interacting functional groups which give rise to solubility. For example, sugar dissolves in water because both share the hydroxyl functional group (–OH) and hydroxyls interact strongly with each other. Plus, when functional groups are more electronegative than atoms they attach to, the functional groups will become polar, and the otherwise nonpolar molecules containing these functional groups become polar and so become soluble in some aqueous environment. Combining the names of functional groups with the names of the parent alkanes generates what is termed a systematic nomenclature for naming organic compounds. In traditional nomenclature, the first carbon atom after the carbon that attaches to the functional group is called the alpha carbon; the second, beta carbon, the third, gamma carbon, etc. If there is another functional group at a carbon, it may be named with the Greek letter, e.g., the gamma-amine in gamma-aminobutyric acid is on the third carbon of the carbon chain attached to the carboxylic acid group. IUPAC conventions call for numeric labeling of the position, e.g. 4-aminobutanoic acid. In traditional names various qualifiers are used to label isomers, for example, isopropanol is an isomer of n-propanol (propan-1-ol). The term moiety has some overlap with the term "functional group". However, a moiety is an entire "half" of a molecule, which can be not only a single functional group, but also a larger unit consisting of multiple functional groups. For example, an "aryl moiety" may be any group containing an aromatic ring, regardless of how many functional groups the said aryl has. | 2001-08-31T05:19:32Z | 2023-11-07T22:06:02Z | [
"Template:Nowrap",
"Template:Smallcaps",
"Template:Ref label",
"Template:Short description",
"Template:Organic chemistry",
"Template:Note label",
"Template:Functional groups",
"Template:Reflist",
"Template:March3rd",
"Template:Cite book",
"Template:Cite web",
"Template:Commons",
"Template:Other uses",
"Template:More citations needed",
"Template:Center",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Functional_group |
10,913 | Fractal | In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar. Fractal geometry lies within the mathematical branch of measure theory.
One way that fractals are different from finite geometric figures is how they scale. Doubling the edge lengths of a filled polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the conventional dimension of the filled polygon). Likewise, if the radius of a filled sphere is doubled, its volume scales by eight, which is two (the ratio of the new to the old radius) to the power of three (the conventional dimension of the filled sphere). However, if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer and is in general greater than its conventional dimension. This power is called the fractal dimension of the geometric object, to distinguish it from the conventional dimension (which is formally called the topological dimension).
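The scaling argument can be checked with a few lines of arithmetic: the exponent relating the number of self-similar copies to the magnification factor recovers the familiar dimensions 2 and 3 for the square and the cube, and a non-integer value for a shape such as the Sierpinski triangle (used here only as an illustrative example).

```python
import math

def scaling_exponent(copies: int, magnification: float) -> float:
    """Exponent D with copies = magnification**D, i.e. D = log(copies) / log(magnification)."""
    return math.log(copies) / math.log(magnification)

# Doubling every length of a filled square yields 4 copies of it; a filled cube yields 8:
print(scaling_exponent(4, 2))  # 2.0 -- the conventional dimension of the square
print(scaling_exponent(8, 2))  # 3.0 -- the conventional dimension of the cube

# Doubling a Sierpinski triangle yields only 3 copies of itself:
print(scaling_exponent(3, 2))  # 1.5849... -- a non-integer, i.e. a fractal dimension
```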
Analytically, many fractals are nowhere differentiable. An infinite fractal curve can be conceived of as winding through space differently from an ordinary line – although it is still topologically 1-dimensional, its fractal dimension indicates that it locally fills space more efficiently than an ordinary line.
Starting in the 17th century with notions of recursion, fractals have moved through increasingly rigorous mathematical treatment to the study of continuous but not differentiable functions in the 19th century by the seminal work of Bernard Bolzano, Bernhard Riemann, and Karl Weierstrass, and on to the coining of the word fractal in the 20th century with a subsequent burgeoning of interest in fractals and computer-based modelling in the 20th century.
There is some disagreement among mathematicians about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful. That's fractals." More formally, in 1982 Mandelbrot defined fractal as follows: "A fractal is by definition a set for which the Hausdorff–Besicovitch dimension strictly exceeds the topological dimension." Later, seeing this as too restrictive, he simplified and expanded the definition to this: "A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole." Still later, Mandelbrot proposed "to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants".
The consensus among mathematicians is that theoretical fractals are infinitely self-similar iterated and detailed mathematical constructs, of which many examples have been formulated and studied. Fractals are not limited to geometric patterns, but can also describe processes in time. Fractal patterns with various degrees of self-similarity have been rendered or studied in visual, physical, and aural media and found in nature, technology, art, and architecture. Fractals are of particular relevance in the field of chaos theory because they show up in the geometric depictions of most chaotic processes (typically either as attractors or as boundaries between basins of attraction).
The term "fractal" was coined by the mathematician Benoît Mandelbrot in 1975. Mandelbrot based it on the Latin frāctus, meaning "broken" or "fractured", and used it to extend the concept of theoretical fractional dimensions to geometric patterns in nature.
The word "fractal" often has different connotations for the lay public as opposed to mathematicians, where the public is more likely to be familiar with fractal art than the mathematical concept. The mathematical concept is difficult to define formally, even for mathematicians, but key features can be understood with a little mathematical background.
The feature of "self-similarity", for instance, is easily understood by analogy to zooming in with a lens or other device that zooms in on digital images to uncover finer, previously invisible, new structure. If this is done on fractals, however, no new detail appears; nothing changes and the same pattern repeats over and over, or for some fractals, nearly the same pattern reappears over and over. Self-similarity itself is not necessarily counter-intuitive (e.g., people have pondered self-similarity informally such as in the infinite regress in parallel mirrors or the homunculus, the little man inside the head of the little man inside the head ...). The difference for fractals is that the pattern reproduced must be detailed.
This idea of being detailed relates to another feature that can be understood without much mathematical background: Having a fractal dimension greater than its topological dimension, for instance, refers to how a fractal scales compared to how geometric shapes are usually perceived. A straight line, for instance, is conventionally understood to be one-dimensional; if such a figure is rep-tiled into pieces each 1/3 the length of the original, then there are always three equal pieces. A solid square is understood to be two-dimensional; if such a figure is rep-tiled into pieces each scaled down by a factor of 1/3 in both dimensions, there are a total of 3² = 9 pieces.
We see that for ordinary self-similar objects, being n-dimensional means that when it is rep-tiled into pieces each scaled down by a scale-factor of 1/r, there are a total of rⁿ pieces. Now, consider the Koch curve. It can be rep-tiled into four sub-copies, each scaled down by a scale-factor of 1/3. So, strictly by analogy, we can consider the "dimension" of the Koch curve as being the unique real number D that satisfies 3^D = 4. This number is called the fractal dimension of the Koch curve; it is not the conventionally perceived dimension of a curve. In general, a key property of fractals is that the fractal dimension differs from the conventionally understood dimension (formally called the topological dimension).
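A short calculation (a sketch only) makes the Koch value concrete: solving 3^D = 4 gives D = log 4 / log 3 ≈ 1.26, strictly between the curve's topological dimension of 1 and the dimension 2 of the plane that contains it.

```python
import math

# Koch curve: 4 sub-copies, each scaled down by 1/3, so D must satisfy 3**D = 4.
D = math.log(4) / math.log(3)
print(D)                        # 1.2618595071429148
print(math.isclose(3 ** D, 4))  # True -- D indeed solves the scaling equation
```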
This also leads to understanding a third feature, that fractals as mathematical equations are "nowhere differentiable". In a concrete sense, this means fractals cannot be measured in traditional ways. To elaborate, in trying to find the length of a wavy non-fractal curve, one could find straight segments of some measuring tool small enough to lay end to end over the waves, where the pieces could get small enough to be considered to conform to the curve in the normal manner of measuring with a tape measure. But in measuring an infinitely "wiggly" fractal curve such as the Koch snowflake, one would never find a small enough straight segment to conform to the curve, because the jagged pattern would always re-appear, at arbitrarily small scales, essentially pulling a little more of the tape measure into the total length measured each time one attempted to fit it tighter and tighter to the curve. The result is that one must need infinite tape to perfectly cover the entire curve, i.e. the snowflake has an infinite perimeter.
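The tape-measure argument can be made concrete: every refinement of the Koch snowflake replaces each segment by four segments one third as long, multiplying the measured perimeter by 4/3. The sketch below assumes a starting triangle with unit sides purely for illustration.

```python
# Perimeter of the Koch snowflake after n refinement steps, starting from a triangle
# with unit sides: each step multiplies the total measured length by 4/3.
def koch_perimeter(n: int, side: float = 1.0) -> float:
    return 3 * side * (4 / 3) ** n

for n in (0, 1, 5, 50):
    print(n, koch_perimeter(n))
# 0 -> 3.0
# 1 -> 4.0
# 5 -> about 12.6
# 50 -> about 5.3 million; the measured length grows without bound as the ruler shrinks
```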
The history of fractals traces a path from chiefly theoretical studies to modern applications in computer graphics, with several notable people contributing canonical fractal forms along the way. A common theme in traditional African architecture is the use of fractal scaling, whereby small parts of the structure tend to look similar to larger parts, such as a circular village made of circular houses. According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity (although he made the mistake of thinking that only the straight line was self-similar in this sense).
In his writings, Leibniz used the term "fractional exponents", but lamented that "Geometry" did not yet know of them. Indeed, according to various historical accounts, after that point few mathematicians tackled the issues and the work of those who did remained obscured largely because of resistance to such unfamiliar emerging concepts, which were sometimes referred to as mathematical "monsters". Thus, it was not until two centuries had passed that, on July 18, 1872, Karl Weierstrass presented at the Royal Prussian Academy of Sciences the first definition of a function with a graph that would today be considered a fractal, having the non-intuitive property of being everywhere continuous but nowhere differentiable.
In addition, the difference quotients of Weierstrass's function become arbitrarily large as the summation index increases. Not long after that, in 1883, Georg Cantor, who attended lectures by Weierstrass, published examples of subsets of the real line known as Cantor sets, which had unusual properties and are now recognized as fractals. Also in the last part of that century, Felix Klein and Henri Poincaré introduced a category of fractal that has come to be called "self-inverse" fractals.
One of the next milestones came in 1904, when Helge von Koch, extending ideas of Poincaré and dissatisfied with Weierstrass's abstract and analytic definition, gave a more geometric definition including hand-drawn images of a similar function, which is now called the Koch snowflake. Another milestone came a decade later in 1915, when Wacław Sierpiński constructed his famous triangle then, one year later, his carpet. By 1918, two French mathematicians, Pierre Fatou and Gaston Julia, though working independently, arrived essentially simultaneously at results describing what is now seen as fractal behaviour associated with mapping complex numbers and iterative functions and leading to further ideas about attractors and repellors (i.e., points that attract or repel other points), which have become very important in the study of fractals.
Very shortly after that work was submitted, by March 1918, Felix Hausdorff expanded the definition of "dimension", significantly for the evolution of the definition of fractals, to allow for sets to have non-integer dimensions. The idea of self-similar curves was taken further by Paul Lévy, who, in his 1938 paper Plane or Space Curves and Surfaces Consisting of Parts Similar to the Whole, described a new fractal curve, the Lévy C curve.
Different researchers have postulated that without the aid of modern computer graphics, early investigators were limited to what they could depict in manual drawings, so lacked the means to visualize the beauty and appreciate some of the implications of many of the patterns they had discovered (the Julia set, for instance, could only be visualized through a few iterations as very simple drawings). That changed, however, in the 1960s, when Benoit Mandelbrot started writing about self-similarity in papers such as How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension, which built on earlier work by Lewis Fry Richardson.
In 1975, Mandelbrot solidified hundreds of years of thought and mathematical development in coining the word "fractal" and illustrated his mathematical definition with striking computer-constructed visualizations. These images, such as of his canonical Mandelbrot set, captured the popular imagination; many of them were based on recursion, leading to the popular meaning of the term "fractal".
In 1980, Loren Carpenter gave a presentation at the SIGGRAPH where he introduced his software for generating and rendering fractally generated landscapes.
One often cited description that Mandelbrot published to describe geometric fractals is "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole"; this is generally helpful but limited. Authors disagree on the exact definition of fractal, but most usually elaborate on the basic ideas of self-similarity and the unusual relationship fractals have with the space they are embedded in.
One point agreed on is that fractal patterns are characterized by fractal dimensions, but whereas these numbers quantify complexity (i.e., changing detail with changing scale), they neither uniquely describe nor specify details of how to construct particular fractal patterns. In 1975 when Mandelbrot coined the word "fractal", he did so to denote an object whose Hausdorff–Besicovitch dimension is greater than its topological dimension. However, this requirement is not met by space-filling curves such as the Hilbert curve.
Because of the trouble involved in finding one definition for fractals, some argue that fractals should not be strictly defined at all. According to Falconer, fractals should be only generally characterized by a gestalt of the following features;
As a group, these criteria form guidelines for excluding certain cases, such as those that may be self-similar without having other typically fractal features. A straight line, for instance, is self-similar but not fractal because it lacks detail, and is easily described in Euclidean language without a need for recursion.
Images of fractals can be created by fractal generating programs. Because of the butterfly effect, a small change in a single variable can have an unpredictable outcome.
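A minimal escape-time sketch of the kind such programs rely on is shown below; the grid resolution, iteration limit, and ASCII rendering are arbitrary choices made for illustration and do not describe any particular fractal-generating package.

```python
# Minimal escape-time rendering of the Mandelbrot set as coarse ASCII art.
# A point c is in the set if z -> z*z + c stays bounded when iterated from z = 0.
def escape_iterations(c: complex, max_iter: int = 50) -> int:
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2 the orbit is guaranteed to diverge
            return i
    return max_iter

for row in range(24):
    y = 1.2 - row * 0.1
    line = ""
    for col in range(64):
        x = -2.0 + col * 0.05
        line += "#" if escape_iterations(complex(x, y)) == 50 else " "
    print(line)
```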
Fractal patterns have been modeled extensively, albeit within a range of scales rather than infinitely, owing to the practical limits of physical time and space. Models may simulate theoretical fractals or natural phenomena with fractal features. The outputs of the modelling process may be highly artistic renderings, outputs for investigation, or benchmarks for fractal analysis. Some specific applications of fractals to technology are listed elsewhere. Images and other outputs of modelling are normally referred to as being "fractals" even if they do not have strictly fractal characteristics, such as when it is possible to zoom into a region of the fractal image that does not exhibit any fractal properties. Also, these may include calculation or display artifacts which are not characteristics of true fractals.
Modeled fractals may be sounds, digital images, electrochemical patterns, circadian rhythms, etc. Fractal patterns have been reconstructed in physical 3-dimensional space and virtually, often called "in silico" modeling. Models of fractals are generally created using fractal-generating software that implements techniques such as those outlined above. As one illustration, trees, ferns, cells of the nervous system, blood and lung vasculature, and other branching patterns in nature can be modeled on a computer by using recursive algorithms and L-systems techniques.
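As a sketch of the L-system technique mentioned above, the fragment below repeatedly rewrites a start string with two production rules; the particular axiom and rules (a commonly quoted "fractal plant" example) are assumptions chosen for illustration, and in practice the resulting string is interpreted as turtle-graphics drawing commands.

```python
# Minimal L-system string rewriting: every symbol of the current string is replaced
# in parallel according to the rules; symbols without a rule are copied unchanged.
# The result is normally interpreted as drawing commands (F = draw forward,
# + and - = turn, [ and ] = push and pop the drawing state).
def lsystem(axiom: str, rules: dict[str, str], iterations: int) -> str:
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A commonly quoted "fractal plant" rule set, used here only as an example:
plant_rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(lsystem("X", plant_rules, 2))
print(len(lsystem("X", plant_rules, 6)))  # the description grows rapidly with each iteration
```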
The recursive nature of some patterns is obvious in certain examples—a branch from a tree or a frond from a fern is a miniature replica of the whole: not identical, but similar in nature. Similarly, random fractals have been used to describe/create many highly irregular real-world objects. A limitation of modeling fractals is that resemblance of a fractal model to a natural phenomenon does not prove that the phenomenon being modeled is formed by a process similar to the modeling algorithms.
Approximate fractals found in nature display self-similarity over extended, but finite, scale ranges. The connection between fractals and leaves, for instance, is currently being used to determine how much carbon is contained in trees. Phenomena known to have fractal features include:
Fractals often appear in the realm of living organisms where they arise through branching processes and other complex pattern formation. Ian Wong and co-workers have shown that migrating cells can form fractals by clustering and branching. Nerve cells function through processes at the cell surface, with phenomena that are enhanced by largely increasing the surface to volume ratio. As a consequence nerve cells often are found to form into fractal patterns. These processes are crucial in cell physiology and different pathologies.
Multiple subcellular structures also are found to assemble into fractals. Diego Krapf has shown that through branching processes the actin filaments in human cells assemble into fractal patterns. Similarly Matthias Weiss showed that the endoplasmic reticulum displays fractal features. The current understanding is that fractals are ubiquitous in cell biology, from proteins, to organelles, to whole cells.
Since 1999 numerous scientific groups have performed fractal analysis on over 50 paintings created by Jackson Pollock by pouring paint directly onto horizontal canvasses.
Recently, fractal analysis has been used to achieve a 93% success rate in distinguishing real from imitation Pollocks. Cognitive neuroscientists have shown that Pollock's fractals induce the same stress-reduction in observers as computer-generated fractals and Nature's fractals.
Decalcomania, a technique used by artists such as Max Ernst, can produce fractal-like patterns. It involves pressing paint between two surfaces and pulling them apart.
Cyberneticist Ron Eglash has suggested that fractal geometry and mathematics are prevalent in African art, games, divination, trade, and architecture. Circular houses appear in circles of circles, rectangular houses in rectangles of rectangles, and so on. Such scaling patterns can also be found in African textiles, sculpture, and even cornrow hairstyles. Hokky Situngkir also suggested the similar properties in Indonesian traditional art, batik, and ornaments found in traditional houses.
Ethnomathematician Ron Eglash has discussed the planned layout of Benin city using fractals as the basis, not only in the city itself and the villages but even in the rooms of houses. He commented that "When Europeans first came to Africa, they considered the architecture very disorganised and thus primitive. It never occurred to them that the Africans might have been using a form of mathematics that they hadn't even discovered yet."
In a 1996 interview with Michael Silverblatt, David Foster Wallace explained that the structure of the first draft of Infinite Jest he gave to his editor Michael Pietsch was inspired by fractals, specifically the Sierpinski triangle (a.k.a. Sierpinski gasket), but that the edited novel is "more like a lopsided Sierpinsky Gasket".
Some works by the Dutch artist M. C. Escher, such as Circle Limit III, contain shapes repeated to infinity that become smaller and smaller as they get near to the edges, in a pattern that would always look the same if zoomed in.
Aesthetics and Psychological Effects of Fractal Based Design: Highly prevalent in nature, fractal patterns possess self-similar components that repeat at varying size scales. The perceptual experience of human-made environments can be impacted with inclusion of these natural patterns. Previous work has demonstrated consistent trends in preference for and complexity estimates of fractal patterns. However, limited information has been gathered on the impact of other visual judgments. Here we examine the aesthetic and perceptual experience of fractal ‘global-forest’ designs already installed in humanmade spaces and demonstrate how fractal pattern components are associated with positive psychological experiences that can be utilized to promote occupant well-being. These designs are composite fractal patterns consisting of individual fractal ‘tree-seeds’ which combine to create a ‘global fractal forest.’ The local ‘tree-seed’ patterns, global configuration of tree-seed locations, and overall resulting ‘global-forest’ patterns have fractal qualities. These designs span multiple mediums yet are all intended to lower occupant stress without detracting from the function and overall design of the space. In this series of studies, we first establish divergent relationships between various visual attributes, with pattern complexity, preference, and engagement ratings increasing with fractal complexity compared to ratings of refreshment and relaxation which stay the same or decrease with complexity. Subsequently, we determine that the local constituent fractal (‘tree-seed’) patterns contribute to the perception of the overall fractal design, and address how to balance aesthetic and psychological effects (such as individual experiences of perceived engagement and relaxation) in fractal design installations. This set of studies demonstrates that fractal preference is driven by a balance between increased arousal (desire for engagement and complexity) and decreased tension (desire for relaxation or refreshment). Installations of these composite mid-high complexity ‘global-forest’ patterns consisting of ‘tree-seed’ components balance these contrasting needs, and can serve as a practical implementation of biophilic patterns in human-made environments to promote occupant well-being.
Humans appear to be especially well-adapted to processing fractal patterns with D values between 1.3 and 1.5. When humans view fractal patterns with D values between 1.3 and 1.5, this tends to reduce physiological stress. | [
{
"paragraph_id": 0,
"text": "In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar. Fractal geometry lies within the mathematical branch of measure theory.",
"title": ""
},
{
"paragraph_id": 1,
"text": "One way that fractals are different from finite geometric figures is how they scale. Doubling the edge lengths of a filled polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the conventional dimension of the filled polygon). Likewise, if the radius of a filled sphere is doubled, its volume scales by eight, which is two (the ratio of the new to the old radius) to the power of three (the conventional dimension of the filled sphere). However, if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer and is in general greater than its conventional dimension. This power is called the fractal dimension of the geometric object, to distinguish it from the conventional dimension (which is formally called the topological dimension).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Analytically, many fractals are nowhere differentiable. An infinite fractal curve can be conceived of as winding through space differently from an ordinary line – although it is still topologically 1-dimensional, its fractal dimension indicates that it locally fills space more efficiently than an ordinary line.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Starting in the 17th century with notions of recursion, fractals have moved through increasingly rigorous mathematical treatment to the study of continuous but not differentiable functions in the 19th century by the seminal work of Bernard Bolzano, Bernhard Riemann, and Karl Weierstrass, and on to the coining of the word fractal in the 20th century with a subsequent burgeoning of interest in fractals and computer-based modelling in the 20th century.",
"title": ""
},
{
"paragraph_id": 4,
"text": "There is some disagreement among mathematicians about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as \"beautiful, damn hard, increasingly useful. That's fractals.\" More formally, in 1982 Mandelbrot defined fractal as follows: \"A fractal is by definition a set for which the Hausdorff–Besicovitch dimension strictly exceeds the topological dimension.\" Later, seeing this as too restrictive, he simplified and expanded the definition to this: \"A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole.\" Still later, Mandelbrot proposed \"to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants\".",
"title": ""
},
{
"paragraph_id": 5,
"text": "The consensus among mathematicians is that theoretical fractals are infinitely self-similar iterated and detailed mathematical constructs, of which many examples have been formulated and studied. Fractals are not limited to geometric patterns, but can also describe processes in time. Fractal patterns with various degrees of self-similarity have been rendered or studied in visual, physical, and aural media and found in nature, technology, art, and architecture. Fractals are of particular relevance in the field of chaos theory because they show up in the geometric depictions of most chaotic processes (typically either as attractors or as boundaries between basins of attraction).",
"title": ""
},
{
"paragraph_id": 6,
"text": "The term \"fractal\" was coined by the mathematician Benoît Mandelbrot in 1975. Mandelbrot based it on the Latin frāctus, meaning \"broken\" or \"fractured\", and used it to extend the concept of theoretical fractional dimensions to geometric patterns in nature.",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "The word \"fractal\" often has different connotations for the lay public as opposed to mathematicians, where the public is more likely to be familiar with fractal art than the mathematical concept. The mathematical concept is difficult to define formally, even for mathematicians, but key features can be understood with a little mathematical background.",
"title": "Introduction"
},
{
"paragraph_id": 8,
"text": "The feature of \"self-similarity\", for instance, is easily understood by analogy to zooming in with a lens or other device that zooms in on digital images to uncover finer, previously invisible, new structure. If this is done on fractals, however, no new detail appears; nothing changes and the same pattern repeats over and over, or for some fractals, nearly the same pattern reappears over and over. Self-similarity itself is not necessarily counter-intuitive (e.g., people have pondered self-similarity informally such as in the infinite regress in parallel mirrors or the homunculus, the little man inside the head of the little man inside the head ...). The difference for fractals is that the pattern reproduced must be detailed.",
"title": "Introduction"
},
{
"paragraph_id": 9,
"text": "This idea of being detailed relates to another feature that can be understood without much mathematical background: Having a fractal dimension greater than its topological dimension, for instance, refers to how a fractal scales compared to how geometric shapes are usually perceived. A straight line, for instance, is conventionally understood to be one-dimensional; if such a figure is rep-tiled into pieces each 1/3 the length of the original, then there are always three equal pieces. A solid square is understood to be two-dimensional; if such a figure is rep-tiled into pieces each scaled down by a factor of 1/3 in both dimensions, there are a total of 3 = 9 pieces.",
"title": "Introduction"
},
{
"paragraph_id": 10,
"text": "We see that for ordinary self-similar objects, being n-dimensional means that when it is rep-tiled into pieces each scaled down by a scale-factor of 1/r, there are a total of r pieces. Now, consider the Koch curve. It can be rep-tiled into four sub-copies, each scaled down by a scale-factor of 1/3. So, strictly by analogy, we can consider the \"dimension\" of the Koch curve as being the unique real number D that satisfies 3 = 4. This number is called the fractal dimension of the Koch curve; it is not the conventionally perceived dimension of a curve. In general, a key property of fractals is that the fractal dimension differs from the conventionally understood dimension (formally called the topological dimension).",
"title": "Introduction"
},
{
"paragraph_id": 11,
"text": "This also leads to understanding a third feature, that fractals as mathematical equations are \"nowhere differentiable\". In a concrete sense, this means fractals cannot be measured in traditional ways. To elaborate, in trying to find the length of a wavy non-fractal curve, one could find straight segments of some measuring tool small enough to lay end to end over the waves, where the pieces could get small enough to be considered to conform to the curve in the normal manner of measuring with a tape measure. But in measuring an infinitely \"wiggly\" fractal curve such as the Koch snowflake, one would never find a small enough straight segment to conform to the curve, because the jagged pattern would always re-appear, at arbitrarily small scales, essentially pulling a little more of the tape measure into the total length measured each time one attempted to fit it tighter and tighter to the curve. The result is that one must need infinite tape to perfectly cover the entire curve, i.e. the snowflake has an infinite perimeter.",
"title": "Introduction"
},
{
"paragraph_id": 12,
"text": "",
"title": "Introduction"
},
{
"paragraph_id": 13,
"text": "The history of fractals traces a path from chiefly theoretical studies to modern applications in computer graphics, with several notable people contributing canonical fractal forms along the way. A common theme in traditional African architecture is the use of fractal scaling, whereby small parts of the structure tend to look similar to larger parts, such as a circular village made of circular houses. According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity (although he made the mistake of thinking that only the straight line was self-similar in this sense).",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In his writings, Leibniz used the term \"fractional exponents\", but lamented that \"Geometry\" did not yet know of them. Indeed, according to various historical accounts, after that point few mathematicians tackled the issues and the work of those who did remained obscured largely because of resistance to such unfamiliar emerging concepts, which were sometimes referred to as mathematical \"monsters\". Thus, it was not until two centuries had passed that on July 18, 1872 Karl Weierstrass presented the first definition of a function with a graph that would today be considered a fractal, having the non-intuitive property of being everywhere continuous but nowhere differentiable at the Royal Prussian Academy of Sciences.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In addition, the quotient difference becomes arbitrarily large as the summation index increases. Not long after that, in 1883, Georg Cantor, who attended lectures by Weierstrass, published examples of subsets of the real line known as Cantor sets, which had unusual properties and are now recognized as fractals. Also in the last part of that century, Felix Klein and Henri Poincaré introduced a category of fractal that has come to be called \"self-inverse\" fractals.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "",
"title": "History"
},
{
"paragraph_id": 17,
"text": "One of the next milestones came in 1904, when Helge von Koch, extending ideas of Poincaré and dissatisfied with Weierstrass's abstract and analytic definition, gave a more geometric definition including hand-drawn images of a similar function, which is now called the Koch snowflake. Another milestone came a decade later in 1915, when Wacław Sierpiński constructed his famous triangle then, one year later, his carpet. By 1918, two French mathematicians, Pierre Fatou and Gaston Julia, though working independently, arrived essentially simultaneously at results describing what is now seen as fractal behaviour associated with mapping complex numbers and iterative functions and leading to further ideas about attractors and repellors (i.e., points that attract or repel other points), which have become very important in the study of fractals.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Very shortly after that work was submitted, by March 1918, Felix Hausdorff expanded the definition of \"dimension\", significantly for the evolution of the definition of fractals, to allow for sets to have non-integer dimensions. The idea of self-similar curves was taken further by Paul Lévy, who, in his 1938 paper Plane or Space Curves and Surfaces Consisting of Parts Similar to the Whole, described a new fractal curve, the Lévy C curve.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Different researchers have postulated that without the aid of modern computer graphics, early investigators were limited to what they could depict in manual drawings, so lacked the means to visualize the beauty and appreciate some of the implications of many of the patterns they had discovered (the Julia set, for instance, could only be visualized through a few iterations as very simple drawings). That changed, however, in the 1960s, when Benoit Mandelbrot started writing about self-similarity in papers such as How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension, which built on earlier work by Lewis Fry Richardson.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In 1975, Mandelbrot solidified hundreds of years of thought and mathematical development in coining the word \"fractal\" and illustrated his mathematical definition with striking computer-constructed visualizations. These images, such as of his canonical Mandelbrot set, captured the popular imagination; many of them were based on recursion, leading to the popular meaning of the term \"fractal\".",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In 1980, Loren Carpenter gave a presentation at the SIGGRAPH where he introduced his software for generating and rendering fractally generated landscapes.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "",
"title": "History"
},
{
"paragraph_id": 24,
"text": "One often cited description that Mandelbrot published to describe geometric fractals is \"a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole\"; this is generally helpful but limited. Authors disagree on the exact definition of fractal, but most usually elaborate on the basic ideas of self-similarity and the unusual relationship fractals have with the space they are embedded in.",
"title": "Definition and characteristics"
},
{
"paragraph_id": 25,
"text": "One point agreed on is that fractal patterns are characterized by fractal dimensions, but whereas these numbers quantify complexity (i.e., changing detail with changing scale), they neither uniquely describe nor specify details of how to construct particular fractal patterns. In 1975 when Mandelbrot coined the word \"fractal\", he did so to denote an object whose Hausdorff–Besicovitch dimension is greater than its topological dimension. However, this requirement is not met by space-filling curves such as the Hilbert curve.",
"title": "Definition and characteristics"
},
{
"paragraph_id": 26,
"text": "Because of the trouble involved in finding one definition for fractals, some argue that fractals should not be strictly defined at all. According to Falconer, fractals should be only generally characterized by a gestalt of the following features;",
"title": "Definition and characteristics"
},
{
"paragraph_id": 27,
"text": "As a group, these criteria form guidelines for excluding certain cases, such as those that may be self-similar without having other typically fractal features. A straight line, for instance, is self-similar but not fractal because it lacks detail, and is easily described in Euclidean language without a need for recursion.",
"title": "Definition and characteristics"
},
{
"paragraph_id": 28,
"text": "",
"title": "Common techniques for generating fractals"
},
{
"paragraph_id": 29,
"text": "Images of fractals can be created by fractal generating programs. Because of the butterfly effect, a small change in a single variable can have an unpredictable outcome.",
"title": "Common techniques for generating fractals"
},
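The preceding entry notes that fractal images are produced by fractal-generating programs. As a minimal illustrative sketch (not a description of any particular program mentioned in this article), the following Python snippet renders a coarse ASCII view of the Mandelbrot set using the standard escape-time iteration; the viewport, character ramp, and iteration limit are arbitrary choices.

```python
# Minimal escape-time sketch of the Mandelbrot set, rendered as ASCII art.
# Real fractal-generating programs add colour mapping, smooth iteration
# counts, arbitrary-precision zooming, and so on.

def mandelbrot_rows(width=72, height=28, max_iter=40):
    for row in range(height):
        y = 1.2 - 2.4 * row / (height - 1)        # imaginary part of c
        line = []
        for col in range(width):
            x = -2.1 + 3.0 * col / (width - 1)    # real part of c
            c = complex(x, y)
            z = 0j
            for n in range(max_iter):
                z = z * z + c                     # the quadratic map z -> z^2 + c
                if abs(z) > 2.0:                  # escape radius
                    break
            line.append(" .:-=+*#%@"[min(n * 10 // max_iter, 9)])
        yield "".join(line)

if __name__ == "__main__":
    for line in mandelbrot_rows():
        print(line)
```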
{
"paragraph_id": 30,
"text": "Fractal patterns have been modeled extensively, albeit within a range of scales rather than infinitely, owing to the practical limits of physical time and space. Models may simulate theoretical fractals or natural phenomena with fractal features. The outputs of the modelling process may be highly artistic renderings, outputs for investigation, or benchmarks for fractal analysis. Some specific applications of fractals to technology are listed elsewhere. Images and other outputs of modelling are normally referred to as being \"fractals\" even if they do not have strictly fractal characteristics, such as when it is possible to zoom into a region of the fractal image that does not exhibit any fractal properties. Also, these may include calculation or display artifacts which are not characteristics of true fractals.",
"title": "Applications"
},
{
"paragraph_id": 31,
"text": "Modeled fractals may be sounds, digital images, electrochemical patterns, circadian rhythms, etc. Fractal patterns have been reconstructed in physical 3-dimensional space and virtually, often called \"in silico\" modeling. Models of fractals are generally created using fractal-generating software that implements techniques such as those outlined above. As one illustration, trees, ferns, cells of the nervous system, blood and lung vasculature, and other branching patterns in nature can be modeled on a computer by using recursive algorithms and L-systems techniques.",
"title": "Applications"
},
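The entry above mentions recursive algorithms and L-system techniques for modelling branching patterns. The sketch below illustrates only the string-rewriting core of an L-system; the axiom and rules are a commonly used "fractal plant" example chosen for illustration, not a model taken from this article. Interpreting the resulting instruction string with turtle graphics ('F' draw forward, '+'/'-' turn, '[' and ']' push and pop the drawing state) yields a branching, fern-like figure.

```python
# Minimal Lindenmayer-system (L-system) sketch: repeated string rewriting
# generates the drawing instructions for a self-similar branching figure.

RULES = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}   # a common "fractal plant" rule set

def expand(axiom: str, iterations: int) -> str:
    """Apply the rewriting rules the given number of times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

instructions = expand("X", 4)
print(len(instructions))            # the instruction string grows roughly geometrically
print(instructions[:60] + "...")    # beginning of the turtle-graphics program
```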
{
"paragraph_id": 32,
"text": "The recursive nature of some patterns is obvious in certain examples—a branch from a tree or a frond from a fern is a miniature replica of the whole: not identical, but similar in nature. Similarly, random fractals have been used to describe/create many highly irregular real-world objects. A limitation of modeling fractals is that resemblance of a fractal model to a natural phenomenon does not prove that the phenomenon being modeled is formed by a process similar to the modeling algorithms.",
"title": "Applications"
},
{
"paragraph_id": 33,
"text": "",
"title": "Applications"
},
{
"paragraph_id": 34,
"text": "Approximate fractals found in nature display self-similarity over extended, but finite, scale ranges. The connection between fractals and leaves, for instance, is currently being used to determine how much carbon is contained in trees. Phenomena known to have fractal features include:",
"title": "Applications"
},
{
"paragraph_id": 35,
"text": "Fractals often appear in the realm of living organisms where they arise through branching processes and other complex pattern formation. Ian Wong and co-workers have shown that migrating cells can form fractals by clustering and branching. Nerve cells function through processes at the cell surface, with phenomena that are enhanced by largely increasing the surface to volume ratio. As a consequence nerve cells often are found to form into fractal patterns. These processes are crucial in cell physiology and different pathologies.",
"title": "Applications"
},
{
"paragraph_id": 36,
"text": "Multiple subcellular structures also are found to assemble into fractals. Diego Krapf has shown that through branching processes the actin filaments in human cells assemble into fractal patterns. Similarly Matthias Weiss showed that the endoplasmic reticulum displays fractal features. The current understanding is that fractals are ubiquitous in cell biology, from proteins, to organelles, to whole cells.",
"title": "Applications"
},
{
"paragraph_id": 37,
"text": "Since 1999 numerous scientific groups have performed fractal analysis on over 50 paintings created by Jackson Pollock by pouring paint directly onto horizontal canvasses.",
"title": "Applications"
},
{
"paragraph_id": 38,
"text": "Recently, fractal analysis has been used to achieve a 93% success rate in distinguishing real from imitation Pollocks. Cognitive neuroscientists have shown that Pollock's fractals induce the same stress-reduction in observers as computer-generated fractals and Nature's fractals.",
"title": "Applications"
},
{
"paragraph_id": 39,
"text": "Decalcomania, a technique used by artists such as Max Ernst, can produce fractal-like patterns. It involves pressing paint between two surfaces and pulling them apart.",
"title": "Applications"
},
{
"paragraph_id": 40,
"text": "Cyberneticist Ron Eglash has suggested that fractal geometry and mathematics are prevalent in African art, games, divination, trade, and architecture. Circular houses appear in circles of circles, rectangular houses in rectangles of rectangles, and so on. Such scaling patterns can also be found in African textiles, sculpture, and even cornrow hairstyles. Hokky Situngkir also suggested the similar properties in Indonesian traditional art, batik, and ornaments found in traditional houses.",
"title": "Applications"
},
{
"paragraph_id": 41,
"text": "Ethnomathematician Ron Eglash has discussed the planned layout of Benin city using fractals as the basis, not only in the city itself and the villages but even in the rooms of houses. He commented that \"When Europeans first came to Africa, they considered the architecture very disorganised and thus primitive. It never occurred to them that the Africans might have been using a form of mathematics that they hadn't even discovered yet.\"",
"title": "Applications"
},
{
"paragraph_id": 42,
"text": "In a 1996 interview with Michael Silverblatt, David Foster Wallace explained that the structure of the first draft of Infinite Jest he gave to his editor Michael Pietsch was inspired by fractals, specifically the Sierpinski triangle (a.k.a. Sierpinski gasket), but that the edited novel is \"more like a lopsided Sierpinsky Gasket\".",
"title": "Applications"
},
{
"paragraph_id": 43,
"text": "Some works by the Dutch artist M. C. Escher, such as Circle Limit III, contain shapes repeated to infinity that become smaller and smaller as they get near to the edges, in a pattern that would always look the same if zoomed in.",
"title": "Applications"
},
{
"paragraph_id": 44,
"text": "Aesthetics and Psychological Effects of Fractal Based Design: Highly prevalent in nature, fractal patterns possess self-similar components that repeat at varying size scales. The perceptual experience of human-made environments can be impacted with inclusion of these natural patterns. Previous work has demonstrated consistent trends in preference for and complexity estimates of fractal patterns. However, limited information has been gathered on the impact of other visual judgments. Here we examine the aesthetic and perceptual experience of fractal ‘global-forest’ designs already installed in humanmade spaces and demonstrate how fractal pattern components are associated with positive psychological experiences that can be utilized to promote occupant well-being. These designs are composite fractal patterns consisting of individual fractal ‘tree-seeds’ which combine to create a ‘global fractal forest.’ The local ‘tree-seed’ patterns, global configuration of tree-seed locations, and overall resulting ‘global-forest’ patterns have fractal qualities. These designs span multiple mediums yet are all intended to lower occupant stress without detracting from the function and overall design of the space. In this series of studies, we first establish divergent relationships between various visual attributes, with pattern complexity, preference, and engagement ratings increasing with fractal complexity compared to ratings of refreshment and relaxation which stay the same or decrease with complexity. Subsequently, we determine that the local constituent fractal (‘tree-seed’) patterns contribute to the perception of the overall fractal design, and address how to balance aesthetic and psychological effects (such as individual experiences of perceived engagement and relaxation) in fractal design installations. This set of studies demonstrates that fractal preference is driven by a balance between increased arousal (desire for engagement and complexity) and decreased tension (desire for relaxation or refreshment). Installations of these composite mid-high complexity ‘global-forest’ patterns consisting of ‘tree-seed’ components balance these contrasting needs, and can serve as a practical implementation of biophilic patterns in human-made environments to promote occupant well-being.",
"title": "Applications"
},
{
"paragraph_id": 45,
"text": "",
"title": "Applications"
},
{
"paragraph_id": 46,
"text": "Humans appear to be especially well-adapted to processing fractal patterns with D values between 1.3 and 1.5. When humans view fractal patterns with D values between 1.3 and 1.5, this tends to reduce physiological stress.",
"title": "Applications"
}
]
| In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar. Fractal geometry lies within the mathematical branch of measure theory. One way that fractals are different from finite geometric figures is how they scale. Doubling the edge lengths of a filled polygon multiplies its area by four, which is two raised to the power of two. Likewise, if the radius of a filled sphere is doubled, its volume scales by eight, which is two to the power of three. However, if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer and is in general greater than its conventional dimension. This power is called the fractal dimension of the geometric object, to distinguish it from the conventional dimension. Analytically, many fractals are nowhere differentiable. An infinite fractal curve can be conceived of as winding through space differently from an ordinary line – although it is still topologically 1-dimensional, its fractal dimension indicates that it locally fills space more efficiently than an ordinary line. Starting in the 17th century with notions of recursion, fractals have moved through increasingly rigorous mathematical treatment to the study of continuous but not differentiable functions in the 19th century by the seminal work of Bernard Bolzano, Bernhard Riemann, and Karl Weierstrass, and on to the coining of the word fractal in the 20th century with a subsequent burgeoning of interest in fractals and computer-based modelling in the 20th century. There is some disagreement among mathematicians about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful. That's fractals." More formally, in 1982 Mandelbrot defined fractal as follows: "A fractal is by definition a set for which the Hausdorff–Besicovitch dimension strictly exceeds the topological dimension." Later, seeing this as too restrictive, he simplified and expanded the definition to this: "A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is a reduced-size copy of the whole." Still later, Mandelbrot proposed "to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants". The consensus among mathematicians is that theoretical fractals are infinitely self-similar iterated and detailed mathematical constructs, of which many examples have been formulated and studied. Fractals are not limited to geometric patterns, but can also describe processes in time. Fractal patterns with various degrees of self-similarity have been rendered or studied in visual, physical, and aural media and found in nature, technology, art, and architecture. Fractals are of particular relevance in the field of chaos theory because they show up in the geometric depictions of most chaotic processes. | 2001-07-27T10:18:41Z | 2023-12-19T03:01:37Z | [
"Template:Rp",
"Template:Portal",
"Template:Annotated link",
"Template:Cite OED",
"Template:Citation",
"Template:Webarchive",
"Template:Cite news",
"Template:Lang",
"Template:Div col",
"Template:Div col end",
"Template:Commons",
"Template:Wikibooks",
"Template:Fractals",
"Template:Cite journal",
"Template:Chaos theory",
"Template:Mathematical art",
"Template:Further",
"Template:Authority control",
"Template:Short description",
"Template:Patterns in nature",
"Template:Reflist",
"Template:Cbignore",
"Template:ISBN",
"Template:Use mdy dates",
"Template:Anchor",
"Template:See also",
"Template:Cite web",
"Template:Other uses",
"Template:Convert",
"Template:Main",
"Template:Cite book",
"Template:Doi",
"Template:Isbn"
]
| https://en.wikipedia.org/wiki/Fractal |
10,915 | Fluid | In physics, a fluid is a liquid, gas, or other material that may continuously move and deform (flow) under an applied shear stress, or external force. They have zero shear modulus, or, in simpler terms, are substances which cannot resist any shear force applied to them.
Although the term fluid generally includes both the liquid and gas phases, its definition varies among branches of science. Definitions of solid vary as well, and depending on field, some substances can have both fluid and solid properties. Non-Newtonian fluids like Silly Putty appear to behave similar to a solid when a sudden force is applied. Substances with a very high viscosity such as pitch appear to behave like a solid (see pitch drop experiment) as well. In particle physics, the concept is extended to include fluidic matters other than liquids or gases. A fluid in medicine or biology refers to any liquid constituent of the body (body fluid), whereas "liquid" is not used in this sense. Sometimes liquids given for fluid replacement, either by drinking or by injection, are also called fluids (e.g. "drink plenty of fluids"). In hydraulics, fluid is a term which refers to liquids with certain properties, and is broader than (hydraulic) oils.
Fluids display properties such as:
These properties are typically a function of their inability to support a shear stress in static equilibrium. In contrast, solids respond to shear either with a spring-like restoring force, which means that deformations are reversible, or they require a certain initial stress before they deform (see plasticity).
Solids respond with restoring forces to both shear stresses and to normal stresses—both compressive and tensile. In contrast, ideal fluids only respond with restoring forces to normal stresses, called pressure: fluids can be subjected to both compressive stress, corresponding to positive pressure, and to tensile stress, corresponding to negative pressure. Both solids and liquids also have tensile strengths, which when exceeded in solids makes irreversible deformation and fracture, and in liquids causes the onset of cavitation.
Both solids and liquids have free surfaces, which cost some amount of free energy to form. In the case of solids, the amount of free energy to form a given unit of surface area is called surface energy, whereas for liquids the same quantity is called surface tension. The ability of liquids to flow results in different behaviour in response to surface tension than in solids, although in equilibrium both will try to minimise their surface energy: liquids tend to form rounded droplets, whereas pure solids tend to form crystals. Gases do not have free surfaces, and freely diffuse.
In a solid, shear stress is a function of strain, but in a fluid, shear stress is a function of strain rate. A consequence of this behavior is Pascal's law which describes the role of pressure in characterizing a fluid's state.
The behavior of fluids can be described by the Navier–Stokes equations—a set of partial differential equations which are based on:
The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics depending on whether the fluid is in motion.
Depending on the relationship between shear stress and the rate of strain and its derivatives, fluids can be characterized as one of the following:
Newtonian fluids follow Newton's law of viscosity and may be called viscous fluids.
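As a small numerical illustration of Newton's law of viscosity (shear stress proportional to the rate of shear strain), the sketch below uses an approximate dynamic viscosity for water and an assumed velocity gradient; both numbers are illustrative rather than taken from the text.

```python
# Newton's law of viscosity for a Newtonian fluid: tau = mu * (du/dy).

mu = 1.0e-3      # dynamic viscosity of water near 20 C, in Pa*s (approximate)
du_dy = 100.0    # assumed velocity gradient across the flow, in 1/s

tau = mu * du_dy                         # shear stress in pascals
print(f"shear stress = {tau:.3f} Pa")    # 0.100 Pa
```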
Fluids may be classified by their compressibility:
Newtonian and incompressible fluids do not actually exist; they are idealizations assumed in order to simplify theoretical treatment. Hypothetical fluids that completely ignore the effects of viscosity and compressibility are called perfect fluids. | [
{
"paragraph_id": 0,
"text": "In physics, a fluid is a liquid, gas, or other material that may continuously move and deform (flow) under an applied shear stress, or external force. They have zero shear modulus, or, in simpler terms, are substances which cannot resist any shear force applied to them.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Although the term fluid generally includes both the liquid and gas phases, its definition varies among branches of science. Definitions of solid vary as well, and depending on field, some substances can have both fluid and solid properties. Non-Newtonian fluids like Silly Putty appear to behave similar to a solid when a sudden force is applied. Substances with a very high viscosity such as pitch appear to behave like a solid (see pitch drop experiment) as well. In particle physics, the concept is extended to include fluidic matters other than liquids or gases. A fluid in medicine or biology refers to any liquid constituent of the body (body fluid), whereas \"liquid\" is not used in this sense. Sometimes liquids given for fluid replacement, either by drinking or by injection, are also called fluids (e.g. \"drink plenty of fluids\"). In hydraulics, fluid is a term which refers to liquids with certain properties, and is broader than (hydraulic) oils.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Fluids display properties such as:",
"title": "Physics"
},
{
"paragraph_id": 3,
"text": "These properties are typically a function of their inability to support a shear stress in static equilibrium. In contrast, solids respond to shear either with a spring-like restoring force, which means that deformations are reversible, or they require a certain initial stress before they deform (see plasticity).",
"title": "Physics"
},
{
"paragraph_id": 4,
"text": "Solids respond with restoring forces to both shear stresses and to normal stresses—both compressive and tensile. In contrast, ideal fluids only respond with restoring forces to normal stresses, called pressure: fluids can be subjected to both compressive stress, corresponding to positive pressure, and to tensile stress, corresponding to negative pressure. Both solids and liquids also have tensile strengths, which when exceeded in solids makes irreversible deformation and fracture, and in liquids causes the onset of cavitation.",
"title": "Physics"
},
{
"paragraph_id": 5,
"text": "Both solids and liquids have free surfaces, which cost some amount of free energy to form. In the case of solids, the amount of free energy to form a given unit of surface area is called surface energy, whereas for liquids the same quantity is called surface tension. The ability of liquids to flow results in different behaviour in response to surface tension than in solids, although in equilibrium both will try to minimise their surface energy: liquids tend to form rounded droplets, whereas pure solids tend to form crystals. Gases do not have free surfaces, and freely diffuse.",
"title": "Physics"
},
{
"paragraph_id": 6,
"text": "In a solid, shear stress is a function of strain, but in a fluid, shear stress is a function of strain rate. A consequence of this behavior is Pascal's law which describes the role of pressure in characterizing a fluid's state.",
"title": "Modelling"
},
{
"paragraph_id": 7,
"text": "The behavior of fluids can be described by the Navier–Stokes equations—a set of partial differential equations which are based on:",
"title": "Modelling"
},
{
"paragraph_id": 8,
"text": "The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics depending on whether the fluid is in motion.",
"title": "Modelling"
},
{
"paragraph_id": 9,
"text": "Depending on the relationship between shear stress and the rate of strain and its derivatives, fluids can be characterized as one of the following:",
"title": "Modelling"
},
{
"paragraph_id": 10,
"text": "Newtonian fluids follow Newton's law of viscosity and may be called viscous fluids.",
"title": "Modelling"
},
{
"paragraph_id": 11,
"text": "Fluids may be classified by their compressibility:",
"title": "Modelling"
},
{
"paragraph_id": 12,
"text": "Newtonian and incompressible fluids do not actually exist, but are assumed to be for theoretical settlement. Virtual fluids that completely ignore the effects of viscosity and compressibility are called perfect fluids.",
"title": "Modelling"
}
]
| In physics, a fluid is a liquid, gas, or other material that may continuously move and deform (flow) under an applied shear stress, or external force. They have zero shear modulus, or, in simpler terms, are substances which cannot resist any shear force applied to them. Although the term fluid generally includes both the liquid and gas phases, its definition varies among branches of science. Definitions of solid vary as well, and depending on field, some substances can have both fluid and solid properties. Non-Newtonian fluids like Silly Putty appear to behave similar to a solid when a sudden force is applied. Substances with a very high viscosity such as pitch appear to behave like a solid as well. In particle physics, the concept is extended to include fluidic matters other than liquids or gases. A fluid in medicine or biology refers to any liquid constituent of the body, whereas "liquid" is not used in this sense. Sometimes liquids given for fluid replacement, either by drinking or by injection, are also called fluids. In hydraulics, fluid is a term which refers to liquids with certain properties, and is broader than (hydraulic) oils. | 2001-07-27T19:39:15Z | 2023-12-14T00:26:19Z | [
"Template:Cite web",
"Template:Cite journal",
"Template:Authority control",
"Template:Short description",
"Template:Refimprove",
"Template:Main",
"Template:Reflist",
"Template:Cite book",
"Template:About",
"Template:Distinguish",
"Template:Continuum mechanics",
"Template:Cite encyclopedia"
]
| https://en.wikipedia.org/wiki/Fluid |
10,916 | FAQ | A frequently asked questions (FAQ) list is often used in articles, websites, email lists, and online forums where common questions tend to recur, for example through posts or queries by new users related to common knowledge gaps. The purpose of a FAQ is generally to provide information on frequent questions or concerns; however, the format is a useful means of organizing information, and text consisting of questions and their answers may thus be called a FAQ regardless of whether the questions are actually frequently asked.
Since the acronym FAQ originated in textual media, its pronunciation varies. FAQ can be pronounced as an initialism, "F-A-Q", or as an acronym, "FAQ". Web designers often label a single list of questions as a "FAQ", such as on Google Search, while using "FAQs" to denote multiple lists of questions such as on United States Treasury sites. Use of "FAQ" to refer to a single frequently asked question, in and of itself, is less common.
While the name may be recent, the FAQ format itself is quite old. For example, Matthew Hopkins wrote The Discovery of Witches in 1648 as a list of questions and answers, introduced as "Certain Queries answered" . Many old catechisms are in a question-and-answer (Q&A) format. Summa Theologica, written by Thomas Aquinas in the second half of the 13th century, is a series of common questions about Christianity to which he wrote a series of replies. Plato's dialogues are even older.
The "FAQ" is an Internet textual tradition originating from the technical limitations of early mailing lists from NASA in the early 1980s. The first FAQ developed over several pre-Web years, starting from 1982 when storage was expensive. On ARPANET's SPACE mailing list, the presumption was that new users would download archived past messages through FTP. In practice this rarely happened, and the users tended to post questions to the mailing list instead of searching its archives. Repeating the "right" answers became tedious, and went against developing netiquette. A series of different measures were set up by loosely affiliated groups of computer system administrators, from regularly posted messages to netlib-like query email daemons. The acronym FAQ was developed between 1982 and 1985 by Eugene Miya of NASA for the SPACE mailing list. The format was then picked up on other mailing lists and Usenet newsgroups. Posting frequency changed to monthly, and finally weekly and daily across a variety of mailing lists and newsgroups. The first person to post a weekly FAQ was Jef Poskanzer to the Usenet net.graphics/comp.graphics newsgroups. Eugene Miya experimented with the first daily FAQ.
In some cases, informative documents not in the traditional FAQ style have also been described as FAQs, particularly the video game FAQ, which is often a detailed description of gameplay, including tips, secrets, and beginning-to-end guidance. Rarely are videogame FAQs in a question-and-answer format, although they may contain a short section of questions and answers.
Over time, the accumulated FAQs across all Usenet newsgroups sparked the creation of the "*.answers" moderated newsgroups such as comp.answers, misc.answers and sci.answers for crossposting and collecting FAQ across respective comp.*, misc.*, sci.* newsgroups.
The FAQ has become an important component of websites, either as a stand-alone page or as a website section with multiple subpages per question or topic. Embedded links to FAQ pages have become commonplace in website navigation bars, bodies, or footers. The FAQ page is an important consideration in web design, in order to achieve several goals of customer service and search engine optimization (SEO), including
Some content providers discourage the use of FAQs in place of restructuring content under logical headings. For example, the UK Government Digital Service does not use FAQs. | [
{
"paragraph_id": 0,
"text": "A frequently asked questions (FAQ) list is often used in articles, websites, email lists, and online forums where common questions tend to recur, for example through posts or queries by new users related to common knowledge gaps. The purpose of a FAQ is generally to provide information on frequent questions or concerns; however, the format is a useful means of organizing information, and text consisting of questions and their answers may thus be called a FAQ regardless of whether the questions are actually frequently asked.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Since the acronym FAQ originated in textual media, its pronunciation varies. FAQ can be pronounced as an initialism, \"F-A-Q\", or as an acronym, \"FAQ\". Web designers often label a single list of questions as a \"FAQ\", such as on Google Search, while using \"FAQs\" to denote multiple lists of questions such as on United States Treasury sites. Use of \"FAQ\" to refer to a single frequently asked question, in and of itself, is less common.",
"title": ""
},
{
"paragraph_id": 2,
"text": "While the name may be recent, the FAQ format itself is quite old. For example, Matthew Hopkins wrote The Discovery of Witches in 1648 as a list of questions and answers, introduced as \"Certain Queries answered\" . Many old catechisms are in a question-and-answer (Q&A) format. Summa Theologica, written by Thomas Aquinas in the second half of the 13th century, is a series of common questions about Christianity to which he wrote a series of replies. Plato's dialogues are even older.",
"title": "Origins"
},
{
"paragraph_id": 3,
"text": "The \"FAQ\" is an Internet textual tradition originating from the technical limitations of early mailing lists from NASA in the early 1980s. The first FAQ developed over several pre-Web years, starting from 1982 when storage was expensive. On ARPANET's SPACE mailing list, the presumption was that new users would download archived past messages through FTP. In practice this rarely happened, and the users tended to post questions to the mailing list instead of searching its archives. Repeating the \"right\" answers became tedious, and went against developing netiquette. A series of different measures were set up by loosely affiliated groups of computer system administrators, from regularly posted messages to netlib-like query email daemons. The acronym FAQ was developed between 1982 and 1985 by Eugene Miya of NASA for the SPACE mailing list. The format was then picked up on other mailing lists and Usenet newsgroups. Posting frequency changed to monthly, and finally weekly and daily across a variety of mailing lists and newsgroups. The first person to post a weekly FAQ was Jef Poskanzer to the Usenet net.graphics/comp.graphics newsgroups. Eugene Miya experimented with the first daily FAQ.",
"title": "Origins"
},
{
"paragraph_id": 4,
"text": "In some cases, informative documents not in the traditional FAQ style have also been described as FAQs, particularly the video game FAQ, which is often a detailed description of gameplay, including tips, secrets, and beginning-to-end guidance. Rarely are videogame FAQs in a question-and-answer format, although they may contain a short section of questions and answers.",
"title": "Modern developments"
},
{
"paragraph_id": 5,
"text": "Over time, the accumulated FAQs across all Usenet newsgroups sparked the creation of the \"*.answers\" moderated newsgroups such as comp.answers, misc.answers and sci.answers for crossposting and collecting FAQ across respective comp.*, misc.*, sci.* newsgroups.",
"title": "Modern developments"
},
{
"paragraph_id": 6,
"text": "The FAQ has become an important component of websites, either as a stand-alone page or as a website section with multiple subpages per question or topic. Embedded links to FAQ pages have become commonplace in website navigation bars, bodies, or footers. The FAQ page is an important consideration in web design, in order to achieve several goals of customer service and search engine optimization (SEO), including",
"title": "Modern developments"
},
{
"paragraph_id": 7,
"text": "Some content providers discourage the use of FAQs in place of restructuring content under logical headings. For example, the UK Government Digital Service does not use FAQs.",
"title": "Criticism"
}
]
| A frequently asked questions (FAQ) list is often used in articles, websites, email lists, and online forums where common questions tend to recur, for example through posts or queries by new users related to common knowledge gaps. The purpose of a FAQ is generally to provide information on frequent questions or concerns; however, the format is a useful means of organizing information, and text consisting of questions and their answers may thus be called a FAQ regardless of whether the questions are actually frequently asked. Since the acronym FAQ originated in textual media, its pronunciation varies. FAQ can be pronounced as an initialism, "F-A-Q", or as an acronym, "FAQ". Web designers often label a single list of questions as a "FAQ", such as on Google Search, while using "FAQs" to denote multiple lists of questions such as on United States Treasury sites. Use of "FAQ" to refer to a single frequently asked question, in and of itself, is less common. | 2001-07-27T22:56:52Z | 2023-12-30T17:59:55Z | [
"Template:Webarchive",
"Template:Cite web",
"Template:Wiktionary",
"Template:Hatgrp",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite newsgroup",
"Template:Short description",
"Template:Cn"
]
| https://en.wikipedia.org/wiki/FAQ |
10,918 | Fibonacci sequence | In mathematics, the Fibonacci sequence is a sequence in which each number is the sum of the two preceding ones. Numbers that are part of the Fibonacci sequence are known as Fibonacci numbers, commonly denoted Fn . The sequence commonly starts from 0 and 1, although some authors start the sequence from 1 and 1 or sometimes (as did Fibonacci) from 1 and 2. Starting from 0 and 1, the sequence begins
The Fibonacci numbers were first described in Indian mathematics as early as 200 BC in work by Pingala on enumerating possible patterns of Sanskrit poetry formed from syllables of two lengths. They are named after the Italian mathematician Leonardo of Pisa, also known as Fibonacci, who introduced the sequence to Western European mathematics in his 1202 book Liber Abaci.
Fibonacci numbers appear unexpectedly often in mathematics, so much so that there is an entire journal dedicated to their study, the Fibonacci Quarterly. Applications of Fibonacci numbers include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, and the arrangement of a pine cone's bracts, though they do not occur in all species.
Fibonacci numbers are also strongly related to the golden ratio: Binet's formula expresses the nth Fibonacci number in terms of n and the golden ratio, and implies that the ratio of two consecutive Fibonacci numbers tends to the golden ratio as n increases. Fibonacci numbers are also closely related to Lucas numbers, which obey the same recurrence relation and with the Fibonacci numbers form a complementary pair of Lucas sequences.
The Fibonacci numbers may be defined by the recurrence relation
and
for n > 1.
Under some older definitions, the value F0 = 0 is omitted, so that the sequence starts with F1 = F2 = 1, and the recurrence Fn = Fn−1 + Fn−2 is valid for n > 2.
The first 20 Fibonacci numbers Fn, for n = 0 through 19, are 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, and 4181.
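A direct way to see the recurrence in action is to iterate it. The short Python sketch below (illustrative only) reproduces the terms listed above by repeatedly replacing the pair (Fn, Fn+1) with (Fn+1, Fn + Fn+1).

```python
# Iterate the defining recurrence F(n) = F(n-1) + F(n-2) from F0 = 0, F1 = 1.

def fibonacci(count: int) -> list[int]:
    a, b = 0, 1
    terms = []
    for _ in range(count):
        terms.append(a)
        a, b = b, a + b    # (F(n), F(n+1)) -> (F(n+1), F(n+2))
    return terms

print(fibonacci(20))
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181]
```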
The Fibonacci sequence appears in Indian mathematics, in connection with Sanskrit prosody. In the Sanskrit poetic tradition, there was interest in enumerating all patterns of long (L) syllables of 2 units duration, juxtaposed with short (S) syllables of 1 unit duration. Counting the different patterns of successive L and S with a given total duration results in the Fibonacci numbers: the number of patterns of duration m units is Fm+1.
Knowledge of the Fibonacci sequence was expressed as early as Pingala (c. 450 BC–200 BC). Singh cites Pingala's cryptic formula misrau cha ("the two are mixed") and scholars who interpret it in context as saying that the number of patterns for m beats (Fm+1) is obtained by adding one [S] to the Fm cases and one [L] to the Fm−1 cases. Bharata Muni also expresses knowledge of the sequence in the Natya Shastra (c. 100 BC–c. 350 AD). However, the clearest exposition of the sequence arises in the work of Virahanka (c. 700 AD), whose own work is lost, but is available in a quotation by Gopala (c. 1135):
Variations of two earlier meters [is the variation]... For example, for [a meter of length] four, variations of meters of two [and] three being mixed, five happens. [works out examples 8, 13, 21]... In this way, the process should be followed in all mātrā-vṛttas [prosodic combinations].
Hemachandra (c. 1150) is credited with knowledge of the sequence as well, writing that "the sum of the last and the one before the last is the number ... of the next mātrā-vṛtta."
The Fibonacci sequence first appears in the book Liber Abaci (The Book of Calculation, 1202) by Fibonacci where it is used to calculate the growth of rabbit populations. Fibonacci considers the growth of an idealized (biologically unrealistic) rabbit population, assuming that: a newly born breeding pair of rabbits are put in a field; each breeding pair mates at the age of one month, and at the end of their second month they always produce another pair of rabbits; and rabbits never die, but continue breeding forever. Fibonacci posed the puzzle: how many pairs will there be in one year?
At the end of the nth month, the number of pairs of rabbits is equal to the number of mature pairs (that is, the number of pairs in month n – 2) plus the number of pairs alive last month (month n – 1). The number in the nth month is the nth Fibonacci number.
The name "Fibonacci sequence" was first used by the 19th-century number theorist Édouard Lucas.
Like every sequence defined by a linear recurrence with constant coefficients, the Fibonacci numbers have a closed-form expression. It has become known as Binet's formula, named after French mathematician Jacques Philippe Marie Binet, though it was already known by Abraham de Moivre and Daniel Bernoulli:
where
is the golden ratio, and ψ is its conjugate:
Since {\displaystyle \psi =-\varphi ^{-1}}, this formula can also be written as
To see the relation between the sequence and these constants, note that φ and ψ are both solutions of the equation {\textstyle x^{2}=x+1} and thus {\displaystyle x^{n}=x^{n-1}+x^{n-2},} so the powers of φ and ψ satisfy the Fibonacci recursion. In other words,
It follows that for any values a and b, the sequence defined by
satisfies the same recurrence,
If a and b are chosen so that U0 = 0 and U1 = 1 then the resulting sequence Un must be the Fibonacci sequence. This is the same as requiring a and b satisfy the system of equations:
which has solution
producing the required formula.
Taking the starting values U0 and U1 to be arbitrary constants, a more general solution is:
where
Since {\textstyle \left|{\frac {\psi ^{n}}{\sqrt {5}}}\right|<{\frac {1}{2}}} for all n ≥ 0, the number Fn is the closest integer to {\displaystyle {\frac {\varphi ^{n}}{\sqrt {5}}}}. Therefore, it can be found by rounding, using the nearest integer function:
In fact, the rounding error is very small, being less than 0.1 for n ≥ 4, and less than 0.01 for n ≥ 8. This formula is easily inverted to find an index of a Fibonacci number F:
Instead using the floor function gives the largest index of a Fibonacci number that is not greater than F:
where {\displaystyle \log _{\varphi }(x)=\ln(x)/\ln(\varphi )=\log _{10}(x)/\log _{10}(\varphi )}, {\displaystyle \ln(\varphi )=0.481211\ldots }, and {\displaystyle \log _{10}(\varphi )=0.208987\ldots }.
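The rounding formula and the floor/logarithm index formula above translate directly into code. The sketch below is illustrative only; because it relies on double-precision floating point, it is exact only for moderate indices (roughly n up to 70), a limitation of the sketch rather than of the formulas.

```python
# Computation by rounding: F(n) is the nearest integer to phi**n / sqrt(5),
# and the floor/log form recovers the largest index n with F(n) <= f.

import math

PHI = (1 + math.sqrt(5)) / 2

def fib_round(n: int) -> int:
    return round(PHI ** n / math.sqrt(5))

def largest_index_not_above(f: int) -> int:
    """Largest n with F(n) <= f, for f >= 1."""
    return math.floor(math.log(f * math.sqrt(5) + 0.5, PHI))

print([fib_round(n) for n in range(10)])     # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(largest_index_not_above(21))           # 8, since F(8) = 21
print(largest_index_not_above(22))           # still 8
```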
Since Fn is asymptotic to {\displaystyle \varphi ^{n}/{\sqrt {5}}}, the number of digits in Fn is asymptotic to {\displaystyle n\log _{10}\varphi \approx 0.2090\,n}. As a consequence, for every integer d > 1 there are either 4 or 5 Fibonacci numbers with d decimal digits.
More generally, in the base b representation, the number of digits in Fn is asymptotic to {\displaystyle n\log _{b}\varphi ={\frac {n\log \varphi }{\log b}}.}
Johannes Kepler observed that the ratio of consecutive Fibonacci numbers converges. He wrote that "as 5 is to 8 so is 8 to 13, practically, and as 8 is to 13, so is 13 to 21 almost", and concluded that these ratios approach the golden ratio φ:
This convergence holds regardless of the starting values U0 and U1, unless U1 = −U0/φ. This can be verified using Binet's formula. For example, the initial values 3 and 2 generate the sequence 3, 2, 5, 7, 12, 19, 31, 50, 81, 131, 212, 343, 555, ... . The ratio of consecutive terms in this sequence shows the same convergence towards the golden ratio.
In general, {\displaystyle \lim _{n\to \infty }{\frac {F_{n+m}}{F_{n}}}=\varphi ^{m}}, because the ratios between consecutive Fibonacci numbers approach φ.
Since the golden ratio satisfies the equation
this expression can be used to decompose higher powers {\displaystyle \varphi ^{n}} as a linear function of lower powers, which in turn can be decomposed all the way down to a linear combination of φ and 1. The resulting recurrence relationships yield Fibonacci numbers as the linear coefficients:
This equation can be proved by induction on n ≥ 1:
For ψ = −1/φ, it is likewise the case that {\displaystyle \psi ^{2}=\psi +1} and that
These expressions are also true for n < 1 if the Fibonacci sequence Fn is extended to negative integers using the Fibonacci rule {\displaystyle F_{n}=F_{n+2}-F_{n+1}.}
Binet's formula provides a proof that a positive integer x is a Fibonacci number if and only if at least one of {\displaystyle 5x^{2}+4} or {\displaystyle 5x^{2}-4} is a perfect square. This is because Binet's formula, which can be written as {\displaystyle F_{n}=(\varphi ^{n}-(-1)^{n}\varphi ^{-n})/{\sqrt {5}}}, can be multiplied by {\displaystyle {\sqrt {5}}\varphi ^{n}} and solved as a quadratic equation in {\displaystyle \varphi ^{n}} via the quadratic formula:
Comparing this to {\displaystyle \varphi ^{n}=F_{n}\varphi +F_{n-1}=(F_{n}{\sqrt {5}}+F_{n}+2F_{n-1})/2}, it follows that
In particular, the left-hand side is a perfect square.
A 2-dimensional system of linear difference equations that describes the Fibonacci sequence is
alternatively denoted
which yields {\displaystyle {\vec {F}}_{n}=\mathbf {A} ^{n}{\vec {F}}_{0}}. The eigenvalues of the matrix A are {\displaystyle \varphi ={\frac {1}{2}}(1+{\sqrt {5}})} and {\displaystyle \psi =-\varphi ^{-1}={\frac {1}{2}}(1-{\sqrt {5}})}, corresponding to the respective eigenvectors
and
As the initial value is
it follows that the nth term is
From this, the nth element in the Fibonacci series may be read off directly as a closed-form expression:
Equivalently, the same computation may be performed by diagonalization of A through use of its eigendecomposition:
where {\displaystyle \Lambda ={\begin{pmatrix}\varphi &0\\0&-\varphi ^{-1}\end{pmatrix}}} and {\displaystyle S={\begin{pmatrix}\varphi &-\varphi ^{-1}\\1&1\end{pmatrix}}.} The closed-form expression for the nth element in the Fibonacci series is therefore given by
which again yields
The matrix A has a determinant of −1, and thus it is a 2 × 2 unimodular matrix.
This property can be understood in terms of the continued fraction representation for the golden ratio:
The Fibonacci numbers occur as the ratio of successive convergents of the continued fraction for φ, and the matrix formed from successive convergents of any continued fraction has a determinant of +1 or −1. The matrix representation gives the following closed-form expression for the Fibonacci numbers:
For a given n, this matrix can be computed in O(log(n)) arithmetic operations, using the exponentiation by squaring method.
Taking the determinant of both sides of this equation yields Cassini's identity,
Moreover, since {\displaystyle \mathbf {A} ^{n}\mathbf {A} ^{m}=\mathbf {A} ^{n+m}} for any square matrix A, the following identities can be derived (they are obtained from two different coefficients of the matrix product, and one may easily deduce the second one from the first one by changing n into n + 1),
In particular, with m = n,
These last two identities provide a way to compute Fibonacci numbers recursively in O(log(n)) arithmetic operations and in time O(M(n) log(n)), where M(n) is the time for the multiplication of two numbers of n digits. This matches the time for computing the nth Fibonacci number from the closed-form matrix formula, but with fewer redundant steps if one avoids recomputing an already computed Fibonacci number (recursion with memoization).
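These identities underlie the "fast doubling" way of computing Fn in O(log n) arithmetic operations. The sketch below uses the standard doubling form, F(2k) = F(k)(2F(k+1) − F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2, which follows from the matrix identities discussed above; the function name and structure are illustrative.

```python
# Fast doubling: compute (F(n), F(n+1)) by halving the index at each step.

def fib_pair(n: int) -> tuple[int, int]:
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)     # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a)         # F(2k)
    d = a * a + b * b           # F(2k+1)
    return (c, d) if n % 2 == 0 else (d, c + d)

print(fib_pair(10)[0])     # 55
print(fib_pair(100)[0])    # 354224848179261915075
```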
Most identities involving Fibonacci numbers can be proved using combinatorial arguments based on the fact that Fn can be interpreted as the number of (possibly empty) sequences of 1s and 2s whose sum is n − 1. This can be taken as the definition of Fn with the conventions F0 = 0, meaning no such sequence exists whose sum is −1, and F1 = 1, meaning the empty sequence "adds up" to 0. In the following, |...| is the cardinality of a set:
In this manner the recurrence relation
may be understood by dividing the Fn sequences into two non-overlapping sets where all sequences either begin with 1 or 2:
Excluding the first element, the remaining terms in each sequence sum to n − 2 or n − 3, and the cardinality of each set is Fn−1 or Fn−2, giving a total of Fn−1 + Fn−2 sequences, showing this is equal to Fn.
In a similar manner it may be shown that the sum of the first Fibonacci numbers up to the nth is equal to the (n + 2)nd Fibonacci number minus 1. In symbols:
This may be seen by dividing all sequences summing to n + 1 based on the location of the first 2. Specifically, each set consists of those sequences that start {2, ...}, {1, 2, ...}, ..., until the last two sets {{1, 1, ..., 1, 2}}, {{1, 1, ..., 1}}, each with cardinality 1.
Following the same logic as before, by summing the cardinality of each set we see that
... where the last two terms have the value F1 = 1. From this it follows that {\displaystyle \sum _{i=1}^{n}F_{i}=F_{n+2}-1}.
A similar argument, grouping the sums by the position of the first 1 rather than the first 2 gives two more identities:
and
In words, the sum of the first Fibonacci numbers with odd index up to F2n−1 is the (2n)th Fibonacci number, and the sum of the first Fibonacci numbers with even index up to F2n is the (2n + 1)st Fibonacci number minus 1.
A different trick may be used to prove
or in words, the sum of the squares of the first Fibonacci numbers up to Fn is the product of the nth and (n + 1)st Fibonacci numbers. To see this, begin with a Fibonacci rectangle of size Fn × Fn+1 and decompose it into squares of size Fn, Fn−1, ..., F1; from this the identity follows by comparing areas:
The sequence {\displaystyle (F_{n})_{n\in \mathbb {N} }} is also considered using the symbolic method. More precisely, this sequence corresponds to a specifiable combinatorial class. The specification of this sequence is {\displaystyle \operatorname {Seq} ({\mathcal {Z+Z^{2}}})}. Indeed, as stated above, the nth Fibonacci number equals the number of combinatorial compositions (ordered partitions) of n − 1 using terms 1 and 2.
It follows that the ordinary generating function of the Fibonacci sequence, {\displaystyle \sum _{i=0}^{\infty }F_{i}z^{i}}, is the rational function {\displaystyle {\frac {z}{1-z-z^{2}}}.}
Fibonacci identities often can be easily proved using mathematical induction.
For example, reconsider
Adding Fn+1 to both sides gives
and so we have the formula for n + 1
Similarly, add {\displaystyle {F_{n+1}}^{2}} to both sides of
to give
The Binet formula is
This can be used to prove Fibonacci identities.
For example, to prove that {\textstyle \sum _{i=1}^{n}F_{i}=F_{n+2}-1}, note that the left hand side multiplied by {\displaystyle {\sqrt {5}}} becomes
as required, using the facts {\textstyle \varphi \psi =-1} and {\textstyle \varphi -\psi ={\sqrt {5}}} to simplify the equations.
Numerous other identities can be derived using various methods. Here are some of them:
Cassini's identity states that
Catalan's identity is a generalization:
where Ln is the n-th Lucas number. The last is an identity for doubling n; other identities of this type are
by Cassini's identity.
These can be found experimentally using lattice reduction, and are useful in setting up the special number field sieve to factorize a Fibonacci number.
More generally,
or alternatively
Putting k = 2 in this formula, one gets again the formulas of the end of above section Matrix form.
The generating function of the Fibonacci sequence is the power series
This series is convergent for any complex number z satisfying |z| < 1/φ, and its sum has a simple closed form:
This can be proved by multiplying by {\textstyle (1-z-z^{2})}:
where all terms involving {\displaystyle z^{k}} for k ≥ 2 cancel out because of the defining Fibonacci recurrence relation.
The partial fraction decomposition is given by
where {\textstyle \varphi ={\tfrac {1}{2}}\left(1+{\sqrt {5}}\right)} is the golden ratio and {\displaystyle \psi ={\tfrac {1}{2}}\left(1-{\sqrt {5}}\right)} is its conjugate.
The related function {\textstyle z\mapsto -s\left(-1/z\right)} is the generating function for the negafibonacci numbers, and s(z) satisfies the functional equation
Using z equal to any of 0.01, 0.001, 0.0001, etc. lays out the first Fibonacci numbers in the decimal expansion of s(z). For example, {\displaystyle s(0.001)={\frac {0.001}{0.998999}}={\frac {1000}{998999}}=0.001001002003005008013021\ldots .}
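This decimal curiosity can be checked with exact rational arithmetic; the snippet below is a simple illustrative verification, with the number of digits printed chosen arbitrarily.

```python
# Verify that the decimal expansion of s(0.001) = z/(1 - z - z^2) at z = 1/1000
# begins with the Fibonacci numbers packed into blocks of three digits.
from fractions import Fraction

z = Fraction(1, 1000)
s = z / (1 - z - z * z)                                   # equals 1000/998999
digits = str(s.numerator * 10 ** 27 // s.denominator).zfill(27)
print(digits)   # 001001002003005008013021034  ->  1, 1, 2, 3, 5, 8, 13, 21, 34
```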
Infinite sums over reciprocal Fibonacci numbers can sometimes be evaluated in terms of theta functions. For example, the sum of every odd-indexed reciprocal Fibonacci number can be written as
and the sum of squared reciprocal Fibonacci numbers as
If we add 1 to each Fibonacci number in the first sum, there is also the closed form
and there is a nested sum of squared Fibonacci numbers giving the reciprocal of the golden ratio,
The sum of all even-indexed reciprocal Fibonacci numbers is
with the Lambert series {\displaystyle \textstyle L(q):=\sum _{k=1}^{\infty }{\frac {q^{k}}{1-q^{k}}},} since {\displaystyle \textstyle {\frac {1}{F_{2k}}}={\sqrt {5}}\left({\frac {\psi ^{2k}}{1-\psi ^{2k}}}-{\frac {\psi ^{4k}}{1-\psi ^{4k}}}\right)\!.}
So the reciprocal Fibonacci constant is
Moreover, this number has been proved irrational by Richard André-Jeannin.
Millin's series gives the identity
which follows from the closed form for its partial sums as N tends to infinity:
Every third number of the sequence is even (a multiple of F3 = 2) and, more generally, every kth number of the sequence is a multiple of Fk. Thus the Fibonacci sequence is an example of a divisibility sequence. In fact, the Fibonacci sequence satisfies the stronger divisibility property
where gcd is the greatest common divisor function.
In particular, any three consecutive Fibonacci numbers are pairwise coprime because both F1 = 1 and F2 = 1. That is,
for every n.
Every prime number p divides a Fibonacci number that can be determined by the value of p modulo 5. If p is congruent to 1 or 4 modulo 5, then p divides Fp−1, and if p is congruent to 2 or 3 modulo 5, then, p divides Fp+1. The remaining case is that p = 5, and in this case p divides Fp.
These cases can be combined into a single, non-piecewise formula, using the Legendre symbol:
The above formula can be used as a primality test in the sense that if
where the Legendre symbol has been replaced by the Jacobi symbol, then this is evidence that n is a prime, and if it fails to hold, then n is definitely not a prime. If n is composite and satisfies the formula, then n is a Fibonacci pseudoprime. When m is large – say a 500-bit number – then we can calculate Fm (mod n) efficiently using the matrix form. Thus
Here the mth matrix power of A is calculated using modular exponentiation, which can be adapted to matrices.
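A sketch of this probable-prime check is given below. Instead of forming the matrix power explicitly, it computes Fm modulo n with an equivalent doubling recursion, and it uses the fact that the Jacobi symbol (n/5) depends only on n modulo 5. The function names are illustrative.

```python
# Fibonacci probable-prime check: n should divide F(n - (n/5)), where (n/5) is
# the Jacobi symbol; composite numbers that pass are Fibonacci pseudoprimes.

def fib_pair_mod(m: int, n: int) -> tuple[int, int]:
    """Return (F(m) mod n, F(m+1) mod n) by the doubling recursion."""
    if m == 0:
        return (0, 1)
    a, b = fib_pair_mod(m // 2, n)
    c = a * (2 * b - a) % n           # F(2k) mod n
    d = (a * a + b * b) % n           # F(2k+1) mod n
    return (c, d) if m % 2 == 0 else (d, (c + d) % n)

def jacobi_n_over_5(n: int) -> int:
    """(n/5) is +1 if n = 1, 4 (mod 5), -1 if n = 2, 3 (mod 5), 0 if 5 divides n."""
    r = n % 5
    return 0 if r == 0 else (1 if r in (1, 4) else -1)

def fibonacci_probable_prime(n: int) -> bool:
    eps = jacobi_n_over_5(n)
    if eps == 0:
        return n == 5
    return fib_pair_mod(n - eps, n)[0] == 0

# Odd numbers below 60 that pass the test (all primes here; the smallest
# composite that passes is considerably larger).
print([n for n in range(3, 60, 2) if fibonacci_probable_prime(n)])
```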
A Fibonacci prime is a Fibonacci number that is prime. The first few are:
Fibonacci primes with thousands of digits have been found, but it is not known whether there are infinitely many.
Fkn is divisible by Fn, so, apart from F4 = 3, any Fibonacci prime must have a prime index. As there are arbitrarily long runs of composite numbers, there are therefore also arbitrarily long runs of composite Fibonacci numbers.
No Fibonacci number greater than F6 = 8 is one greater or one less than a prime number.
The only nontrivial square Fibonacci number is 144. Attila Pethő proved in 2001 that there is only a finite number of perfect power Fibonacci numbers. In 2006, Y. Bugeaud, M. Mignotte, and S. Siksek proved that 8 and 144 are the only such non-trivial perfect powers.
1, 3, 21, and 55 are the only triangular Fibonacci numbers, which was conjectured by Vern Hoggatt and proved by Luo Ming.
No Fibonacci number can be a perfect number. More generally, no Fibonacci number other than 1 can be multiply perfect, and no ratio of two Fibonacci numbers can be perfect.
With the exceptions of 1, 8 and 144 (F1 = F2, F6 and F12) every Fibonacci number has a prime factor that is not a factor of any smaller Fibonacci number (Carmichael's theorem). As a result, 8 and 144 (F6 and F12) are the only Fibonacci numbers that are the product of other Fibonacci numbers.
The divisibility of Fibonacci numbers by a prime p is related to the Legendre symbol {\displaystyle \left({\tfrac {p}{5}}\right)}, which is evaluated as follows:
If p is a prime number then
For example,
It is not known whether there exists a prime p such that
Such primes (if there are any) would be called Wall–Sun–Sun primes.
Also, if p ≠ 5 is an odd prime number then:
Example 1. p = 7, in this case p ≡ 3 (mod 4) and we have:
Example 2. p = 11, in this case p ≡ 3 (mod 4) and we have:
Example 3. p = 13, in this case p ≡ 1 (mod 4) and we have:
Example 4. p = 29, in this case p ≡ 1 (mod 4) and we have:
For odd n, all odd prime divisors of Fn are congruent to 1 modulo 4, implying that all odd divisors of Fn (as the products of odd prime divisors) are congruent to 1 modulo 4.
For example,
All known factors of Fibonacci numbers F(i ) for all i < 50000 are collected at the relevant repositories.
If the members of the Fibonacci sequence are taken mod n, the resulting sequence is periodic with period at most 6n. The lengths of the periods for various n form the so-called Pisano periods. Determining a general formula for the Pisano periods is an open problem, which includes as a subproblem a special instance of the problem of finding the multiplicative order of a modular integer or of an element in a finite field. However, for any particular n, the Pisano period may be found as an instance of cycle detection.
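For a particular n, a brute-force way to find the Pisano period is simple cycle detection: iterate the recurrence modulo n until the starting pair (0, 1) reappears. The sketch below is illustrative and uses none of the number-theoretic shortcuts.

```python
# Pisano period pi(n): period of the Fibonacci sequence taken modulo n.

def pisano_period(n: int) -> int:
    a, b = 0, 1
    for period in range(1, 6 * n + 1):   # the period never exceeds 6n
        a, b = b, (a + b) % n
        if (a, b) == (0, 1):
            return period
    raise AssertionError("unreachable for n >= 2: the period is at most 6n")

print([pisano_period(n) for n in range(2, 11)])
# [3, 8, 6, 20, 24, 16, 12, 24, 60]
```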
The Fibonacci sequence is one of the simplest and earliest known sequences defined by a recurrence relation, and specifically by a linear difference equation. All these sequences may be viewed as generalizations of the Fibonacci sequence. In particular, Binet's formula may be generalized to any sequence that is a solution of a homogeneous linear difference equation with constant coefficients.
Some specific examples that are close, in some sense, to the Fibonacci sequence include:
The Fibonacci numbers occur as the sums of binomial coefficients in the "shallow" diagonals of Pascal's triangle:
This can be proved by expanding the generating function
and collecting like terms of {\displaystyle x^{n}}.
To see how the formula is used, we can arrange the sums by the number of terms present:
which is {\displaystyle {\binom {5}{0}}+{\binom {4}{1}}+{\binom {3}{2}}}, where we are choosing the positions of k twos from n−k−1 terms.
These numbers also give the solution to certain enumerative problems, the most common of which is that of counting the number of ways of writing a given number n as an ordered sum of 1s and 2s (called compositions); there are Fn+1 ways to do this (equivalently, it's also the number of domino tilings of the 2 × n rectangle). For example, there are F5+1 = F6 = 8 ways one can climb a staircase of 5 steps, taking one or two steps at a time:
The figure shows that 8 can be decomposed into 5 (the number of ways to climb 4 steps, followed by a single-step) plus 3 (the number of ways to climb 3 steps, followed by a double-step). The same reasoning is applied recursively until a single step, of which there is only one way to climb.
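The compositions themselves can be enumerated with a short recursive sketch, mirroring the recursive reasoning just described; compositions_1_2 is an ad hoc name for illustration.

```python
# Enumerate the ordered sums of 1s and 2s totalling n (the staircase climbs).

def compositions_1_2(n):
    """All ordered sums of 1s and 2s totalling n, as tuples."""
    if n == 0:
        return [()]
    result = [(1,) + rest for rest in compositions_1_2(n - 1)]
    if n >= 2:
        result += [(2,) + rest for rest in compositions_1_2(n - 2)]
    return result

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

ways = compositions_1_2(5)
print(len(ways), fib(6))   # 8 and 8, matching the staircase example
for w in ways:
    print(w)
```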
The Fibonacci numbers can be found in different ways among the set of binary strings, or equivalently, among the subsets of a given set.
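One standard instance of this binary-string connection — chosen here for illustration, and not necessarily the particular correspondence the article goes on to list — is that the number of binary strings of length n containing no two consecutive 1s is Fn+2. A brute-force check:

```python
# Count length-n binary strings with no two adjacent 1s; the count is F_{n+2}.

from itertools import product

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def count_no_adjacent_ones(n):
    """Brute-force count of length-n 0/1 strings without '11' as a substring."""
    return sum(1 for bits in product("01", repeat=n)
               if "11" not in "".join(bits))

for n in range(1, 12):
    assert count_no_adjacent_ones(n) == fib(n + 2)
print("strings without adjacent 1s are counted by F_{n+2} for n < 12")
```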
Fibonacci sequences appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruitlets of a pineapple, the flowering of an artichoke, the arrangement of a pine cone's bracts, and the family tree of honeybees. Kepler pointed out the presence of the Fibonacci sequence in nature, using it to explain the (golden ratio-related) pentagonal form of some flowers. Field daisies most often have petals in counts of Fibonacci numbers. In 1830, K. F. Schimper and A. Braun discovered that the parastichies (spiral phyllotaxis) of plants were frequently expressed as fractions involving Fibonacci numbers.
Przemysław Prusinkiewicz advanced the idea that real instances can in part be understood as the expression of certain algebraic constraints on free groups, specifically as certain Lindenmayer grammars.
A model for the pattern of florets in the head of a sunflower was proposed by Helmut Vogel in 1979. This has the form
where n is the index number of the floret and c is a constant scaling factor; the florets thus lie on Fermat's spiral. The divergence angle, approximately 137.51°, is the golden angle, dividing the circle in the golden ratio. Because this ratio is irrational, no floret has a neighbor at exactly the same angle from the center, so the florets pack efficiently. Because the rational approximations to the golden ratio are of the form F(j):F(j + 1), the nearest neighbors of floret number n are those at n ± F(j) for some index j, which depends on r, the distance from the center. Sunflowers and similar flowers most commonly have spirals of florets in clockwise and counter-clockwise directions whose counts are adjacent Fibonacci numbers, typically counted by the outermost range of radii.
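The model is simple to reproduce in a few lines. The sketch below assumes the standard parametrisation of Vogel's model (floret n at radius r = c·√n on Fermat's spiral and at angle n times the golden angle, about 137.508°), consistent with the description above even though the exact formula is not reproduced here; the function name is an illustrative choice.

```python
# Vogel's floret model (assumed standard form): r = c*sqrt(n), theta = n * 137.508 deg.

from math import sqrt, cos, sin, radians

GOLDEN_ANGLE_DEG = 137.508   # approximately 360 * (1 - 1/phi)

def floret_position(n, c=1.0):
    """(x, y) coordinates of floret number n in Vogel's model."""
    r = c * sqrt(n)
    theta = radians(n * GOLDEN_ANGLE_DEG)
    return r * cos(theta), r * sin(theta)

# Print the first few floret positions; plotting many of them would show the
# familiar interleaved clockwise and counter-clockwise spirals.
for n in range(1, 11):
    x, y = floret_position(n)
    print(f"floret {n:2d}: ({x:6.3f}, {y:6.3f})")
```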
Fibonacci numbers also appear in the pedigrees of idealized honeybees, according to the following rules:
Thus, a male bee always has one parent, and a female bee has two. If one traces the pedigree of any male bee (1 bee), he has 1 parent (1 bee), 2 grandparents, 3 great-grandparents, 5 great-great-grandparents, and so on. This sequence of numbers of parents is the Fibonacci sequence. The number of ancestors at each level, Fn, is the number of female ancestors, which is Fn−1, plus the number of male ancestors, which is Fn−2. This is under the unrealistic assumption that the ancestors at each level are otherwise unrelated.
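The generation-by-generation counts can be reproduced with a short sketch that encodes exactly the rule just stated (a male has one parent, a female has two); bee_ancestors is an ad hoc name for illustration.

```python
# Ancestor counts for a single male honeybee under the idealized rules above.

def bee_ancestors(generations):
    """Yield (males, females, total) ancestor counts, one tuple per generation."""
    males, females = 1, 0           # generation 0: the bee himself
    for _ in range(generations):
        # every female has a father and a mother; every male has only a mother
        males, females = females, males + females
        yield males, females, males + females

for g, (m, f, total) in enumerate(bee_ancestors(8), start=1):
    print(f"generation {g}: {m} males + {f} females = {total}")
# totals: 1, 2, 3, 5, 8, 13, 21, 34 -- the Fibonacci sequence
```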
It has been noticed that the number of possible ancestors on the human X chromosome inheritance line at a given ancestral generation also follows the Fibonacci sequence. A male individual has an X chromosome, which he received from his mother, and a Y chromosome, which he received from his father. The male counts as the "origin" of his own X chromosome (F1 = 1), and at his parents' generation, his X chromosome came from a single parent (F2 = 1). The male's mother received one X chromosome from her mother (the son's maternal grandmother), and one from her father (the son's maternal grandfather), so two grandparents contributed to the male descendant's X chromosome (F3 = 2). The maternal grandfather received his X chromosome from his mother, and the maternal grandmother received X chromosomes from both of her parents, so three great-grandparents contributed to the male descendant's X chromosome (F4 = 3). Five great-great-grandparents contributed to the male descendant's X chromosome (F5 = 5), etc. (This assumes that all ancestors of a given descendant are independent, but if any genealogy is traced far enough back in time, ancestors begin to appear on multiple lines of the genealogy, until eventually a population founder appears on all lines of the genealogy.) | [
{
"paragraph_id": 0,
"text": "In mathematics, the Fibonacci sequence is a sequence in which each number is the sum of the two preceding ones. Numbers that are part of the Fibonacci sequence are known as Fibonacci numbers, commonly denoted Fn . The sequence commonly starts from 0 and 1, although some authors start the sequence from 1 and 1 or sometimes (as did Fibonacci) from 1 and 2. Starting from 0 and 1, the sequence begins",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Fibonacci numbers were first described in Indian mathematics as early as 200 BC in work by Pingala on enumerating possible patterns of Sanskrit poetry formed from syllables of two lengths. They are named after the Italian mathematician Leonardo of Pisa, also known as Fibonacci, who introduced the sequence to Western European mathematics in his 1202 book Liber Abaci.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Fibonacci numbers appear unexpectedly often in mathematics, so much so that there is an entire journal dedicated to their study, the Fibonacci Quarterly. Applications of Fibonacci numbers include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, and the arrangement of a pine cone's bracts, though they do not occur in all species.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Fibonacci numbers are also strongly related to the golden ratio: Binet's formula expresses the nth Fibonacci number in terms of n and the golden ratio, and implies that the ratio of two consecutive Fibonacci numbers tends to the golden ratio as n increases. Fibonacci numbers are also closely related to Lucas numbers, which obey the same recurrence relation and with the Fibonacci numbers form a complementary pair of Lucas sequences.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Fibonacci numbers may be defined by the recurrence relation",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "and",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "for n > 1.",
"title": "Definition"
},
{
"paragraph_id": 7,
"text": "Under some older definitions, the value F 0 = 0 {\\displaystyle F_{0}=0} is omitted, so that the sequence starts with F 1 = F 2 = 1 , {\\displaystyle F_{1}=F_{2}=1,} and the recurrence F n = F n − 1 + F n − 2 {\\displaystyle F_{n}=F_{n-1}+F_{n-2}} is valid for n > 2.",
"title": "Definition"
},
{
"paragraph_id": 8,
"text": "The first 20 Fibonacci numbers Fn are:",
"title": "Definition"
},
{
"paragraph_id": 9,
"text": "The Fibonacci sequence appears in Indian mathematics, in connection with Sanskrit prosody. In the Sanskrit poetic tradition, there was interest in enumerating all patterns of long (L) syllables of 2 units duration, juxtaposed with short (S) syllables of 1 unit duration. Counting the different patterns of successive L and S with a given total duration results in the Fibonacci numbers: the number of patterns of duration m units is Fm+1.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Knowledge of the Fibonacci sequence was expressed as early as Pingala (c. 450 BC–200 BC). Singh cites Pingala's cryptic formula misrau cha (\"the two are mixed\") and scholars who interpret it in context as saying that the number of patterns for m beats (Fm+1) is obtained by adding one [S] to the Fm cases and one [L] to the Fm−1 cases. Bharata Muni also expresses knowledge of the sequence in the Natya Shastra (c. 100 BC–c. 350 AD). However, the clearest exposition of the sequence arises in the work of Virahanka (c. 700 AD), whose own work is lost, but is available in a quotation by Gopala (c. 1135):",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Variations of two earlier meters [is the variation]... For example, for [a meter of length] four, variations of meters of two [and] three being mixed, five happens. [works out examples 8, 13, 21]... In this way, the process should be followed in all mātrā-vṛttas [prosodic combinations].",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Hemachandra (c. 1150) is credited with knowledge of the sequence as well, writing that \"the sum of the last and the one before the last is the number ... of the next mātrā-vṛtta.\"",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The Fibonacci sequence first appears in the book Liber Abaci (The Book of Calculation, 1202) by Fibonacci where it is used to calculate the growth of rabbit populations. Fibonacci considers the growth of an idealized (biologically unrealistic) rabbit population, assuming that: a newly born breeding pair of rabbits are put in a field; each breeding pair mates at the age of one month, and at the end of their second month they always produce another pair of rabbits; and rabbits never die, but continue breeding forever. Fibonacci posed the puzzle: how many pairs will there be in one year?",
"title": "History"
},
{
"paragraph_id": 14,
"text": "At the end of the nth month, the number of pairs of rabbits is equal to the number of mature pairs (that is, the number of pairs in month n – 2) plus the number of pairs alive last month (month n – 1). The number in the nth month is the nth Fibonacci number.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The name \"Fibonacci sequence\" was first used by the 19th-century number theorist Édouard Lucas.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Like every sequence defined by a linear recurrence with constant coefficients, the Fibonacci numbers have a closed-form expression. It has become known as Binet's formula, named after French mathematician Jacques Philippe Marie Binet, though it was already known by Abraham de Moivre and Daniel Bernoulli:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 17,
"text": "where",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 18,
"text": "is the golden ratio, and ψ is its conjugate:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 19,
"text": "Since ψ = − φ − 1 {\\displaystyle \\psi =-\\varphi ^{-1}} , this formula can also be written as",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 20,
"text": "To see the relation between the sequence and these constants, note that φ and ψ are both solutions of the equation x 2 = x + 1 {\\textstyle x^{2}=x+1} and thus x n = x n − 1 + x n − 2 , {\\displaystyle x^{n}=x^{n-1}+x^{n-2},} so the powers of φ and ψ satisfy the Fibonacci recursion. In other words,",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 21,
"text": "It follows that for any values a and b, the sequence defined by",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 22,
"text": "satisfies the same recurrence,",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 23,
"text": "If a and b are chosen so that U0 = 0 and U1 = 1 then the resulting sequence Un must be the Fibonacci sequence. This is the same as requiring a and b satisfy the system of equations:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 24,
"text": "which has solution",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 25,
"text": "producing the required formula.",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 26,
"text": "Taking the starting values U0 and U1 to be arbitrary constants, a more general solution is:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 27,
"text": "where",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 28,
"text": "Since | ψ n 5 | < 1 2 {\\textstyle \\left|{\\frac {\\psi ^{n}}{\\sqrt {5}}}\\right|<{\\frac {1}{2}}} for all n ≥ 0, the number Fn is the closest integer to φ n 5 {\\displaystyle {\\frac {\\varphi ^{n}}{\\sqrt {5}}}} . Therefore, it can be found by rounding, using the nearest integer function:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 29,
"text": "In fact, the rounding error is very small, being less than 0.1 for n ≥ 4, and less than 0.01 for n ≥ 8. This formula is easily inverted to find an index of a Fibonacci number F:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 30,
"text": "Instead using the floor function gives the largest index of a Fibonacci number that is not greater than F:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 31,
"text": "where log φ ( x ) = ln ( x ) / ln ( φ ) = log 10 ( x ) / log 10 ( φ ) {\\displaystyle \\log _{\\varphi }(x)=\\ln(x)/\\ln(\\varphi )=\\log _{10}(x)/\\log _{10}(\\varphi )} , ln ( φ ) = 0.481211 … {\\displaystyle \\ln(\\varphi )=0.481211\\ldots } , and log 10 ( φ ) = 0.208987 … {\\displaystyle \\log _{10}(\\varphi )=0.208987\\ldots } .",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 32,
"text": "Since Fn is asymptotic to φ n / 5 {\\displaystyle \\varphi ^{n}/{\\sqrt {5}}} , the number of digits in Fn is asymptotic to n log 10 φ ≈ 0.2090 n {\\displaystyle n\\log _{10}\\varphi \\approx 0.2090\\,n} . As a consequence, for every integer d > 1 there are either 4 or 5 Fibonacci numbers with d decimal digits.",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 33,
"text": "More generally, in the base b representation, the number of digits in Fn is asymptotic to n log b φ = n log φ log b . {\\displaystyle n\\log _{b}\\varphi ={\\frac {n\\log \\varphi }{\\log b}}.}",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 34,
"text": "Johannes Kepler observed that the ratio of consecutive Fibonacci numbers converges. He wrote that \"as 5 is to 8 so is 8 to 13, practically, and as 8 is to 13, so is 13 to 21 almost\", and concluded that these ratios approach the golden ratio φ : {\\displaystyle \\varphi \\colon }",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 35,
"text": "This convergence holds regardless of the starting values U 0 {\\displaystyle U_{0}} and U 1 {\\displaystyle U_{1}} , unless U 1 = − U 0 / φ {\\displaystyle U_{1}=-U_{0}/\\varphi } . This can be verified using Binet's formula. For example, the initial values 3 and 2 generate the sequence 3, 2, 5, 7, 12, 19, 31, 50, 81, 131, 212, 343, 555, ... . The ratio of consecutive terms in this sequence shows the same convergence towards the golden ratio.",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 36,
"text": "In general, lim n → ∞ F n + m F n = φ m {\\displaystyle \\lim _{n\\to \\infty }{\\frac {F_{n+m}}{F_{n}}}=\\varphi ^{m}} , because the ratios between consecutive Fibonacci numbers approaches φ {\\displaystyle \\varphi } .",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 37,
"text": "Since the golden ratio satisfies the equation",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 38,
"text": "this expression can be used to decompose higher powers φ n {\\displaystyle \\varphi ^{n}} as a linear function of lower powers, which in turn can be decomposed all the way down to a linear combination of φ {\\displaystyle \\varphi } and 1. The resulting recurrence relationships yield Fibonacci numbers as the linear coefficients:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 39,
"text": "This equation can be proved by induction on n ≥ 1:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 40,
"text": "For ψ = − 1 / φ {\\displaystyle \\psi =-1/\\varphi } , it is also the case that ψ 2 = ψ + 1 {\\displaystyle \\psi ^{2}=\\psi +1} and it is also the case that",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 41,
"text": "These expressions are also true for n < 1 if the Fibonacci sequence Fn is extended to negative integers using the Fibonacci rule F n = F n + 2 − F n + 1 . {\\displaystyle F_{n}=F_{n+2}-F_{n+1}.}",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 42,
"text": "Binet's formula provides a proof that a positive integer x is a Fibonacci number if and only if at least one of 5 x 2 + 4 {\\displaystyle 5x^{2}+4} or 5 x 2 − 4 {\\displaystyle 5x^{2}-4} is a perfect square. This is because Binet's formula, which can be written as F n = ( φ n − ( − 1 ) n φ − n ) / 5 {\\displaystyle F_{n}=(\\varphi ^{n}-(-1)^{n}\\varphi ^{-n})/{\\sqrt {5}}} , can be multiplied by 5 φ n {\\displaystyle {\\sqrt {5}}\\varphi ^{n}} and solved as a quadratic equation in φ n {\\displaystyle \\varphi ^{n}} via the quadratic formula:",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 43,
"text": "Comparing this to φ n = F n φ + F n − 1 = ( F n 5 + F n + 2 F n − 1 ) / 2 {\\displaystyle \\varphi ^{n}=F_{n}\\varphi +F_{n-1}=(F_{n}{\\sqrt {5}}+F_{n}+2F_{n-1})/2} , it follows that",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 44,
"text": "In particular, the left-hand side is a perfect square.",
"title": "Relation to the golden ratio"
},
{
"paragraph_id": 45,
"text": "A 2-dimensional system of linear difference equations that describes the Fibonacci sequence is",
"title": "Matrix form"
},
{
"paragraph_id": 46,
"text": "alternatively denoted",
"title": "Matrix form"
},
{
"paragraph_id": 47,
"text": "which yields F → n = A n F → 0 {\\displaystyle {\\vec {F}}_{n}=\\mathbf {A} ^{n}{\\vec {F}}_{0}} . The eigenvalues of the matrix A are φ = 1 2 ( 1 + 5 ) {\\displaystyle \\varphi ={\\frac {1}{2}}(1+{\\sqrt {5}})} and ψ = − φ − 1 = 1 2 ( 1 − 5 ) {\\displaystyle \\psi =-\\varphi ^{-1}={\\frac {1}{2}}(1-{\\sqrt {5}})} corresponding to the respective eigenvectors",
"title": "Matrix form"
},
{
"paragraph_id": 48,
"text": "and",
"title": "Matrix form"
},
{
"paragraph_id": 49,
"text": "As the initial value is",
"title": "Matrix form"
},
{
"paragraph_id": 50,
"text": "it follows that the nth term is",
"title": "Matrix form"
},
{
"paragraph_id": 51,
"text": "From this, the nth element in the Fibonacci series may be read off directly as a closed-form expression:",
"title": "Matrix form"
},
{
"paragraph_id": 52,
"text": "Equivalently, the same computation may be performed by diagonalization of A through use of its eigendecomposition:",
"title": "Matrix form"
},
{
"paragraph_id": 53,
"text": "where Λ = ( φ 0 0 − φ − 1 ) {\\displaystyle \\Lambda ={\\begin{pmatrix}\\varphi &0\\\\0&-\\varphi ^{-1}\\end{pmatrix}}} and S = ( φ − φ − 1 1 1 ) . {\\displaystyle S={\\begin{pmatrix}\\varphi &-\\varphi ^{-1}\\\\1&1\\end{pmatrix}}.} The closed-form expression for the nth element in the Fibonacci series is therefore given by",
"title": "Matrix form"
},
{
"paragraph_id": 54,
"text": "which again yields",
"title": "Matrix form"
},
{
"paragraph_id": 55,
"text": "The matrix A has a determinant of −1, and thus it is a 2 × 2 unimodular matrix.",
"title": "Matrix form"
},
{
"paragraph_id": 56,
"text": "This property can be understood in terms of the continued fraction representation for the golden ratio:",
"title": "Matrix form"
},
{
"paragraph_id": 57,
"text": "The Fibonacci numbers occur as the ratio of successive convergents of the continued fraction for φ, and the matrix formed from successive convergents of any continued fraction has a determinant of +1 or −1. The matrix representation gives the following closed-form expression for the Fibonacci numbers:",
"title": "Matrix form"
},
{
"paragraph_id": 58,
"text": "For a given n, this matrix can be computed in O(log(n)) arithmetic operations, using the exponentiation by squaring method.",
"title": "Matrix form"
},
{
"paragraph_id": 59,
"text": "Taking the determinant of both sides of this equation yields Cassini's identity,",
"title": "Matrix form"
},
{
"paragraph_id": 60,
"text": "Moreover, since AA = A for any square matrix A, the following identities can be derived (they are obtained from two different coefficients of the matrix product, and one may easily deduce the second one from the first one by changing n into n + 1),",
"title": "Matrix form"
},
{
"paragraph_id": 61,
"text": "In particular, with m = n,",
"title": "Matrix form"
},
{
"paragraph_id": 62,
"text": "These last two identities provide a way to compute Fibonacci numbers recursively in O(log(n)) arithmetic operations and in time O(M(n) log(n)), where M(n) is the time for the multiplication of two numbers of n digits. This matches the time for computing the nth Fibonacci number from the closed-form matrix formula, but with fewer redundant steps if one avoids recomputing an already computed Fibonacci number (recursion with memoization).",
"title": "Matrix form"
},
{
"paragraph_id": 63,
"text": "Most identities involving Fibonacci numbers can be proved using combinatorial arguments using the fact that F n {\\displaystyle F_{n}} can be interpreted as the number of (possibly empty) sequences of 1s and 2s whose sum is n − 1 {\\displaystyle n-1} . This can be taken as the definition of F n {\\displaystyle F_{n}} with the conventions F 0 = 0 {\\displaystyle F_{0}=0} , meaning no such sequence exists whose sum is −1, and F 1 = 1 {\\displaystyle F_{1}=1} , meaning the empty sequence \"adds up\" to 0. In the following, | . . . | {\\displaystyle |{...}|} is the cardinality of a set:",
"title": "Combinatorial identities"
},
{
"paragraph_id": 64,
"text": "In this manner the recurrence relation",
"title": "Combinatorial identities"
},
{
"paragraph_id": 65,
"text": "may be understood by dividing the F n {\\displaystyle F_{n}} sequences into two non-overlapping sets where all sequences either begin with 1 or 2:",
"title": "Combinatorial identities"
},
{
"paragraph_id": 66,
"text": "Excluding the first element, the remaining terms in each sequence sum to n − 2 {\\displaystyle n-2} or n − 3 {\\displaystyle n-3} and the cardinality of each set is F n − 1 {\\displaystyle F_{n-1}} or F n − 2 {\\displaystyle F_{n-2}} giving a total of F n − 1 + F n − 2 {\\displaystyle F_{n-1}+F_{n-2}} sequences, showing this is equal to F n {\\displaystyle F_{n}} .",
"title": "Combinatorial identities"
},
{
"paragraph_id": 67,
"text": "In a similar manner it may be shown that the sum of the first Fibonacci numbers up to the nth is equal to the (n + 2)nd Fibonacci number minus 1. In symbols:",
"title": "Combinatorial identities"
},
{
"paragraph_id": 68,
"text": "This may be seen by dividing all sequences summing to n + 1 {\\displaystyle n+1} based on the location of the first 2. Specifically, each set consists of those sequences that start { 2 , . . . } , { 1 , 2 , . . . } , . . . , {\\displaystyle \\{2,...\\},\\{1,2,...\\},...,} until the last two sets { { 1 , 1 , . . . , 1 , 2 } } , { { 1 , 1 , . . . , 1 } } {\\displaystyle \\{\\{1,1,...,1,2\\}\\},\\{\\{1,1,...,1\\}\\}} each with cardinality 1.",
"title": "Combinatorial identities"
},
{
"paragraph_id": 69,
"text": "Following the same logic as before, by summing the cardinality of each set we see that",
"title": "Combinatorial identities"
},
{
"paragraph_id": 70,
"text": "... where the last two terms have the value F 1 = 1 {\\displaystyle F_{1}=1} . From this it follows that ∑ i = 1 n F i = F n + 2 − 1 {\\displaystyle \\sum _{i=1}^{n}F_{i}=F_{n+2}-1} .",
"title": "Combinatorial identities"
},
{
"paragraph_id": 71,
"text": "A similar argument, grouping the sums by the position of the first 1 rather than the first 2 gives two more identities:",
"title": "Combinatorial identities"
},
{
"paragraph_id": 72,
"text": "and",
"title": "Combinatorial identities"
},
{
"paragraph_id": 73,
"text": "In words, the sum of the first Fibonacci numbers with odd index up to F 2 n − 1 {\\displaystyle F_{2n-1}} is the (2n)th Fibonacci number, and the sum of the first Fibonacci numbers with even index up to F 2 n {\\displaystyle F_{2n}} is the (2n + 1)st Fibonacci number minus 1.",
"title": "Combinatorial identities"
},
{
"paragraph_id": 74,
"text": "A different trick may be used to prove",
"title": "Combinatorial identities"
},
{
"paragraph_id": 75,
"text": "or in words, the sum of the squares of the first Fibonacci numbers up to F n {\\displaystyle F_{n}} is the product of the nth and (n + 1)st Fibonacci numbers. To see this, begin with a Fibonacci rectangle of size F n × F n + 1 {\\displaystyle F_{n}\\times F_{n+1}} and decompose it into squares of size F n , F n − 1 , . . . , F 1 {\\displaystyle F_{n},F_{n-1},...,F_{1}} ; from this the identity follows by comparing areas:",
"title": "Combinatorial identities"
},
{
"paragraph_id": 76,
"text": "",
"title": "Combinatorial identities"
},
{
"paragraph_id": 77,
"text": "The sequence ( F n ) n ∈ N {\\displaystyle (F_{n})_{n\\in \\mathbb {N} }} is also considered using the symbolic method. More precisely, this sequence corresponds to a specifiable combinatorial class. The specification of this sequence is Seq ( Z + Z 2 ) {\\displaystyle \\operatorname {Seq} ({\\mathcal {Z+Z^{2}}})} . Indeed, as stated above, the n {\\displaystyle n} -th Fibonacci number equals the number of combinatorial compositions (ordered partitions) of n − 1 {\\displaystyle n-1} using terms 1 and 2.",
"title": "Combinatorial identities"
},
{
"paragraph_id": 78,
"text": "It follows that the ordinary generating function of the Fibonacci sequence, ∑ i = 0 ∞ F i z i {\\displaystyle \\sum _{i=0}^{\\infty }F_{i}z^{i}} , is the rational function z 1 − z − z 2 . {\\displaystyle {\\frac {z}{1-z-z^{2}}}.}",
"title": "Combinatorial identities"
},
{
"paragraph_id": 79,
"text": "Fibonacci identities often can be easily proved using mathematical induction.",
"title": "Combinatorial identities"
},
{
"paragraph_id": 80,
"text": "For example, reconsider",
"title": "Combinatorial identities"
},
{
"paragraph_id": 81,
"text": "Adding F n + 1 {\\displaystyle F_{n+1}} to both sides gives",
"title": "Combinatorial identities"
},
{
"paragraph_id": 82,
"text": "and so we have the formula for n + 1 {\\displaystyle n+1}",
"title": "Combinatorial identities"
},
{
"paragraph_id": 83,
"text": "Similarly, add F n + 1 2 {\\displaystyle {F_{n+1}}^{2}} to both sides of",
"title": "Combinatorial identities"
},
{
"paragraph_id": 84,
"text": "to give",
"title": "Combinatorial identities"
},
{
"paragraph_id": 85,
"text": "The Binet formula is",
"title": "Combinatorial identities"
},
{
"paragraph_id": 86,
"text": "This can be used to prove Fibonacci identities.",
"title": "Combinatorial identities"
},
{
"paragraph_id": 87,
"text": "For example, to prove that ∑ i = 1 n F i = F n + 2 − 1 {\\textstyle \\sum _{i=1}^{n}F_{i}=F_{n+2}-1} note that the left hand side multiplied by 5 {\\displaystyle {\\sqrt {5}}} becomes",
"title": "Combinatorial identities"
},
{
"paragraph_id": 88,
"text": "as required, using the facts φ ψ = − 1 {\\textstyle \\varphi \\psi =-1} and φ − ψ = 5 {\\textstyle \\varphi -\\psi ={\\sqrt {5}}} to simplify the equations.",
"title": "Combinatorial identities"
},
{
"paragraph_id": 89,
"text": "Numerous other identities can be derived using various methods. Here are some of them:",
"title": "Other identities"
},
{
"paragraph_id": 90,
"text": "Cassini's identity states that",
"title": "Other identities"
},
{
"paragraph_id": 91,
"text": "Catalan's identity is a generalization:",
"title": "Other identities"
},
{
"paragraph_id": 92,
"text": "where Ln is the n-th Lucas number. The last is an identity for doubling n; other identities of this type are",
"title": "Other identities"
},
{
"paragraph_id": 93,
"text": "by Cassini's identity.",
"title": "Other identities"
},
{
"paragraph_id": 94,
"text": "These can be found experimentally using lattice reduction, and are useful in setting up the special number field sieve to factorize a Fibonacci number.",
"title": "Other identities"
},
{
"paragraph_id": 95,
"text": "More generally,",
"title": "Other identities"
},
{
"paragraph_id": 96,
"text": "or alternatively",
"title": "Other identities"
},
{
"paragraph_id": 97,
"text": "Putting k = 2 in this formula, one gets again the formulas of the end of above section Matrix form.",
"title": "Other identities"
},
{
"paragraph_id": 98,
"text": "The generating function of the Fibonacci sequence is the power series",
"title": "Generating function"
},
{
"paragraph_id": 99,
"text": "This series is convergent for any complex number z {\\displaystyle z} satisfying | z | < 1 / φ , {\\displaystyle |z|<1/\\varphi ,} and its sum has a simple closed form:",
"title": "Generating function"
},
{
"paragraph_id": 100,
"text": "This can be proved by multiplying by ( 1 − z − z 2 ) {\\textstyle (1-z-z^{2})} :",
"title": "Generating function"
},
{
"paragraph_id": 101,
"text": "where all terms involving z k {\\displaystyle z^{k}} for k ≥ 2 {\\displaystyle k\\geq 2} cancel out because of the defining Fibonacci recurrence relation.",
"title": "Generating function"
},
{
"paragraph_id": 102,
"text": "The partial fraction decomposition is given by",
"title": "Generating function"
},
{
"paragraph_id": 103,
"text": "where φ = 1 2 ( 1 + 5 ) {\\textstyle \\varphi ={\\tfrac {1}{2}}\\left(1+{\\sqrt {5}}\\right)} is the golden ratio and ψ = 1 2 ( 1 − 5 ) {\\displaystyle \\psi ={\\tfrac {1}{2}}\\left(1-{\\sqrt {5}}\\right)} is its conjugate.",
"title": "Generating function"
},
{
"paragraph_id": 104,
"text": "The related function z ↦ − s ( − 1 / z ) {\\textstyle z\\mapsto -s\\left(-1/z\\right)} is the generating function for the negafibonacci numbers, and s ( z ) {\\displaystyle s(z)} satisfies the functional equation",
"title": "Generating function"
},
{
"paragraph_id": 105,
"text": "Using z {\\displaystyle z} equal to any of 0.01, 0.001, 0.0001, etc. lays out the first Fibonacci numbers in the decimal expansion of s ( z ) {\\displaystyle s(z)} . For example, s ( 0.001 ) = 0.001 0.998999 = 1000 998999 = 0.001001002003005008013021 … . {\\displaystyle s(0.001)={\\frac {0.001}{0.998999}}={\\frac {1000}{998999}}=0.001001002003005008013021\\ldots .}",
"title": "Generating function"
},
{
"paragraph_id": 106,
"text": "Infinite sums over reciprocal Fibonacci numbers can sometimes be evaluated in terms of theta functions. For example, the sum of every odd-indexed reciprocal Fibonacci number can be written as",
"title": "Reciprocal sums"
},
{
"paragraph_id": 107,
"text": "and the sum of squared reciprocal Fibonacci numbers as",
"title": "Reciprocal sums"
},
{
"paragraph_id": 108,
"text": "If we add 1 to each Fibonacci number in the first sum, there is also the closed form",
"title": "Reciprocal sums"
},
{
"paragraph_id": 109,
"text": "and there is a nested sum of squared Fibonacci numbers giving the reciprocal of the golden ratio,",
"title": "Reciprocal sums"
},
{
"paragraph_id": 110,
"text": "The sum of all even-indexed reciprocal Fibonacci numbers is",
"title": "Reciprocal sums"
},
{
"paragraph_id": 111,
"text": "with the Lambert series L ( q ) := ∑ k = 1 ∞ q k 1 − q k , {\\displaystyle \\textstyle L(q):=\\sum _{k=1}^{\\infty }{\\frac {q^{k}}{1-q^{k}}},} since 1 F 2 k = 5 ( ψ 2 k 1 − ψ 2 k − ψ 4 k 1 − ψ 4 k ) . {\\displaystyle \\textstyle {\\frac {1}{F_{2k}}}={\\sqrt {5}}\\left({\\frac {\\psi ^{2k}}{1-\\psi ^{2k}}}-{\\frac {\\psi ^{4k}}{1-\\psi ^{4k}}}\\right)\\!.}",
"title": "Reciprocal sums"
},
{
"paragraph_id": 112,
"text": "So the reciprocal Fibonacci constant is",
"title": "Reciprocal sums"
},
{
"paragraph_id": 113,
"text": "Moreover, this number has been proved irrational by Richard André-Jeannin.",
"title": "Reciprocal sums"
},
{
"paragraph_id": 114,
"text": "Millin's series gives the identity",
"title": "Reciprocal sums"
},
{
"paragraph_id": 115,
"text": "which follows from the closed form for its partial sums as N tends to infinity:",
"title": "Reciprocal sums"
},
{
"paragraph_id": 116,
"text": "Every third number of the sequence is even (a multiple of F 3 = 2 {\\displaystyle F_{3}=2} ) and, more generally, every kth number of the sequence is a multiple of Fk. Thus the Fibonacci sequence is an example of a divisibility sequence. In fact, the Fibonacci sequence satisfies the stronger divisibility property",
"title": "Primes and divisibility"
},
{
"paragraph_id": 117,
"text": "where gcd is the greatest common divisor function.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 118,
"text": "In particular, any three consecutive Fibonacci numbers are pairwise coprime because both F 1 = 1 {\\displaystyle F_{1}=1} and F 2 = 1 {\\displaystyle F_{2}=1} . That is,",
"title": "Primes and divisibility"
},
{
"paragraph_id": 119,
"text": "for every n.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 120,
"text": "Every prime number p divides a Fibonacci number that can be determined by the value of p modulo 5. If p is congruent to 1 or 4 modulo 5, then p divides Fp−1, and if p is congruent to 2 or 3 modulo 5, then, p divides Fp+1. The remaining case is that p = 5, and in this case p divides Fp.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 121,
"text": "These cases can be combined into a single, non-piecewise formula, using the Legendre symbol:",
"title": "Primes and divisibility"
},
{
"paragraph_id": 122,
"text": "The above formula can be used as a primality test in the sense that if",
"title": "Primes and divisibility"
},
{
"paragraph_id": 123,
"text": "where the Legendre symbol has been replaced by the Jacobi symbol, then this is evidence that n is a prime, and if it fails to hold, then n is definitely not a prime. If n is composite and satisfies the formula, then n is a Fibonacci pseudoprime. When m is large – say a 500-bit number – then we can calculate Fm (mod n) efficiently using the matrix form. Thus",
"title": "Primes and divisibility"
},
{
"paragraph_id": 124,
"text": "Here the matrix power A is calculated using modular exponentiation, which can be adapted to matrices.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 125,
"text": "A Fibonacci prime is a Fibonacci number that is prime. The first few are:",
"title": "Primes and divisibility"
},
{
"paragraph_id": 126,
"text": "Fibonacci primes with thousands of digits have been found, but it is not known whether there are infinitely many.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 127,
"text": "Fkn is divisible by Fn, so, apart from F4 = 3, any Fibonacci prime must have a prime index. As there are arbitrarily long runs of composite numbers, there are therefore also arbitrarily long runs of composite Fibonacci numbers.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 128,
"text": "No Fibonacci number greater than F6 = 8 is one greater or one less than a prime number.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 129,
"text": "The only nontrivial square Fibonacci number is 144. Attila Pethő proved in 2001 that there is only a finite number of perfect power Fibonacci numbers. In 2006, Y. Bugeaud, M. Mignotte, and S. Siksek proved that 8 and 144 are the only such non-trivial perfect powers.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 130,
"text": "1, 3, 21, and 55 are the only triangular Fibonacci numbers, which was conjectured by Vern Hoggatt and proved by Luo Ming.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 131,
"text": "No Fibonacci number can be a perfect number. More generally, no Fibonacci number other than 1 can be multiply perfect, and no ratio of two Fibonacci numbers can be perfect.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 132,
"text": "With the exceptions of 1, 8 and 144 (F1 = F2, F6 and F12) every Fibonacci number has a prime factor that is not a factor of any smaller Fibonacci number (Carmichael's theorem). As a result, 8 and 144 (F6 and F12) are the only Fibonacci numbers that are the product of other Fibonacci numbers.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 133,
"text": "The divisibility of Fibonacci numbers by a prime p is related to the Legendre symbol ( p 5 ) {\\displaystyle \\left({\\tfrac {p}{5}}\\right)} which is evaluated as follows:",
"title": "Primes and divisibility"
},
{
"paragraph_id": 134,
"text": "If p is a prime number then",
"title": "Primes and divisibility"
},
{
"paragraph_id": 135,
"text": "",
"title": "Primes and divisibility"
},
{
"paragraph_id": 136,
"text": "For example,",
"title": "Primes and divisibility"
},
{
"paragraph_id": 137,
"text": "It is not known whether there exists a prime p such that",
"title": "Primes and divisibility"
},
{
"paragraph_id": 138,
"text": "Such primes (if there are any) would be called Wall–Sun–Sun primes.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 139,
"text": "Also, if p ≠ 5 is an odd prime number then:",
"title": "Primes and divisibility"
},
{
"paragraph_id": 140,
"text": "Example 1. p = 7, in this case p ≡ 3 (mod 4) and we have:",
"title": "Primes and divisibility"
},
{
"paragraph_id": 141,
"text": "Example 2. p = 11, in this case p ≡ 3 (mod 4) and we have:",
"title": "Primes and divisibility"
},
{
"paragraph_id": 142,
"text": "Example 3. p = 13, in this case p ≡ 1 (mod 4) and we have:",
"title": "Primes and divisibility"
},
{
"paragraph_id": 143,
"text": "Example 4. p = 29, in this case p ≡ 1 (mod 4) and we have:",
"title": "Primes and divisibility"
},
{
"paragraph_id": 144,
"text": "For odd n, all odd prime divisors of Fn are congruent to 1 modulo 4, implying that all odd divisors of Fn (as the products of odd prime divisors) are congruent to 1 modulo 4.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 145,
"text": "For example,",
"title": "Primes and divisibility"
},
{
"paragraph_id": 146,
"text": "All known factors of Fibonacci numbers F(i ) for all i < 50000 are collected at the relevant repositories.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 147,
"text": "If the members of the Fibonacci sequence are taken mod n, the resulting sequence is periodic with period at most 6n. The lengths of the periods for various n form the so-called Pisano periods. Determining a general formula for the Pisano periods is an open problem, which includes as a subproblem a special instance of the problem of finding the multiplicative order of a modular integer or of an element in a finite field. However, for any particular n, the Pisano period may be found as an instance of cycle detection.",
"title": "Primes and divisibility"
},
{
"paragraph_id": 148,
"text": "The Fibonacci sequence is one of the simplest and earliest known sequences defined by a recurrence relation, and specifically by a linear difference equation. All these sequences may be viewed as generalizations of the Fibonacci sequence. In particular, Binet's formula may be generalized to any sequence that is a solution of a homogeneous linear difference equation with constant coefficients.",
"title": "Generalizations"
},
{
"paragraph_id": 149,
"text": "Some specific examples that are close, in some sense, to the Fibonacci sequence include:",
"title": "Generalizations"
},
{
"paragraph_id": 150,
"text": "The Fibonacci numbers occur as the sums of binomial coefficients in the \"shallow\" diagonals of Pascal's triangle:",
"title": "Applications"
},
{
"paragraph_id": 151,
"text": "This can be proved by expanding the generating function",
"title": "Applications"
},
{
"paragraph_id": 152,
"text": "and collecting like terms of x n {\\displaystyle x^{n}} .",
"title": "Applications"
},
{
"paragraph_id": 153,
"text": "To see how the formula is used, we can arrange the sums by the number of terms present:",
"title": "Applications"
},
{
"paragraph_id": 154,
"text": "which is ( 5 0 ) + ( 4 1 ) + ( 3 2 ) {\\displaystyle {\\binom {5}{0}}+{\\binom {4}{1}}+{\\binom {3}{2}}} , where we are choosing the positions of k twos from n−k−1 terms.",
"title": "Applications"
},
{
"paragraph_id": 155,
"text": "These numbers also give the solution to certain enumerative problems, the most common of which is that of counting the number of ways of writing a given number n as an ordered sum of 1s and 2s (called compositions); there are Fn+1 ways to do this (equivalently, it's also the number of domino tilings of the 2 × n {\\displaystyle 2\\times n} rectangle). For example, there are F5+1 = F6 = 8 ways one can climb a staircase of 5 steps, taking one or two steps at a time:",
"title": "Applications"
},
{
"paragraph_id": 156,
"text": "The figure shows that 8 can be decomposed into 5 (the number of ways to climb 4 steps, followed by a single-step) plus 3 (the number of ways to climb 3 steps, followed by a double-step). The same reasoning is applied recursively until a single step, of which there is only one way to climb.",
"title": "Applications"
},
{
"paragraph_id": 157,
"text": "The Fibonacci numbers can be found in different ways among the set of binary strings, or equivalently, among the subsets of a given set.",
"title": "Applications"
},
{
"paragraph_id": 158,
"text": "Fibonacci sequences appear in biological settings, such as branching in trees, arrangement of leaves on a stem, the fruitlets of a pineapple, the flowering of artichoke, the arrangement of a pine cone, and the family tree of honeybees. Kepler pointed out the presence of the Fibonacci sequence in nature, using it to explain the (golden ratio-related) pentagonal form of some flowers. Field daisies most often have petals in counts of Fibonacci numbers. In 1830, K. F. Schimper and A. Braun discovered that the parastichies (spiral phyllotaxis) of plants were frequently expressed as fractions involving Fibonacci numbers.",
"title": "Applications"
},
{
"paragraph_id": 159,
"text": "Przemysław Prusinkiewicz advanced the idea that real instances can in part be understood as the expression of certain algebraic constraints on free groups, specifically as certain Lindenmayer grammars.",
"title": "Applications"
},
{
"paragraph_id": 160,
"text": "A model for the pattern of florets in the head of a sunflower was proposed by Helmut Vogel [de] in 1979. This has the form",
"title": "Applications"
},
{
"paragraph_id": 161,
"text": "where n is the index number of the floret and c is a constant scaling factor; the florets thus lie on Fermat's spiral. The divergence angle, approximately 137.51°, is the golden angle, dividing the circle in the golden ratio. Because this ratio is irrational, no floret has a neighbor at exactly the same angle from the center, so the florets pack efficiently. Because the rational approximations to the golden ratio are of the form F( j):F( j + 1), the nearest neighbors of floret number n are those at n ± F( j) for some index j, which depends on r, the distance from the center. Sunflowers and similar flowers most commonly have spirals of florets in clockwise and counter-clockwise directions in the amount of adjacent Fibonacci numbers, typically counted by the outermost range of radii.",
"title": "Applications"
},
{
"paragraph_id": 162,
"text": "Fibonacci numbers also appear in the pedigrees of idealized honeybees, according to the following rules:",
"title": "Applications"
},
{
"paragraph_id": 163,
"text": "Thus, a male bee always has one parent, and a female bee has two. If one traces the pedigree of any male bee (1 bee), he has 1 parent (1 bee), 2 grandparents, 3 great-grandparents, 5 great-great-grandparents, and so on. This sequence of numbers of parents is the Fibonacci sequence. The number of ancestors at each level, Fn, is the number of female ancestors, which is Fn−1, plus the number of male ancestors, which is Fn−2. This is under the unrealistic assumption that the ancestors at each level are otherwise unrelated.",
"title": "Applications"
},
{
"paragraph_id": 164,
"text": "It has been noticed that the number of possible ancestors on the human X chromosome inheritance line at a given ancestral generation also follows the Fibonacci sequence. A male individual has an X chromosome, which he received from his mother, and a Y chromosome, which he received from his father. The male counts as the \"origin\" of his own X chromosome ( F 1 = 1 {\\displaystyle F_{1}=1} ), and at his parents' generation, his X chromosome came from a single parent ( F 2 = 1 {\\displaystyle F_{2}=1} ). The male's mother received one X chromosome from her mother (the son's maternal grandmother), and one from her father (the son's maternal grandfather), so two grandparents contributed to the male descendant's X chromosome ( F 3 = 2 {\\displaystyle F_{3}=2} ). The maternal grandfather received his X chromosome from his mother, and the maternal grandmother received X chromosomes from both of her parents, so three great-grandparents contributed to the male descendant's X chromosome ( F 4 = 3 {\\displaystyle F_{4}=3} ). Five great-great-grandparents contributed to the male descendant's X chromosome ( F 5 = 5 {\\displaystyle F_{5}=5} ), etc. (This assumes that all ancestors of a given descendant are independent, but if any genealogy is traced far enough back in time, ancestors begin to appear on multiple lines of the genealogy, until eventually a population founder appears on all lines of the genealogy.)",
"title": "Applications"
}
]
| In mathematics, the Fibonacci sequence is a sequence in which each number is the sum of the two preceding ones. Numbers that are part of the Fibonacci sequence are known as Fibonacci numbers, commonly denoted Fn . The sequence commonly starts from 0 and 1, although some authors start the sequence from 1 and 1 or sometimes from 1 and 2. Starting from 0 and 1, the sequence begins The Fibonacci numbers were first described in Indian mathematics as early as 200 BC in work by Pingala on enumerating possible patterns of Sanskrit poetry formed from syllables of two lengths. They are named after the Italian mathematician Leonardo of Pisa, also known as Fibonacci, who introduced the sequence to Western European mathematics in his 1202 book Liber Abaci. Fibonacci numbers appear unexpectedly often in mathematics, so much so that there is an entire journal dedicated to their study, the Fibonacci Quarterly. Applications of Fibonacci numbers include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, and the arrangement of a pine cone's bracts, though they do not occur in all species. Fibonacci numbers are also strongly related to the golden ratio: Binet's formula expresses the nth Fibonacci number in terms of n and the golden ratio, and implies that the ratio of two consecutive Fibonacci numbers tends to the golden ratio as n increases. Fibonacci numbers are also closely related to Lucas numbers, which obey the same recurrence relation and with the Fibonacci numbers form a complementary pair of Lucas sequences. | 2001-10-11T17:17:00Z | 2023-12-08T15:47:49Z | [
"Template:Efn",
"Template:Snd",
"Template:In Our Time",
"Template:OEIS el",
"Template:See also",
"Template:Wikibooks",
"Template:Circa",
"Template:Nowrap",
"Template:Math",
"Template:Clear",
"Template:Wikiquote",
"Template:Metallic ratios",
"Template:Fibonacci",
"Template:Short description",
"Template:YouTube",
"Template:Reflist",
"Template:Mvar",
"Template:Annotated link",
"Template:Cite OEIS",
"Template:Series (mathematics)",
"Template:For",
"Template:Further",
"Template:Springer",
"Template:Authority control",
"Template:Main",
"Template:Sfn",
"Template:Ill",
"Template:Slink",
"Template:Notelist",
"Template:Citation",
"Template:Interwiki extra",
"Template:Lang",
"Template:MathWorld",
"Template:Classes of natural numbers",
"Template:Anchor"
]
| https://en.wikipedia.org/wiki/Fibonacci_sequence |
10,923 | Fontainebleau | Fontainebleau (/ˈfɒntɪnbloʊ/ FON-tin-bloh, US also /-bluː/ -bloo, French: [fɔ̃tɛnblo] ) is a commune in the metropolitan area of Paris, France. It is located 55.5 kilometres (34.5 mi) south-southeast of the centre of Paris. Fontainebleau is a sub-prefecture of the Seine-et-Marne department, and it is the seat of the arrondissement of Fontainebleau. The commune has the largest land area in the Île-de-France region; it is the only one to cover a larger area than Paris itself. The commune is closest to Seine-et-Marne Prefecture, Melun.
Fontainebleau, together with the neighbouring commune of Avon and three other smaller communes, form an urban area of 36,724 inhabitants (2018). This urban area is a satellite of Paris.
Fontainebleau is renowned for the large and scenic forest of Fontainebleau, a favourite weekend getaway for Parisians, as well as for the historic Château de Fontainebleau, which once belonged to the kings of France. It is also the home of INSEAD, one of the world's most elite business schools.
Inhabitants of Fontainebleau are sometimes called Bellifontains.
Fontainebleau was recorded in the Latinised forms Fons Bleaudi, Fons Bliaudi, and Fons Blaadi in the 12th and 13th centuries, as Fontem blahaud in 1137, as Fontaine belle eau (folk etymology "fountain of beautiful water") in the 16th century, as Fontainebleau and Fontaine belle eau in 1630, and as the invented, fanciful Latin Fons Bellaqueus in the 17th century, which is the origin of the fanciful name Bellifontains of the inhabitants. Contrary to the folk etymology, the name comes from the medieval compound noun of fontaine, meaning spring (fountainhead) and fountain, and blitwald, consisting of the Germanic personal name Blit and the Germanic word for forest.
This hamlet was endowed with a royal hunting lodge and a chapel by Louis VII in the middle of the twelfth century. A century later, Louis IX, also called Saint Louis, who held Fontainebleau in high esteem and referred to it as "his wilderness", had a country house and a hospital constructed there.
Philip the Fair was born there in 1268 and died there in 1314. In all, thirty-four sovereigns, from Louis VI, the Fat, (1081–1137) to Napoleon III (1808–1873), spent time at Fontainebleau.
The connection between the town of Fontainebleau and the French monarchy was reinforced with the transformation of the royal country house into a true royal palace, the Palace of Fontainebleau. This was accomplished by the great builder-king, Francis I (1494–1547), who, in the largest of his many construction projects, reconstructed, expanded, and transformed the royal château at Fontainebleau into a residence that became his favourite, as well as the residence of his mistress, Anne, duchess of Étampes.
From the sixteenth to the eighteenth century, every monarch, from Francis I to Louis XV, made important renovations at the Palace of Fontainebleau, including demolitions, reconstructions, additions, and embellishments of various descriptions, all of which endowed it with a character that is a bit heterogeneous, but harmonious nonetheless.
On 18 October 1685, Louis XIV signed the Edict of Fontainebleau there. Also known as the Revocation of the Edict of Nantes, this royal fiat reversed the permission granted to the Huguenots in 1598 to worship publicly in specified locations and hold certain other privileges. The result was that a large number of Protestants were forced to convert to the Catholic faith, killed, or forced into exile, mainly in the Low Countries, Prussia and in England.
The 1762 Treaty of Fontainebleau, a secret agreement between France and Spain concerning the Louisiana territory in North America, was concluded here. Also, preliminary negotiations, held before the 1763 Treaty of Paris was signed, ending the Seven Years' War, were at Fontainebleau.
During the French Revolution, Fontainebleau was temporarily renamed Fontaine-la-Montagne, meaning "Fountain by the Mountain". (The mountain referred to is the series of rocky formations located in the forest of Fontainebleau.)
On 29 October 1807, Manuel Godoy, chancellor to the Spanish king, Charles IV and Napoleon signed the Treaty of Fontainebleau, which authorized the passage of French troops through Spanish territories so that they might invade Portugal.
On 20 June 1812, Pope Pius VII arrived at the château of Fontainebleau, after a secret transfer from Savona, accompanied by his personal physician, Balthazard Claraz. In poor health, the Pope was the prisoner of Napoleon, and he remained in his genteel prison at Fontainebleau for nineteen months. From June 1812 until 23 January 1814, the Pope never left his apartments.
On 20 April 1814, Napoleon Bonaparte, shortly before his first abdication, bid farewell to the Old Guard, the renowned grognards (gripers) who had served with him since his first campaigns, in the "White Horse Courtyard" (la cour du Cheval Blanc) at the Palace of Fontainebleau. (The courtyard has since been renamed the "Courtyard of Goodbyes".) According to contemporary sources, the occasion was very moving. The 1814 Treaty of Fontainebleau stripped Napoleon of his powers (but not his title as Emperor of the French) and sent him into exile on Elba.
Until the 19th century, Fontainebleau was a village and a suburb of Avon. Later, it developed as an independent residential city.
For the 1924 Summer Olympics, the town played host to the riding portion of the modern pentathlon event. This event took place near a golf course.
In July and August 1946, the town hosted the Franco-Vietnamese Conference, intended to find a solution to the long-contested struggle for Vietnam's independence from France, but the conference ended in failure.
Fontainebleau also hosted the general staff of the Allied Forces in Central Europe (Allied Forces Center or AFCENT) and the land forces command (LANDCENT); the air forces command (AIRCENT) was located nearby at Camp Guynemer. These facilities were in place from the inception of NATO until France's partial withdrawal from NATO in 1967 when the United States returned those bases to French control. NATO moved AFCENT to Brunssum in the Netherlands and AIRCENT to Ramstein in West Germany. (The Supreme Headquarters Allied Powers Europe, also known as SHAPE, was located at Rocquencourt, west of Paris, quite a distance from Fontainebleau).
In 2008, the men's World Championship of Real Tennis (Jeu de Paume) was held in the tennis court of the Chateau. The real tennis World Championship is the oldest in sport and Fontainebleau has one of only two active courts in France.
Fontainebleau is a popular tourist destination; each year, 300,000 people visit the palace and more than 13 million people visit the forest.
The forest of Fontainebleau surrounds the town and dozens of nearby villages. It is protected by France's Office National des Forêts, and it is recognised as a French national park. It is managed in order that its wild plants and trees, such as the rare service tree of Fontainebleau, and its populations of birds, mammals, and butterflies, can be conserved. It is a former royal hunting park often visited by hikers and horse riders. The forest is also well regarded for bouldering and is particularly popular among climbers, as it is the biggest developed area of that kind in the world.
The Royal Château de Fontainebleau is a large palace where the kings of France took their ease. It is also the site where the French royal court, from 1528 onwards, entertained the body of new ideas that became known as the Renaissance.
The European (and historic) campus of the INSEAD business school is located at the edge of Fontainebleau, by the Lycée François Couperin. INSEAD students live in local accommodations around the Fontainebleau area, and especially in the surrounding towns.
The graves of G. I. Gurdjieff and Katherine Mansfield can be found in the cemetery at Avon.
Fontainebleau is served by two stations on the Transilien Paris–Lyon rail line: Fontainebleau–Avon and Thomery. Fontainebleau–Avon station, the station closest to the centre of Fontainebleau, is located near the dividing-line between the commune of Fontainebleau and the commune of Avon, on the Avon side of the border.
Fontainebleau has a campus of the Centre hospitalier Sud Seine et Marne.
Fontainebleau is twinned with the following cities: | [
{
"paragraph_id": 0,
"text": "Fontainebleau (/ˈfɒntɪnbloʊ/ FON-tin-bloh, US also /-bluː/ -bloo, French: [fɔ̃tɛnblo] ) is a commune in the metropolitan area of Paris, France. It is located 55.5 kilometres (34.5 mi) south-southeast of the centre of Paris. Fontainebleau is a sub-prefecture of the Seine-et-Marne department, and it is the seat of the arrondissement of Fontainebleau. The commune has the largest land area in the Île-de-France region; it is the only one to cover a larger area than Paris itself. The commune is closest to Seine-et-Marne Prefecture, Melun.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Fontainebleau, together with the neighbouring commune of Avon and three other smaller communes, form an urban area of 36,724 inhabitants (2018). This urban area is a satellite of Paris.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Fontainebleau is renowned for the large and scenic forest of Fontainebleau, a favourite weekend getaway for Parisians, as well as for the historic Château de Fontainebleau, which once belonged to the kings of France. It is also the home of INSEAD, one of the world's most elite business schools.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Inhabitants of Fontainebleau are sometimes called Bellifontains.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Fontainebleau was recorded in the Latinised forms Fons Bleaudi, Fons Bliaudi, and Fons Blaadi in the 12th and 13th centuries, as Fontem blahaud in 1137, as Fontaine belle eau (folk etymology \"fountain of beautiful water\") in the 16th century, as Fontainebleau and Fontaine belle eau in 1630, and as the invented, fanciful Latin Fons Bellaqueus in the 17th century, which is the origin of the fanciful name Bellifontains of the inhabitants. Contrary to the folk etymology, the name comes from the medieval compound noun of fontaine, meaning spring (fountainhead) and fountain, and blitwald, consisting of the Germanic personal name Blit and the Germanic word for forest.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "This hamlet was endowed with a royal hunting lodge and a chapel by Louis VII in the middle of the twelfth century. A century later, Louis IX, also called Saint Louis, who held Fontainebleau in high esteem and referred to it as \"his wilderness\", had a country house and a hospital constructed there.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Philip the Fair was born there in 1268 and died there in 1314. In all, thirty-four sovereigns, from Louis VI, the Fat, (1081–1137) to Napoleon III (1808–1873), spent time at Fontainebleau.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The connection between the town of Fontainebleau and the French monarchy was reinforced with the transformation of the royal country house into a true royal palace, the Palace of Fontainebleau. This was accomplished by the great builder-king, Francis I (1494–1547), who, in the largest of his many construction projects, reconstructed, expanded, and transformed the royal château at Fontainebleau into a residence that became his favourite, as well as the residence of his mistress, Anne, duchess of Étampes.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "From the sixteenth to the eighteenth century, every monarch, from Francis I to Louis XV, made important renovations at the Palace of Fontainebleau, including demolitions, reconstructions, additions, and embellishments of various descriptions, all of which endowed it with a character that is a bit heterogeneous, but harmonious nonetheless.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "On 18 October 1685, Louis XIV signed the Edict of Fontainebleau there. Also known as the Revocation of the Edict of Nantes, this royal fiat reversed the permission granted to the Huguenots in 1598 to worship publicly in specified locations and hold certain other privileges. The result was that a large number of Protestants were forced to convert to the Catholic faith, killed, or forced into exile, mainly in the Low Countries, Prussia and in England.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The 1762 Treaty of Fontainebleau, a secret agreement between France and Spain concerning the Louisiana territory in North America, was concluded here. Also, preliminary negotiations, held before the 1763 Treaty of Paris was signed, ending the Seven Years' War, were at Fontainebleau.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "During the French Revolution, Fontainebleau was temporarily renamed Fontaine-la-Montagne, meaning \"Fountain by the Mountain\". (The mountain referred to is the series of rocky formations located in the forest of Fontainebleau.)",
"title": "History"
},
{
"paragraph_id": 12,
"text": "On 29 October 1807, Manuel Godoy, chancellor to the Spanish king, Charles IV and Napoleon signed the Treaty of Fontainebleau, which authorized the passage of French troops through Spanish territories so that they might invade Portugal.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "On 20 June 1812, Pope Pius VII arrived at the château of Fontainebleau, after a secret transfer from Savona, accompanied by his personal physician, Balthazard Claraz. In poor health, the Pope was the prisoner of Napoleon, and he remained in his genteel prison at Fontainebleau for nineteen months. From June 1812 until 23 January 1814, the Pope never left his apartments.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "On 20 April 1814, Napoleon Bonaparte, shortly before his first abdication, bid farewell to the Old Guard, the renowned grognards (gripers) who had served with him since his first campaigns, in the \"White Horse Courtyard\" (la cour du Cheval Blanc) at the Palace of Fontainebleau. (The courtyard has since been renamed the \"Courtyard of Goodbyes\".) According to contemporary sources, the occasion was very moving. The 1814 Treaty of Fontainebleau stripped Napoleon of his powers (but not his title as Emperor of the French) and sent him into exile on Elba.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Until the 19th century, Fontainebleau was a village and a suburb of Avon. Later, it developed as an independent residential city.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "For the 1924 Summer Olympics, the town played host to the riding portion of the modern pentathlon event. This event took place near a golf course.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In July and August 1946, the town hosted the Franco-Vietnamese Conference, intended to find a solution to the long-contested struggle for Vietnam's independence from France, but the conference ended in failure.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Fontainebleau also hosted the general staff of the Allied Forces in Central Europe (Allied Forces Center or AFCENT) and the land forces command (LANDCENT); the air forces command (AIRCENT) was located nearby at Camp Guynemer. These facilities were in place from the inception of NATO until France's partial withdrawal from NATO in 1967 when the United States returned those bases to French control. NATO moved AFCENT to Brunssum in the Netherlands and AIRCENT to Ramstein in West Germany. (The Supreme Headquarters Allied Powers Europe, also known as SHAPE, was located at Rocquencourt, west of Paris, quite a distance from Fontainebleau).",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In 2008, the men's World Championship of Real Tennis (Jeu de Paume) was held in the tennis court of the Chateau. The real tennis World Championship is the oldest in sport and Fontainebleau has one of only two active courts in France.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Fontainebleau is a popular tourist destination; each year, 300,000 people visit the palace and more than 13 million people visit the forest.",
"title": "Tourism"
},
{
"paragraph_id": 21,
"text": "The forest of Fontainebleau surrounds the town and dozens of nearby villages. It is protected by France's Office National des Forêts, and it is recognised as a French national park. It is managed in order that its wild plants and trees, such as the rare service tree of Fontainebleau, and its populations of birds, mammals, and butterflies, can be conserved. It is a former royal hunting park often visited by hikers and horse riders. The forest is also well regarded for bouldering and is particularly popular among climbers, as it is the biggest developed area of that kind in the world.",
"title": "Tourism"
},
{
"paragraph_id": 22,
"text": "The Royal Château de Fontainebleau is a large palace where the kings of France took their ease. It is also the site where the French royal court, from 1528 onwards, entertained the body of new ideas that became known as the Renaissance.",
"title": "Tourism"
},
{
"paragraph_id": 23,
"text": "The European (and historic) campus of the INSEAD business school is located at the edge of Fontainebleau, by the Lycee Francois Couperin. INSEAD students live in local accommodations around the Fontainebleau area, and especially in the surrounding towns.",
"title": "Tourism"
},
{
"paragraph_id": 24,
"text": "The graves of G. I. Gurdjieff and Katherine Mansfield can be found in the cemetery at Avon.",
"title": "Tourism"
},
{
"paragraph_id": 25,
"text": "Fontainebleau is served by two stations on the Transilien Paris–Lyon rail line: Fontainebleau–Avon and Thomery. Fontainebleau–Avon station, the station closest to the centre of Fontainebleau, is located near the dividing-line between the commune of Fontainebleau and the commune of Avon, on the Avon side of the border.",
"title": "Transport"
},
{
"paragraph_id": 26,
"text": "Fontainebleau has a campus of the Centre hospitalier Sud Seine et Marne.",
"title": "Hospital"
},
{
"paragraph_id": 27,
"text": "Fontainebleau is twinned with the following cities:",
"title": "Twinning"
}
]
| Fontainebleau is a commune in the metropolitan area of Paris, France. It is located 55.5 kilometres (34.5 mi) south-southeast of the centre of Paris. Fontainebleau is a sub-prefecture of the Seine-et-Marne department, and it is the seat of the arrondissement of Fontainebleau. The commune has the largest land area in the Île-de-France region; it is the only one to cover a larger area than Paris itself. The commune is closest to Seine-et-Marne Prefecture, Melun. Fontainebleau, together with the neighbouring commune of Avon and three other smaller communes, form an urban area of 36,724 inhabitants (2018). This urban area is a satellite of Paris. Fontainebleau is renowned for the large and scenic forest of Fontainebleau, a favourite weekend getaway for Parisians, as well as for the historic Château de Fontainebleau, which once belonged to the kings of France. It is also the home of INSEAD, one of the world's most elite business schools. Inhabitants of Fontainebleau are sometimes called Bellifontains. | 2002-02-25T15:43:11Z | 2023-12-27T22:59:14Z | [
"Template:Wikivoyage",
"Template:Seine-et-Marne communes",
"Template:Convert",
"Template:Flagicon",
"Template:Interlanguage link multi",
"Template:Cite web",
"Template:Paris Metropolitan Area",
"Template:1924 Summer Olympic venues",
"Template:Olympic venues modern pentathlon",
"Template:Authority control",
"Template:Use dmy dates",
"Template:IPA-fr",
"Template:Reflist",
"Template:ISBN",
"Template:Official website",
"Template:IPAc-en",
"Template:Infobox French commune",
"Template:Respell",
"Template:Historical populations",
"Template:In lang",
"Template:Commons category",
"Template:Other uses"
]
| https://en.wikipedia.org/wiki/Fontainebleau |
10,929 | Fighter aircraft | Fighter aircraft (early on also pursuit aircraft) are military aircraft designed primarily for air-to-air combat. In military conflict, the role of fighter aircraft is to establish air superiority of the battlespace. Domination of the airspace above a battlefield permits bombers and attack aircraft to engage in tactical and strategic bombing of enemy targets.
The key performance features of a fighter include not only its firepower but also its high speed and maneuverability relative to the target aircraft. The success or failure of a combatant's efforts to gain air superiority hinges on several factors including the skill of its pilots, the tactical soundness of its doctrine for deploying its fighters, and the numbers and performance of those fighters.
Many modern fighter aircraft also have secondary capabilities such as ground attack and some types, such as fighter-bombers, are designed from the outset for dual roles. Other fighter designs are highly specialized while still filling the main air superiority role, and these include the interceptor, heavy fighter, and night fighter.
Since World War I, achieving and maintaining air superiority has been considered essential for victory in conventional warfare.
Fighters continued to be developed throughout World War I, to deny enemy aircraft and dirigibles the ability to gather information by reconnaissance over the battlefield. Early fighters were very small and lightly armed by later standards, and most were biplanes built with a wooden frame covered with fabric, with a maximum airspeed of about 100 mph (160 km/h). As control of the airspace over armies became increasingly important, all of the major powers developed fighters to support their military operations. Between the wars, wood was replaced in part or in whole by metal tubing, and eventually stressed-skin aluminum (monocoque) structures began to predominate.
By World War II, most fighters were all-metal monoplanes armed with batteries of machine guns or cannons, and some were capable of speeds approaching 400 mph (640 km/h). Most fighters up to this point had one engine, but a number of twin-engine fighters were built; however, they were found to be outmatched by single-engine fighters and were relegated to other tasks, such as serving as night fighters equipped with primitive radar sets.
By the end of the war, turbojet engines were replacing piston engines as the means of propulsion, further increasing aircraft speed. Since the weight of the turbojet engine was far less than a piston engine, having two engines was no longer a handicap and one or two were used, depending on requirements. This in turn required the development of ejection seats so the pilot could escape, and G-suits to counter the much greater forces being applied to the pilot during maneuvers.
In the 1950s, radar was fitted to day fighters, since, with ever-increasing air-to-air weapon ranges, pilots could no longer see far enough ahead to prepare for the opposition. Subsequently, radar capabilities grew enormously and are now the primary method of target acquisition. Wings were made thinner and swept back to reduce transonic drag, which required new manufacturing methods to obtain sufficient strength. Skins were no longer sheet metal riveted to a structure, but milled from large slabs of alloy. The sound barrier was broken, and after a few false starts due to the required changes in controls, speeds quickly reached Mach 2, past which aircraft cannot maneuver sufficiently to avoid attack.
Air-to-air missiles largely replaced guns and rockets in the early 1960s, since both were believed unusable at the speeds being attained; however, the Vietnam War showed that guns still had a role to play, and most fighters built since then are fitted with cannon (typically between 20 and 30 mm (0.79 and 1.18 in) in caliber) in addition to missiles. Most modern combat aircraft can carry at least a pair of air-to-air missiles.
In the 1970s, turbofans replaced turbojets, improving fuel economy enough that the last piston-engined support aircraft could be replaced with jets, making multi-role combat aircraft possible. Honeycomb structures began to replace milled structures, and the first composite parts began to appear on components subjected to little stress.
With the steady improvements in computers, defensive systems have become increasingly efficient. To counter this, stealth technologies have been pursued by the United States, Russia, India and China. The first step was to find ways to reduce the aircraft's reflectivity to radar waves by burying the engines, eliminating sharp corners and diverting any reflections away from the radar sets of opposing forces. Various materials were found to absorb the energy from radar waves, and were incorporated into special finishes that have since found widespread application. Composite structures have become widespread, including major structural components, and have helped to counterbalance the steady increases in aircraft weight—most modern fighters are larger and heavier than World War II medium bombers.
Because of the importance of air superiority, since the early days of aerial combat armed forces have constantly competed to develop technologically superior fighters and to deploy these fighters in greater numbers, and fielding a viable fighter fleet consumes a substantial proportion of the defense budgets of modern armed forces.
The global combat aircraft market was worth $45.75 billion in 2017 and is projected by Frost & Sullivan at $47.2 billion in 2026: 35% modernization programs and 65% aircraft purchases, dominated by the Lockheed Martin F-35 with 3,000 deliveries over 20 years.
A fighter aircraft is primarily designed for air-to-air combat. A given type may be designed for specific combat conditions, and in some cases for additional roles such as air-to-ground fighting. Historically, the British Royal Flying Corps and Royal Air Force referred to them as "scouts" until the early 1920s, when the UK changed to calling them fighters, while the U.S. Army called them "pursuit" aircraft until the late 1940s, when it too adopted the term fighter. A short-range fighter designed to defend against incoming enemy aircraft is known as an interceptor.
Recognized classes of fighter include the air superiority fighter, the interceptor, the night (all-weather) fighter, the strategic fighter with its escort and penetration variants, the fighter-bomber, the reconnaissance fighter and the strike fighter.
Of these, the fighter-bomber, reconnaissance fighter and strike fighter classes are dual-role, possessing qualities of the fighter alongside some other battlefield role. Some fighter designs may be developed in variants performing other roles entirely, such as ground attack or unarmed reconnaissance. This may be for political or national security reasons, for advertising purposes, or other reasons.
The Sopwith Camel and other "fighting scouts" of World War I performed a great deal of ground-attack work. In World War II, the USAAF and RAF often favored fighters over dedicated light bombers or dive bombers, and types such as the Republic P-47 Thunderbolt and Hawker Hurricane that were no longer competitive as aerial combat fighters were relegated to ground attack. Several aircraft, such as the F-111 and F-117, have received fighter designations though they had no fighter capability, due to political or other reasons. The F-111B variant was originally intended for a fighter role with the U.S. Navy, but it was canceled. This blurring follows the use of fighters from their earliest days for "attack" or "strike" operations against ground targets by means of strafing or dropping small bombs and incendiaries. Versatile multirole fighter-bombers such as the McDonnell Douglas F/A-18 Hornet are a less expensive option than having a range of specialized aircraft types.
Some of the most expensive fighters such as the US Grumman F-14 Tomcat, McDonnell Douglas F-15 Eagle, Lockheed Martin F-22 Raptor and Russian Sukhoi Su-27 were employed as all-weather interceptors as well as air superiority fighter aircraft, while commonly developing air-to-ground roles late in their careers. An interceptor is generally an aircraft intended to target (or intercept) bombers and so often trades maneuverability for climb rate.
As a part of military nomenclature, a letter is often assigned to various types of aircraft to indicate their use, along with a number to indicate the specific aircraft. The letters used to designate a fighter differ in various countries. In the English-speaking world, "F" is now often used to indicate a fighter (e.g. Lockheed Martin F-35 Lightning II or Supermarine Spitfire F.22), though "P" was formerly used in the US for pursuit (e.g. Curtiss P-40 Warhawk), a translation of the French "C" (Dewoitine D.520 C.1) for Chasseur, while in Russia "I" was used for Istrebitel, or exterminator (Polikarpov I-16).
As fighter types have proliferated, the air superiority fighter emerged as a specific role at the pinnacle of speed, maneuverability, and air-to-air weapon systems – able to hold its own against all other fighters and establish its dominance in the skies above the battlefield.
The interceptor is a fighter designed specifically to intercept and engage approaching enemy aircraft. There are two general classes of interceptor: relatively lightweight aircraft in the point-defence role, built for fast reaction, high performance and with a short range, and heavier aircraft with more comprehensive avionics and designed to fly at night or in all weathers and to operate over longer ranges. Originating during World War I, by 1929 this class of fighters had become known as the interceptor.
The equipment necessary for daytime flight is inadequate when flying at night or in poor visibility. The night fighter was developed during World War I with additional equipment to aid the pilot in flying straight, navigating and finding the target. From modified variants of the Royal Aircraft Factory B.E.2c in 1915, the night fighter has evolved into the highly capable all-weather fighter.
The strategic fighter is a fast, heavily armed and long-range type, able to act as an escort fighter protecting bombers, to carry out offensive sorties of its own as a penetration fighter and maintain standing patrols at significant distance from its home base.
Bombers are vulnerable due to their low speed, large size and poor maneuverability. The escort fighter was developed during World War II to come between the bombers and enemy attackers as a protective shield. The primary requirement was for long range, with several heavy fighters given the role. However, they too proved unwieldy and vulnerable, so as the war progressed techniques such as drop tanks were developed to extend the range of more nimble conventional fighters.
The penetration fighter is typically also fitted for the ground-attack role, and so is able to defend itself while conducting attack sorties.
The word "fighter" was first used to describe a two-seat aircraft carrying a machine gun (mounted on a pedestal) and its operator as well as the pilot. Although the term was coined in the United Kingdom, the first examples were the French Voisin pushers beginning in 1910, and a Voisin III would be the first to shoot down another aircraft, on 5 October 1914.
However, at the outbreak of World War I, front-line aircraft were mostly unarmed and used almost exclusively for reconnaissance. On 15 August 1914, Miodrag Tomić encountered an enemy airplane while on a reconnaissance flight over Austria-Hungary; its pilot fired at his aircraft with a revolver, and Tomić fired back. It was believed to be the first exchange of fire between aircraft. Within weeks, all Serbian and Austro-Hungarian aircraft were armed.
Another type of military aircraft formed the basis for an effective "fighter" in the modern sense of the word. It was based on small, fast aircraft developed before the war for air racing contests such as the Gordon Bennett Cup and the Schneider Trophy. The military scout airplane was not expected to carry serious armament, but rather to rely on speed to "scout" a location and return quickly to report, making it, in effect, a flying horse. British scout aircraft, in this sense, included the Sopwith Tabloid and Bristol Scout. The French and the Germans did not have an equivalent, as they used two-seaters for reconnaissance, such as the Morane-Saulnier L, but they would later modify pre-war racing aircraft into armed single-seaters. It was quickly found that such scouts were of little use for reconnaissance, since the pilot could not record what he saw while also flying, and military leaders usually ignored what the pilots reported.
Attempts were made with handheld weapons such as pistols and rifles and even light machine guns, but these were ineffective and cumbersome. The next advance came with the fixed forward-firing machine gun, so that the pilot pointed the entire aircraft at the target and fired the gun, instead of relying on a second gunner. Roland Garros bolted metal deflector plates to the propeller so that it would not shoot itself out of the sky, and a number of Morane-Saulnier Ns were modified. The technique proved effective; however, the deflected bullets were still highly dangerous.
Soon after the commencement of the war, pilots armed themselves with pistols, carbines, grenades, and an assortment of improvised weapons. Many of these proved ineffective, as the pilot had to fly his airplane while attempting to aim a handheld weapon and make a difficult deflection shot. The first step in finding a real solution was to mount the weapon on the aircraft, but the propeller remained a problem since the best direction to shoot is straight ahead. Numerous solutions were tried. A second crew member behind the pilot could aim and fire a swivel-mounted machine gun at enemy airplanes; however, this limited the area of coverage chiefly to the rear hemisphere, and effective coordination of the pilot's maneuvering with the gunner's aiming was difficult. This option was chiefly employed as a defensive measure on two-seater reconnaissance aircraft from 1915 on. Both the SPAD S.A and the Royal Aircraft Factory B.E.9 added a second crewman ahead of the engine in a pod, but this was hazardous to the second crewman and limited performance. The Sopwith L.R.T.Tr. similarly added a pod on the top wing, with no better luck.
An alternative was to build a "pusher" scout such as the Airco DH.2, with the propeller mounted behind the pilot. The main drawback was that the high drag of a pusher type's tail structure made it slower than a similar "tractor" aircraft. A better solution for a single-seat scout was to mount the machine gun (rifles and pistols having been dispensed with) to fire forwards but outside the propeller arc. Wing guns were tried, but the unreliable weapons available required frequent clearing of jammed rounds and misfires, and they remained impractical until after the war. Mounting the machine gun over the top wing worked well and was used long after the ideal solution was found. The Nieuport 11 of 1916 used this system with considerable success; the placement made aiming and reloading difficult, but it continued to be used throughout the war because the weapons mounted this way were lighter and had a higher rate of fire than synchronized weapons. The British Foster mounting and several French mountings were specifically designed for this kind of application, fitted with either the Hotchkiss or Lewis machine gun, which due to their design were unsuitable for synchronizing.
The need to arm a tractor scout with a forward-firing gun whose bullets passed through the propeller arc was evident even before the outbreak of war, and inventors in both France and Germany devised mechanisms that could time the firing of the individual rounds to avoid hitting the propeller blades. Franz Schneider, a Swiss engineer, had patented such a device in Germany in 1913, but his original work was not followed up. French aircraft designer Raymond Saulnier patented a practical device in April 1914, but trials were unsuccessful because of the propensity of the machine gun employed to hang fire due to unreliable ammunition. In December 1914, French aviator Roland Garros asked Saulnier to install his synchronization gear on Garros' Morane-Saulnier Type L parasol monoplane. Unfortunately, the gas-operated Hotchkiss machine gun he was provided had an erratic rate of fire, making it impossible to synchronize with the propeller. As an interim measure, the propeller blades were fitted with metal wedges to protect them from ricochets. Garros' modified monoplane first flew in March 1915 and he began combat operations soon after. Garros scored three victories in three weeks before he himself was downed on 18 April, and his airplane, along with its synchronization gear and propeller, was captured by the Germans.
Meanwhile, the synchronization gear (called the Stangensteuerung in German, for "pushrod control system") devised by the engineers of Anthony Fokker's firm was the first system to enter service. It would usher in what the British called the "Fokker scourge" and a period of air superiority for the German forces, making the Fokker Eindecker monoplane a feared name over the Western Front, despite its being an adaptation of an obsolete pre-war French Morane-Saulnier racing airplane, with poor flight characteristics and, by then, mediocre performance. The first Eindecker victory came on 1 July 1915, when Leutnant Kurt Wintgens, of Feldflieger Abteilung 6 on the Western Front, downed a Morane-Saulnier Type L. His was one of five Fokker M.5K/MG prototypes for the Eindecker, and was armed with a synchronized aviation version of the Parabellum MG14 machine gun. The success of the Eindecker kicked off a competitive cycle of improvement among the combatants, both sides striving to build ever more capable single-seat fighters.
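A rough sense of why firing forward through the propeller arc demanded deflector wedges or synchronization can be had from a simple estimate. The sketch below is illustrative only; the blade width and rate of fire are assumed figures, not measurements of any particular aircraft or gun.

# Simplified estimate: share of rounds from an UNsynchronized forward-firing
# gun that would strike a spinning two-blade propeller.  All figures assumed.
blades = 2                  # two-blade propeller
blade_width_deg = 12.0      # assumed angular width of one blade at the gun line
rate_of_fire = 500          # rounds per minute (assumed)

# Without synchronization, shots land at effectively random blade positions,
# so the strike fraction is simply the share of the circle the blades occupy
# (independent of how fast the propeller turns).
blocked_fraction = blades * blade_width_deg / 360.0
print(f"Share of rounds expected to strike a blade: {blocked_fraction:.0%}")
print(f"Expected blade strikes per minute of fire: {rate_of_fire * blocked_fraction:.0f}")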
The Albatros D.I and Sopwith Pup of 1916 set the classic pattern followed by fighters for about twenty years. Most were biplanes; monoplanes and triplanes were rare. The strong box structure of the biplane provided a rigid wing that allowed the accurate control essential for dogfighting. They had a single operator, who flew the aircraft and also controlled its armament. They were armed with one or two Maxim or Vickers machine guns, which were easier to synchronize than other types, firing through the propeller arc. Gun breeches were in front of the pilot, with obvious implications in case of accidents, but jams could be cleared in flight, while aiming was simplified.
The use of metal aircraft structures was pioneered before World War I by Breguet but would find its biggest proponent in Anthony Fokker, who used chrome-molybdenum steel tubing for the fuselage structure of all his fighter designs, while the innovative German engineer Hugo Junkers developed two all-metal, single-seat fighter monoplane designs with cantilever wings: the strictly experimental Junkers J 2 private-venture aircraft, made with steel, and some forty examples of the Junkers D.I, made with corrugated duralumin, all based on his experience in creating the pioneering Junkers J 1 all-metal airframe technology demonstration aircraft of late 1915. While Fokker would pursue steel tube fuselages with wooden wings until the late 1930s, and Junkers would focus on corrugated sheet metal, Dornier was the first to build a fighter (the Dornier-Zeppelin D.I) made with pre-stressed sheet aluminum and having cantilevered wings, a form that would replace all others in the 1930s. As collective combat experience grew, the more successful pilots such as Oswald Boelcke, Max Immelmann, and Edward Mannock developed innovative tactical formations and maneuvers to enhance their air units' combat effectiveness.
Allied and – before 1918 – German pilots of World War I were not equipped with parachutes, so in-flight fires or structural failures were often fatal. Parachutes were well developed by 1918, having previously been used by balloonists, and were adopted by the German flying services during the course of that year. The well-known and feared Manfred von Richthofen, the "Red Baron", was wearing one when he was killed, but the Allied command continued to oppose their use on various grounds.
In April 1917, during a brief period of German aerial supremacy, a British pilot's life expectancy was calculated at an average of 93 flying hours, or about three weeks of active service. More than 50,000 airmen from both sides died during the war.
Fighter development stagnated between the wars, especially in the United States and the United Kingdom, where budgets were small. In France, Italy and Russia, where large budgets continued to allow major development, both monoplanes and all-metal structures were common. By the end of the 1920s, however, those countries had overspent themselves and were overtaken in the 1930s by those powers that had not been spending heavily, namely the British, the Americans and the Germans.
Given limited budgets, air forces were conservative in aircraft design, and biplanes, which remained popular with pilots for their agility, stayed in service long after they had ceased to be competitive. Designs such as the Gloster Gladiator, Fiat CR.42 Falco, and Polikarpov I-15 were common even in the late 1930s, and many were still in service as late as 1942. Up until the mid-1930s, the majority of fighters in the US, the UK, Italy and Russia remained fabric-covered biplanes.
Fighter armament eventually began to be mounted inside the wings, outside the arc of the propeller, though most designs retained two synchronized machine guns directly ahead of the pilot, where they were more accurate (that being the strongest part of the structure, reducing the vibration to which the guns were subjected). Shooting with this traditional arrangement was also easier because the guns fired directly ahead along the aircraft's line of flight, out to the limit of their range, unlike wing-mounted guns, which, to be effective, had to be harmonised, that is, preset by ground crews to fire at a slight inward angle so that their bullets would converge on a target area a set distance ahead of the fighter. Rifle-calibre guns of .30 and .303 in (7.62 and 7.70 mm) remained the norm, with larger weapons either being too heavy and cumbersome or deemed unnecessary against such lightly built aircraft. It was not considered unreasonable to use World War I-style armament to counter enemy fighters, as there was insufficient air-to-air combat during most of the period to disprove this notion.
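As an illustration of harmonisation, the toe-in angle for a wing gun follows from simple trigonometry. The gun offset and convergence distance below are assumed purely for illustration; actual convergence ranges were set by individual air forces, squadrons and pilots.

import math

# Assumed, illustrative figures: a wing gun mounted 2 m outboard of the
# centreline, harmonised so its fire converges on the centreline 200 m ahead.
gun_offset_m = 2.0
convergence_range_m = 200.0

# The gun must be toed in by atan(offset / range).
toe_in_deg = math.degrees(math.atan(gun_offset_m / convergence_range_m))
print(f"Toe-in angle: {toe_in_deg:.2f} degrees")  # roughly 0.6 degrees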
The rotary engine, popular during World War I, quickly disappeared, its development having reached the point where rotational forces prevented more fuel and air from being delivered to the cylinders, which limited horsepower. It was replaced chiefly by the stationary radial engine, though major advances led to inline engines gaining ground with several exceptional engines—including the 1,145 cu in (18,760 cm³) V-12 Curtiss D-12. Aircraft engines increased in power several-fold over the period, going from a typical 180 hp (130 kW) in the 900 kg (2,000 lb) Fokker D.VII of 1918 to 900 hp (670 kW) in the 2,500 kg (5,500 lb) Curtiss P-36 of 1936. The debate between sleek inline engines and the more reliable radial models continued, with naval air forces preferring the radial engines, and land-based forces often choosing inlines. Radial designs did not require a separate (and vulnerable) radiator, but had increased drag. Inline engines often had a better power-to-weight ratio.
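To put the power-to-weight point in concrete terms, the figures quoted above can be compared directly. This is only a rough sketch: the weights given are approximate typical values, not exact loaded weights for any specific variant.

# Rough power-to-weight comparison using the figures quoted in the text.
fighters = {
    "Fokker D.VII (1918)": {"power_hp": 180, "weight_kg": 900},
    "Curtiss P-36 (1936)": {"power_hp": 900, "weight_kg": 2500},
}

for name, spec in fighters.items():
    ratio = spec["power_hp"] / spec["weight_kg"]
    print(f"{name}: {ratio:.2f} hp/kg")
# About 0.20 hp/kg for the D.VII versus 0.36 hp/kg for the P-36, showing
# engine power outpacing airframe weight over the inter-war period.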
Some air forces experimented with "heavy fighters" (called "destroyers" by the Germans). These were larger, usually twin-engined aircraft, sometimes adaptations of light or medium bomber types. Such designs typically had greater internal fuel capacity (thus longer range) and heavier armament than their single-engine counterparts. In combat, they proved vulnerable to more agile single-engine fighters.
The primary driver of fighter innovation, right up to the period of rapid re-armament in the late 1930s, was not military budgets but civilian aircraft racing. Aircraft designed for these races introduced innovations like streamlining and more powerful engines that would find their way into the fighters of World War II. The most significant of these contests were the Schneider Trophy races, where competition grew so fierce that only national governments could afford to enter.
At the very end of the inter-war period in Europe came the Spanish Civil War. This was just the opportunity the German Luftwaffe, Italian Regia Aeronautica, and the Soviet Union's Voenno-Vozdushnye Sily needed to test their latest aircraft. Each party sent numerous aircraft types to support their sides in the conflict. In the dogfights over Spain, the latest Messerschmitt Bf 109 fighters did well, as did the Soviet Polikarpov I-16. The German design, however, was earlier in its design cycle and had more room for development, and the lessons learned led to greatly improved models in World War II. The Russians failed to keep up: despite newer models coming into service, the I-16 remained the most common Soviet front-line fighter into 1942, by which time it was outclassed by the improved Bf 109s. For their part, the Italians developed several monoplanes such as the Fiat G.50 Freccia, but, being short on funds, were forced to continue operating obsolete Fiat CR.42 Falco biplanes.
From the early 1930s the Japanese were at war against both the Chinese Nationalists and the Russians in China, and used the experience to improve both training and aircraft, replacing biplanes with modern cantilever monoplanes and creating a cadre of exceptional pilots. In the United Kingdom, at the behest of Neville Chamberlain (more famous for his 'peace in our time' speech), the entire British aviation industry was retooled, allowing it to change quickly from fabric-covered, metal-framed biplanes to cantilever stressed-skin monoplanes in time for the war with Germany, a process that France attempted to emulate, but too late to counter the German invasion. The period of improving the same biplane design over and over was now coming to an end, and the Hawker Hurricane and Supermarine Spitfire started to supplant the Gloster Gladiator and Hawker Fury biplanes, but many biplanes remained in front-line service well past the start of World War II. Though not a combatant in Spain, the British too absorbed many of its lessons in time to use them.
The Spanish Civil War also provided an opportunity for updating fighter tactics. One of the innovations was the development of the "finger-four" formation by the German pilot Werner Mölders. Each fighter squadron (German: Staffel) was divided into several flights (Schwärme) of four aircraft. Each Schwarm was divided into two Rotten, each a pair of aircraft. Each Rotte was composed of a leader and a wingman. This flexible formation allowed the pilots to maintain greater situational awareness, and the two Rotten could split up at any time and attack on their own. The finger-four would be widely adopted as the fundamental tactical formation during World War II, including by the British and later the Americans.
World War II featured fighter combat on a larger scale than any other conflict to date. German Field Marshal Erwin Rommel noted the effect of airpower: "Anyone who has to fight, even with the most modern weapons, against an enemy in complete command of the air, fights like a savage…" Throughout the war, fighters performed their conventional role in establishing air superiority through combat with other fighters and through bomber interception, and also often performed roles such as tactical air support and reconnaissance.
Fighter design varied widely among combatants. The Japanese and Italians favored lightly armed and armored but highly maneuverable designs such as the Japanese Nakajima Ki-27, Nakajima Ki-43 and Mitsubishi A6M Zero and the Italian Fiat G.50 Freccia and Macchi MC.200. In contrast, designers in the United Kingdom, Germany, the Soviet Union, and the United States believed that the increased speed of fighter aircraft would create g-forces unbearable to pilots who attempted maneuvering dogfights typical of the First World War, and their fighters were instead optimized for speed and firepower. In practice, while light, highly maneuverable aircraft did possess some advantages in fighter-versus-fighter combat, those could usually be overcome by sound tactical doctrine, and the design approach of the Italians and Japanese made their fighters ill-suited as interceptors or attack aircraft.
During the invasion of Poland and the Battle of France, Luftwaffe fighters—primarily the Messerschmitt Bf 109—held air superiority, and the Luftwaffe played a major role in German victories in these campaigns. During the Battle of Britain, however, British Hurricanes and Spitfires proved roughly equal to Luftwaffe fighters. Additionally, Britain's radar-based Dowding system for directing fighters onto German attacks, and the advantages of fighting above Britain's home territory, allowed the RAF to deny Germany air superiority, saving the UK from possible German invasion and dealing the Axis a major defeat early in the Second World War.
On the Eastern Front, Soviet fighter forces were overwhelmed during the opening phases of Operation Barbarossa. This was a result of the tactical surprise at the outset of the campaign, the leadership vacuum within the Soviet military left by the Great Purge, and the general inferiority of Soviet designs at the time, such as the obsolescent Polikarpov I-15 biplane and the I-16. More modern Soviet designs, including the Mikoyan-Gurevich MiG-3, LaGG-3 and Yakovlev Yak-1, had not yet arrived in numbers, and in any case were still inferior to the Messerschmitt Bf 109. As a result, during the early months of these campaigns, Axis air forces destroyed large numbers of Red Air Force aircraft on the ground and in one-sided dogfights.
In the later stages of the war on the Eastern Front, Soviet training and leadership improved, as did their equipment. By 1942, Soviet designs such as the Yakovlev Yak-9 and Lavochkin La-5 had performance comparable to the German Bf 109 and Focke-Wulf Fw 190. Also, significant numbers of British, and later U.S., fighter aircraft were supplied to aid the Soviet war effort as part of Lend-Lease, with the Bell P-39 Airacobra proving particularly effective in the lower-altitude combat typical of the Eastern Front. The Soviets were also helped indirectly by the American and British bombing campaigns, which forced the Luftwaffe to shift many of its fighters away from the Eastern Front in defense against these raids. The Soviets were increasingly able to challenge the Luftwaffe, and while the Luftwaffe maintained a qualitative edge over the Red Air Force for much of the war, the increasing numbers and efficacy of the Soviet Air Force were critical to the Red Army's efforts at turning back and eventually annihilating the Wehrmacht.
Meanwhile, air combat on the Western Front had a much different character. Much of this combat focused on the strategic bombing campaigns of the RAF and the USAAF against German industry, intended to wear down the Luftwaffe. Axis fighter aircraft focused on defending against Allied bombers, while Allied fighters' main role was as bomber escorts. The RAF raided German cities at night, and both sides developed radar-equipped night fighters for these battles. The Americans, in contrast, flew daylight bombing raids into Germany delivering the Combined Bomber Offensive. Unescorted Consolidated B-24 Liberators and Boeing B-17 Flying Fortress bombers, however, proved unable to fend off German interceptors (primarily Bf 109s and Fw 190s). With the later arrival of long-range fighters, particularly the North American P-51 Mustang, American fighters were able to escort the bombers deep into Germany on daylight raids and, by ranging ahead, wore down the Luftwaffe and established control of the skies over Western Europe.
By the time of Operation Overlord in June 1944, the Allies had gained near complete air superiority over the Western Front. This cleared the way both for intensified strategic bombing of German cities and industries, and for the tactical bombing of battlefield targets. With the Luftwaffe largely cleared from the skies, Allied fighters increasingly served as ground attack aircraft.
Allied fighters, by gaining air superiority over the European battlefield, played a crucial role in the eventual defeat of the Axis, which Reichsmarschall Hermann Göring, commander of the German Luftwaffe, summed up when he said: "When I saw Mustangs over Berlin, I knew the jig was up."
Major air combat during the war in the Pacific began with the entry of the Western Allies following Japan's attack against Pearl Harbor. The Imperial Japanese Navy Air Service primarily operated the Mitsubishi A6M Zero, and the Imperial Japanese Army Air Service flew the Nakajima Ki-27 and the Nakajima Ki-43, initially enjoying great success, as these fighters generally had better range, maneuverability, speed and climb rates than their Allied counterparts. Additionally, Japanese pilots were well trained and many were combat veterans from Japan's campaigns in China. They quickly gained air superiority over the Allies, who at this stage of the war were often disorganized, under-trained and poorly equipped, and Japanese air power contributed significantly to their successes in the Philippines, Malaya and Singapore, the Dutch East Indies and Burma.
By mid-1942, the Allies began to regroup, and while some Allied aircraft such as the Brewster Buffalo and the P-39 Airacobra were hopelessly outclassed by fighters like Japan's Mitsubishi A6M Zero, others such as the Army's Curtiss P-40 Warhawk and the Navy's Grumman F4F Wildcat possessed attributes such as superior firepower, ruggedness and dive speed, and the Allies soon developed tactics (such as the Thach Weave) to take advantage of these strengths. These changes soon paid dividends, as the Allied ability to deny Japan air superiority was critical to their victories at Coral Sea, Midway, Guadalcanal and New Guinea. In China, the Flying Tigers also used the same tactics with some success, although they were unable to stem the tide of Japanese advances there.
By 1943, the Allies began to gain the upper hand in the air campaigns of the Pacific theater. Several factors contributed to this shift. First, the Lockheed P-38 Lightning and second-generation Allied fighters such as the Grumman F6F Hellcat and later the Vought F4U Corsair, the Republic P-47 Thunderbolt and the North American P-51 Mustang began arriving in numbers. These fighters outperformed Japanese fighters in all respects except maneuverability. Other problems with Japan's fighter aircraft also became apparent as the war progressed, such as their lack of armor and light armament, which had been typical of all pre-war fighters worldwide, but the problem was particularly difficult to rectify on the Japanese designs. This made them inadequate as either bomber-interceptors or ground-attack aircraft, roles Allied fighters were still able to fill. Most importantly, Japan's training program failed to provide enough well-trained pilots to replace losses. In contrast, the Allies improved both the quantity and quality of pilots graduating from their training programs.
By mid-1944, Allied fighters had gained air superiority throughout the theater, which would not be contested again during the war. The extent of Allied quantitative and qualitative superiority by this point in the war was demonstrated during the Battle of the Philippine Sea, a lopsided Allied victory in which Japanese fliers were shot down in such numbers and with such ease that American fighter pilots likened it to a great 'turkey shoot'. Late in the war, Japan began to produce new fighters such as the Nakajima Ki-84 and the Kawanishi N1K to replace the Zero, but only in small numbers, and by then Japan lacked the trained pilots or sufficient fuel to mount an effective challenge to Allied attacks. During the closing stages of the war, Japan's fighter arm could not seriously challenge raids over Japan by American Boeing B-29 Superfortresses, and was largely reduced to Kamikaze attacks.
Fighter technology advanced rapidly during the Second World War. Piston engines, which powered the vast majority of World War II fighters, grew more powerful: at the beginning of the war fighters typically had engines producing between 1,000 hp (750 kW) and 1,400 hp (1,000 kW), while by the end of the war many could produce over 2,000 hp (1,500 kW). For example, the Spitfire, one of the few fighters in continuous production throughout the war, was in 1939 powered by a 1,030 hp (770 kW) Merlin II, while variants produced in 1945 were equipped with the 2,035 hp (1,517 kW) Rolls-Royce Griffon 61. Nevertheless, these fighters could only achieve modest increases in top speed due to problems of compressibility created as aircraft and their propellers approached the sound barrier, and it was apparent that propeller-driven aircraft were approaching the limits of their performance. German jet and rocket-powered fighters entered combat in 1944, too late to impact the war's outcome. The same year the Allies' only operational jet fighter, the Gloster Meteor, also entered service.
World War II fighters also increasingly featured monocoque construction, which improved their aerodynamic efficiency while adding structural strength. Laminar flow wings, which improved high-speed performance, also came into use on fighters such as the P-51 Mustang, while the Messerschmitt Me 262 and the Messerschmitt Me 163 featured swept wings that dramatically reduced drag at high subsonic speeds.
Armament also advanced during the war. The rifle-caliber machine guns that were common on prewar fighters could not easily down the more rugged warplanes of the era. Air forces began to replace or supplement them with cannons, which fired explosive shells that could blast a hole in an enemy aircraft – rather than relying on kinetic energy from a solid bullet striking a critical component of the aircraft, such as a fuel line or control cable, or the pilot. Cannons could bring down even heavy bombers with just a few hits, but their slower rate of fire made it difficult to hit fast-moving fighters in a dogfight. Eventually, most fighters mounted cannons, sometimes in combination with machine guns. The British epitomized this shift. Their standard early-war fighters mounted eight .303 in (7.7 mm) caliber machine guns, but by mid-war they often featured a combination of machine guns and 20 mm (0.79 in) cannons, and late in the war often only cannons. The Americans, in contrast, had problems producing a cannon design, so instead placed multiple .50 in (12.7 mm) heavy machine guns on their fighters.
Fighters were also increasingly fitted with bomb racks and air-to-surface ordnance such as bombs or rockets beneath their wings, and pressed into close air support roles as fighter-bombers. Although they carried less ordnance than light and medium bombers, and generally had a shorter range, they were cheaper to produce and maintain, and their maneuverability made it easier for them to hit moving targets such as motorized vehicles. Moreover, if they encountered enemy fighters, their ordnance (which reduced lift and increased drag and therefore decreased performance) could be jettisoned and they could engage enemy fighters, which eliminated the need for the fighter escorts that bombers required.
Heavily armed fighters such as Germany's Focke-Wulf Fw 190, Britain's Hawker Typhoon and Hawker Tempest, and America's Curtiss P-40, F4U Corsair, P-47 Thunderbolt and P-38 Lightning all excelled as fighter-bombers, and since the Second World War ground attack has become an important secondary capability of many fighters.
World War II also saw the first use of airborne radar on fighters. The primary purpose of these radars was to help night fighters locate enemy bombers and fighters. Because of the bulkiness of these radar sets, they could not be carried on conventional single-engined fighters and instead were typically retrofitted to larger heavy fighters or light bombers such as Germany's Messerschmitt Bf 110 and Junkers Ju 88, Britain's de Havilland Mosquito and Bristol Beaufighter, and America's Douglas A-20, which then served as night fighters. The Northrop P-61 Black Widow, a purpose-built night fighter, was the only fighter of the war that incorporated radar into its original design. Britain and America cooperated closely in the development of airborne radar, and Germany's radar technology generally lagged slightly behind Anglo-American efforts, while other combatants developed few radar-equipped fighters.
Several prototype fighter programs begun early in 1945 continued on after the war and led to advanced piston-engine fighters that entered production and operational service in 1946. A typical example is the Lavochkin La-9 'Fritz', which was an evolution of the successful wartime Lavochkin La-7 'Fin'. Working through a series of prototypes, the La-120, La-126 and La-130, the Lavochkin design bureau sought to replace the La-7's wooden airframe with a metal one, fit a laminar-flow wing to improve maneuver performance, and increase armament. The La-9 entered service in August 1946 and was produced until 1948; it also served as the basis for the development of a long-range escort fighter, the La-11 'Fang', of which nearly 1,200 were produced between 1947 and 1951. Over the course of the Korean War, however, it became obvious that the day of the piston-engined fighter was coming to a close and that the future would lie with the jet fighter.
This period also witnessed experimentation with jet-assisted piston engine aircraft. La-9 derivatives included examples fitted with two underwing auxiliary pulsejet engines (the La-9RD) and a similarly mounted pair of auxiliary ramjet engines (the La-138); however, neither of these entered service. One that did enter service – with the U.S. Navy in March 1945 – was the Ryan FR-1 Fireball; production was halted with the war's end on VJ-Day, with only 66 having been delivered, and the type was withdrawn from service in 1947. The USAAF had ordered its first 13 mixed turboprop-turbojet-powered pre-production prototypes of the Consolidated Vultee XP-81 fighter, but this program was also canceled by VJ Day, with 80% of the engineering work completed.
The first rocket-powered aircraft was the Lippisch Ente, which made a successful maiden flight in March 1928. The only pure rocket aircraft ever mass-produced was the Messerschmitt Me 163B Komet in 1944, one of several German World War II projects aimed at developing high speed, point-defense aircraft. Later variants of the Me 262 (C-1a and C-2b) were also fitted with "mixed-power" jet/rocket powerplants, while earlier models were fitted with rocket boosters, but were not mass-produced with these modifications.
The USSR experimented with a rocket-powered interceptor in the years immediately following World War II, the Mikoyan-Gurevich I-270. Only two were built.
In the 1950s, the British developed mixed-power jet designs employing both rocket and jet engines to cover the performance gap that existed in turbojet designs. The rocket was the main engine for delivering the speed and height required for high-speed interception of high-level bombers and the turbojet gave increased fuel economy in other parts of flight, most notably to ensure the aircraft was able to make a powered landing rather than risking an unpredictable gliding return.
The Saunders-Roe SR.53 was a successful design, and was planned for production when economics forced the British to curtail most aircraft programs in the late 1950s. Furthermore, rapid advancements in jet engine technology rendered mixed-power aircraft designs like Saunders-Roe's SR.53 (and the following SR.177) obsolete. The American Republic XF-91 Thunderceptor – the first U.S. fighter to exceed Mach 1 in level flight – met a similar fate for the same reason, and no hybrid rocket-and-jet-engine fighter design has ever been placed into service.
The only operational implementation of mixed propulsion was rocket-assisted take-off (RATO), a system rarely used in fighters. One example was the zero-length launch scheme, in which aircraft took off from special launch platforms with RATO boosters; it was tested by both the United States and the Soviet Union and was made obsolete by advancements in surface-to-air missile technology.
It has become common in the aviation community to classify jet fighters by "generations" for historical purposes. No official definitions of these generations exist; rather, they represent the notion of stages in the development of fighter-design approaches, performance capabilities, and technological evolution. Different authors have packed jet fighters into different generations. For example, Richard P. Hallion of the Secretary of the Air Force's Action Group classified the F-16 as a sixth-generation jet fighter.
The timeframes associated with each generation remain inexact and are only indicative of the period during which their design philosophies and technology employment enjoyed a prevailing influence on fighter design and development. These timeframes also encompass the peak period of service entry for such aircraft.
The first generation of jet fighters comprised the initial, subsonic jet-fighter designs introduced late in World War II (1939–1945) and in the early post-war period. They differed little from their piston-engined counterparts in appearance, and many employed unswept wings. Guns and cannon remained the principal armament. The need to obtain a decisive advantage in maximum speed pushed the development of turbojet-powered aircraft forward. Top speeds for fighters rose steadily throughout World War II as more powerful piston engines were developed, and they approached transonic flight speeds, where the efficiency of propellers drops off, making further speed increases nearly impossible.
The first jets were developed during World War II and saw combat in the last two years of the war. Messerschmitt developed the first operational jet fighter, the Me 262A, primarily serving with the Luftwaffe's JG 7, the world's first jet-fighter wing. It was considerably faster than contemporary piston-driven aircraft, and in the hands of a competent pilot, proved quite difficult for Allied pilots to defeat. The Luftwaffe never deployed the design in numbers sufficient to stop the Allied air campaign, and a combination of fuel shortages, pilot losses, and technical difficulties with the engines kept the number of sorties low. Nevertheless, the Me 262 indicated the obsolescence of piston-driven aircraft. Spurred by reports of the German jets, Britain's Gloster Meteor entered production soon after, and the two entered service around the same time in 1944. Meteors commonly served to intercept the V-1 flying bomb, as they were faster than available piston-engined fighters at the low altitudes used by the flying bombs. Nearer the end of World War II, the Luftwaffe introduced the first jet-powered light-fighter design, the Heinkel He 162A Spatz (sparrow), intended as a simple jet fighter for German home defense, with a few examples seeing squadron service with JG 1 by April 1945. By the end of the war almost all work on piston-powered fighters had ended. A few designs combining piston- and jet-engines for propulsion – such as the Ryan FR Fireball – saw brief use, but by the end of the 1940s virtually all new fighters were jet-powered.
Despite their advantages, the early jet fighters were far from perfect. The operational lifespan of turbines was very short and engines were temperamental, while power could be adjusted only slowly and acceleration was poor (even if top speed was higher) compared to the final generation of piston fighters. Many squadrons of piston-engined fighters remained in service until the early to mid-1950s, even in the air forces of the major powers (though the types retained were the best of the World War II designs). Innovations including ejection seats, air brakes and all-moving tailplanes became widespread in this period.
The Americans began using jet fighters operationally after World War II, the wartime Bell P-59 having proven a failure. The Lockheed P-80 Shooting Star (soon re-designated F-80) was more prone to wave drag than the swept-wing Me 262, but had a cruise speed (660 km/h (410 mph)) as high as the maximum speed attainable by many piston-engined fighters. The British designed several new jets, including the distinctive single-engined twin boom de Havilland Vampire which Britain sold to the air forces of many nations.
The British transferred the technology of the Rolls-Royce Nene jet-engine to the Soviets, who soon put it to use in their advanced Mikoyan-Gurevich MiG-15 fighter, which used fully swept wings that allowed flying closer to the speed of sound than straight-winged designs such as the F-80. The MiG-15s' top speed of 1,075 km/h (668 mph) proved quite a shock to the American F-80 pilots who encountered them in the Korean War, along with their armament of two 23 mm (0.91 in) cannons and a single 37 mm (1.5 in) cannon. Nevertheless, in the first jet-versus-jet dogfight, which occurred during the Korean War on 8 November 1950, an F-80 shot down two North Korean MiG-15s.
The Americans responded by rushing their own swept-wing fighter – the North American F-86 Sabre – into battle against the MiGs, which had similar transonic performance. The two aircraft had different strengths and weaknesses, but were similar enough that victory could go either way. While the Sabres focused primarily on downing MiGs and scored favorably against those flown by the poorly trained North Koreans, the MiGs in turn decimated US bomber formations and forced the withdrawal of numerous American types from operational service.
The world's navies also transitioned to jets during this period, despite the need for catapult-launching of the new aircraft. The U.S. Navy adopted the Grumman F9F Panther as its primary jet fighter in the Korean War period, and it was one of the first jet fighters to employ an afterburner. The de Havilland Sea Vampire became the Royal Navy's first jet fighter. Radar was used on specialized night fighters such as the Douglas F3D Skyknight, which also downed MiGs over Korea, and was later fitted to the McDonnell F2H Banshee and the swept-wing Vought F7U Cutlass and McDonnell F3H Demon as all-weather/night fighters. Early versions of infra-red (IR) air-to-air missiles (AAMs) such as the AIM-9 Sidewinder, and radar-guided missiles such as the AIM-7 Sparrow, whose descendants remain in use as of 2021, were first introduced on the swept-wing subsonic Demon and Cutlass naval fighters.
Technological breakthroughs, lessons learned from the aerial battles of the Korean War, and a focus on conducting operations in a nuclear warfare environment shaped the development of second-generation fighters. Technological advances in aerodynamics, propulsion and aerospace building-materials (primarily aluminum alloys) permitted designers to experiment with aeronautical innovations such as swept wings, delta wings, and area-ruled fuselages. Widespread use of afterburning turbojet engines made these the first production aircraft to break the sound barrier, and the ability to sustain supersonic speeds in level flight became a common capability amongst fighters of this generation.
Fighter designs also took advantage of new electronics technologies that made effective radars small enough to carry aboard smaller aircraft. Onboard radars permitted detection of enemy aircraft beyond visual range, thereby improving the handoff of targets by longer-ranged ground-based warning- and tracking-radars. Similarly, advances in guided-missile development allowed air-to-air missiles to begin supplementing the gun as the primary offensive weapon for the first time in fighter history. During this period, passive-homing infrared-guided (IR) missiles became commonplace, but early IR missile sensors had poor sensitivity and a very narrow field of view (typically no more than 30°), which limited their effective use to only close-range, tail-chase engagements. Radar-guided (RF) missiles were introduced as well, but early examples proved unreliable. These semi-active radar homing (SARH) missiles could track and intercept an enemy aircraft "painted" by the launching aircraft's onboard radar. Medium- and long-range RF air-to-air missiles promised to open up a new dimension of "beyond-visual-range" (BVR) combat, and much effort concentrated on further development of this technology.
The prospect of a potential third world war featuring large mechanized armies and nuclear-weapon strikes led to a degree of specialization along two design approaches: interceptors, such as the English Electric Lightning and Mikoyan-Gurevich MiG-21F; and fighter-bombers, such as the Republic F-105 Thunderchief and the Sukhoi Su-7B. Dogfighting, per se, became de-emphasized in both cases. The interceptor was an outgrowth of the vision that guided missiles would completely replace guns and combat would take place at beyond-visual ranges. As a result, strategists designed interceptors with a large missile-payload and a powerful radar, sacrificing agility in favor of high speed, altitude ceiling and rate of climb. With a primary air-defense role, emphasis was placed on the ability to intercept strategic bombers flying at high altitudes. Specialized point-defense interceptors often had limited range and few, if any, ground-attack capabilities. Fighter-bombers could swing between air-superiority and ground-attack roles, and were often designed for a high-speed, low-altitude dash to deliver their ordnance. Television- and IR-guided air-to-surface missiles were introduced to augment traditional gravity bombs, and some were also equipped to deliver a nuclear bomb.
The third generation witnessed continued maturation of second-generation innovations, but it is most marked by renewed emphases on maneuverability and on traditional ground-attack capabilities. Over the course of the 1960s, increasing combat experience with guided missiles demonstrated that combat would devolve into close-in dogfights. Analog avionics began to appear, replacing older "steam-gauge" cockpit instrumentation. Enhancements to the aerodynamic performance of third-generation fighters included flight control surfaces such as canards, powered slats, and blown flaps. A number of technologies would be tried for vertical/short takeoff and landing, but thrust vectoring would be successful on the Harrier.
Growth in air-combat capability focused on the introduction of improved air-to-air missiles, radar systems, and other avionics. While guns remained standard equipment (early models of the F-4 being a notable exception), air-to-air missiles became the primary weapons for air-superiority fighters, which employed more sophisticated radars and medium-range RF AAMs to achieve greater "stand-off" ranges; however, kill probabilities proved unexpectedly low for RF missiles due to poor reliability and improved electronic countermeasures (ECM) for spoofing radar seekers. Infrared-homing AAMs saw their fields of view expand to 45°, which strengthened their tactical usability. Nevertheless, the low dogfight loss-exchange ratios experienced by American fighters in the skies over Vietnam led the U.S. Navy to establish its famous "TOPGUN" fighter-weapons school, which provided a graduate-level curriculum to train fleet fighter-pilots in advanced Air Combat Maneuvering (ACM) and Dissimilar Air Combat Training (DACT) tactics and techniques. This era also saw an expansion in ground-attack capabilities, principally in guided missiles, and witnessed the introduction of the first truly effective avionics for enhanced ground attack, including terrain-avoidance systems. Air-to-surface missiles (ASM) equipped with electro-optical (E-O) contrast seekers – such as the initial model of the widely used AGM-65 Maverick – became standard weapons, and laser-guided bombs (LGBs) became widespread in an effort to improve precision-attack capabilities. Guidance for such precision-guided munitions (PGM) was provided by externally mounted targeting pods, which were introduced in the mid-1960s.
The third generation also led to the development of new automatic-fire weapons, primarily externally powered chain guns and rotary cannon that use an electric motor to drive the mechanism of the weapon. This allowed a plane to carry a single multi-barrel weapon (such as the 20 mm (0.79 in) Vulcan), and provided greater accuracy and rates of fire. Powerplant reliability increased, and jet engines became "smokeless" to make it harder to sight aircraft at long distances.
Dedicated ground-attack aircraft (like the Grumman A-6 Intruder, SEPECAT Jaguar and LTV A-7 Corsair II) offered longer range, more sophisticated night-attack systems or lower cost than supersonic fighters. With variable-geometry wings, the supersonic F-111 introduced the Pratt & Whitney TF30, the first turbofan equipped with an afterburner. The ambitious project sought to create a versatile common fighter for many roles and services. It would serve well as an all-weather bomber, but lacked the performance to defeat other fighters. The McDonnell F-4 Phantom was designed to capitalize on radar and missile technology as an all-weather interceptor, but emerged as a versatile strike-bomber nimble enough to prevail in air combat, adopted by the U.S. Navy, Air Force and Marine Corps. Despite numerous shortcomings that would not be fully addressed until newer fighters appeared, the Phantom claimed 280 aerial kills (more than any other U.S. fighter) over Vietnam. With range and payload capabilities that rivaled those of World War II bombers such as the B-24 Liberator, the Phantom would become a highly successful multirole aircraft.
Fourth-generation fighters continued the trend towards multirole configurations, and were equipped with increasingly sophisticated avionics- and weapon-systems. Fighter designs were significantly influenced by the Energy-Maneuverability (E-M) theory developed by Colonel John Boyd and mathematician Thomas Christie, based upon Boyd's combat experience in the Korean War and as a fighter-tactics instructor during the 1960s. E-M theory emphasized the value of aircraft-specific energy maintenance as an advantage in fighter combat. Boyd perceived maneuverability as the primary means of getting "inside" an adversary's decision-making cycle, a process Boyd called the "OODA loop" (for "Observation-Orientation-Decision-Action"). This approach emphasized aircraft designs capable of performing "fast transients" – quick changes in speed, altitude, and direction – as opposed to relying chiefly on high speed alone.
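As a rough illustration of the energy bookkeeping behind E-M theory, specific energy adds an aircraft's altitude to its kinetic energy per unit weight, and specific excess power measures how quickly that total can be changed. The sketch below is a minimal, illustrative calculation with invented values, not data for any real fighter.

# Illustrative E-M bookkeeping: energy height E_s = h + V^2 / (2 g) and
# specific excess power P_s = (T - D) * V / W. All numbers are assumed examples.
g = 9.81  # gravitational acceleration, m/s^2

def specific_energy(altitude_m, speed_ms):
    # Potential plus kinetic energy per unit weight, expressed as "energy height" in metres.
    return altitude_m + speed_ms ** 2 / (2 * g)

def specific_excess_power(thrust_n, drag_n, speed_ms, weight_n):
    # Rate at which energy height can be gained, in metres per second.
    return (thrust_n - drag_n) * speed_ms / weight_n

# Hypothetical fighter at 5,000 m and 300 m/s with 100 kN thrust, 60 kN drag, 150 kN weight:
print(round(specific_energy(5_000, 300)))                     # ~9587 m of energy height
print(round(specific_excess_power(100e3, 60e3, 300, 150e3)))  # ~80 m/s available to climb or accelerate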
E-M characteristics were first applied to the McDonnell Douglas F-15 Eagle, but Boyd and his supporters believed these performance parameters called for a small, lightweight aircraft with a larger, higher-lift wing. The small size would minimize drag and increase the thrust-to-weight ratio, while the larger wing would minimize wing loading; while the reduced wing loading tends to lower top speed and can cut range, it increases payload capacity and the range reduction can be compensated for by increased fuel in the larger wing. The efforts of Boyd's "Fighter mafia" would result in the General Dynamics F-16 Fighting Falcon (now Lockheed Martin's).
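The wing-loading and thrust-to-weight trade-off described above reduces to two simple ratios. The figures below are invented for illustration and are not the F-16's actual specifications.

# Wing loading (weight / wing area) and thrust-to-weight ratio; all values are assumed.
weight_kg = 12_000       # hypothetical combat weight
wing_area_m2 = 28.0      # hypothetical wing reference area
thrust_kgf = 13_000      # hypothetical afterburning thrust expressed in kgf

wing_loading = weight_kg / wing_area_m2     # lower values generally improve turn performance
thrust_to_weight = thrust_kgf / weight_kg   # values above 1 allow acceleration in a vertical climb

print(f"wing loading: {wing_loading:.0f} kg/m^2, thrust-to-weight: {thrust_to_weight:.2f}")
# wing loading: 429 kg/m^2, thrust-to-weight: 1.08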
The F-16's maneuverability was further enhanced by its slight aerodynamic instability. This technique, called "relaxed static stability" (RSS), was made possible by introduction of the "fly-by-wire" (FBW) flight-control system (FLCS), which in turn was enabled by advances in computers and in system-integration techniques. Analog avionics, required to enable FBW operations, became a fundamental requirement, but began to be replaced by digital flight-control systems in the latter half of the 1980s. Likewise, Full Authority Digital Engine Controls (FADEC) to electronically manage powerplant performance was introduced with the Pratt & Whitney F100 turbofan. The F-16's sole reliance on electronics and wires to relay flight commands, instead of the usual cables and mechanical linkage controls, earned it the sobriquet of "the electric jet". Electronic FLCS and FADEC quickly became essential components of all subsequent fighter designs.
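The dependence of a relaxed-static-stability airframe on its flight computer can be sketched with a toy feedback loop. The dynamics and gain below are invented for illustration; a real FLCS uses far more elaborate control laws and redundant channels.

# Toy illustration of why an aerodynamically unstable airframe needs continuous electronic correction.
# The divergence rate and feedback gain are invented; this is not a real flight-control law.
dt = 0.01             # integration step, seconds
a_unstable = 2.0      # positive coefficient: disturbances grow on their own
gain = 6.0            # proportional feedback gain applied by the flight computer
pitch_rate = 0.1      # small initial disturbance, degrees per second

for _ in range(500):  # five simulated seconds
    elevator = -gain * pitch_rate                          # computer commands a correcting deflection
    pitch_rate += (a_unstable * pitch_rate + elevator) * dt

print(round(pitch_rate, 6))  # ~0.0: the disturbance is damped out; with gain = 0 it would diverge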
Other innovative technologies introduced in fourth-generation fighters included pulse-Doppler fire-control radars (providing a "look-down/shoot-down" capability), head-up displays (HUD), "hands on throttle-and-stick" (HOTAS) controls, and multi-function displays (MFD), all essential equipment as of 2019. Aircraft designers began to incorporate composite materials in the form of bonded-aluminum honeycomb structural elements and graphite epoxy laminate skins to reduce weight. Infrared search-and-track (IRST) sensors became widespread for air-to-ground weapons delivery, and appeared for air-to-air combat as well. "All-aspect" IR AAM became standard air superiority weapons, which permitted engagement of enemy aircraft from any angle (although the field of view remained relatively limited). The first long-range active-radar-homing RF AAM entered service with the AIM-54 Phoenix, which solely equipped the Grumman F-14 Tomcat, one of the few variable-sweep-wing fighter designs to enter production. Even with the tremendous advancement of air-to-air missiles in this era, internal guns were standard equipment.
Another revolution came in the form of a stronger reliance on ease of maintenance, which led to standardization of parts, reductions in the numbers of access panels and lubrication points, and overall parts reduction in more complicated equipment like the engines. Some early jet fighters required 50 man-hours of work by a ground crew for every hour the aircraft was in the air; later models substantially reduced this to allow faster turn-around times and more sorties in a day. Some modern military aircraft only require 10 man-hours of work per hour of flight time, and others are even more efficient.
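The practical effect of maintenance man-hours per flight hour on how often an aircraft can fly can be sketched with simple bookkeeping. The crew size, shift length, and sortie duration below are assumptions chosen only to make the comparison concrete.

# Rough sortie-rate comparison from maintenance man-hours per flight hour (MMH/FH).
# Crew size, shift length, and sortie length are illustrative assumptions.
def sorties_per_shift(mmh_per_fh, sortie_hours=1.5, crew=10, shift_hours=12):
    ground_time = mmh_per_fh * sortie_hours / crew       # elapsed hours of maintenance per sortie
    return int(shift_hours // (sortie_hours + ground_time))

print(sorties_per_shift(50))   # early jet fighter (~50 MMH/FH): 1 sortie per 12-hour shift
print(sorties_per_shift(10))   # later design (~10 MMH/FH): 4 sorties per 12-hour shift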
Aerodynamic innovations included variable-camber wings and exploitation of the vortex lift effect to achieve higher angles of attack through the addition of leading-edge extension devices such as strakes.
Unlike interceptors of the previous eras, most fourth-generation air-superiority fighters were designed to be agile dogfighters (although the Mikoyan MiG-31 and Panavia Tornado ADV are notable exceptions). The continually rising cost of fighters, however, continued to emphasize the value of multirole fighters. The need for both types of fighters led to the "high/low mix" concept, which envisioned a high-capability and high-cost core of dedicated air-superiority fighters (like the F-15 and Su-27) supplemented by a larger contingent of lower-cost multi-role fighters (such as the F-16 and MiG-29).
Most fourth-generation fighters, such as the McDonnell Douglas F/A-18 Hornet, HAL Tejas, JF-17 and Dassault Mirage 2000, are true multirole warplanes, designed as such from the start. This was facilitated by multimode avionics that could switch seamlessly between air and ground modes. The earlier approaches of adding on strike capabilities or designing separate models specialized for different roles generally became passé (with the Panavia Tornado being an exception in this regard). Attack roles were generally assigned to dedicated ground-attack aircraft such as the Sukhoi Su-25 and the A-10 Thunderbolt II.
A typical US Air Force fighter wing of the period might contain a mix of one air superiority squadron (F-15C), one strike fighter squadron (F-15E), and two multirole fighter squadrons (F-16C). Perhaps the most novel technology introduced for combat aircraft was stealth, which involves the use of special "low-observable" (L-O) materials and design techniques to reduce the susceptibility of an aircraft to detection by the enemy's sensor systems, particularly radars. The first stealth aircraft fielded were the Lockheed F-117 Nighthawk attack aircraft (introduced in 1983) and the Northrop Grumman B-2 Spirit bomber (first flown in 1989). Although no stealthy fighters per se appeared among the fourth generation, some radar-absorbent coatings and other L-O treatments developed for these programs are reported to have been subsequently applied to fourth-generation fighters.
The end of the Cold War in 1991 led many governments to significantly decrease military spending as a "peace dividend". Air force inventories were cut. Research and development programs working on "fifth-generation" fighters took serious hits. Many programs were canceled during the first half of the 1990s, and those that survived were "stretched out". While the practice of slowing the pace of development reduces annual investment expenses, it comes at the penalty of increased overall program and unit costs over the long term. In this instance, however, it also permitted designers to make use of the tremendous achievements being made in the fields of computers, avionics and other flight electronics, which had become possible largely due to the advances made in microchip and semiconductor technologies in the 1980s and 1990s. This opportunity enabled designers to develop fourth-generation designs – or redesigns – with significantly enhanced capabilities. These improved designs have become known as "Generation 4.5" fighters, recognizing their intermediate nature between the 4th and 5th generations, and their contribution in furthering development of individual fifth-generation technologies.
The primary characteristics of this sub-generation are the application of advanced digital avionics and aerospace materials, modest signature reduction (primarily RF "stealth"), and highly integrated systems and weapons. These fighters have been designed to operate in a "network-centric" battlefield environment and are principally multirole aircraft. Key weapons technologies introduced include beyond-visual-range (BVR) AAMs; Global Positioning System (GPS)–guided weapons; solid-state phased-array radars; helmet-mounted sights; and improved secure, jamming-resistant datalinks. Thrust vectoring to further improve transient maneuvering capabilities has also been adopted by many 4.5th generation fighters, and uprated powerplants have enabled some designs to achieve a degree of "supercruise" ability. Stealth characteristics are focused primarily on frontal-aspect radar cross section (RCS) signature-reduction techniques including radar-absorbent materials (RAM), L-O coatings and limited shaping techniques.
"Half-generation" designs are either based on existing airframes or are based on new airframes following similar design theory to previous iterations; however, these modifications have introduced the structural use of composite materials to reduce weight, greater fuel fractions to increase range, and signature reduction treatments to achieve lower RCS compared to their predecessors. Prime examples of such aircraft, which are based on new airframe designs making extensive use of carbon-fiber composites, include the Eurofighter Typhoon, Dassault Rafale, Saab JAS 39 Gripen, and HAL Tejas Mark 1A.
Apart from these fighter jets, most of the 4.5 generation aircraft are actually modified variants of existing airframes from the earlier fourth-generation fighter jets. Such fighter jets are generally heavier, and examples include the Boeing F/A-18E/F Super Hornet, which is an evolution of the F/A-18 Hornet, the F-15E Strike Eagle, which is a ground-attack/multi-role variant of the F-15 Eagle, the Su-30SM and Su-35S modified variants of the Sukhoi Su-27, and the MiG-35 upgraded version of the Mikoyan MiG-29. The Su-30SM/Su-35S and MiG-35 feature thrust-vectoring engine nozzles to enhance maneuvering. Upgraded versions of the F-16 are also considered members of the 4.5 generation.
Generation 4.5 fighters first entered service in the early 1990s, and most of them are still being produced and evolved. It is quite possible that they may continue in production alongside fifth-generation fighters due to the expense of developing the advanced level of stealth technology needed to achieve aircraft designs featuring very low observables (VLO), which is one of the defining features of fifth-generation fighters. Of the 4.5th generation designs, the Strike Eagle, Super Hornet, Typhoon, Gripen, and Rafale have been used in combat.
The U.S. government has defined 4.5 generation fighter aircraft as those that "(1) have advanced capabilities, including— (A) AESA radar; (B) high capacity data-link; and (C) enhanced avionics; and (2) have the ability to deploy current and reasonably foreseeable advanced armaments."
Currently the cutting edge of fighter design, fifth-generation fighters are characterized by being designed from the start to operate in a network-centric combat environment, and to feature extremely low, all-aspect, multi-spectral signatures employing advanced materials and shaping techniques. They have multifunction AESA radars with high-bandwidth, low-probability-of-intercept (LPI) data transmission capabilities. The infra-red search and track sensors incorporated for air-to-air combat as well as for air-to-ground weapons delivery in the 4.5th generation fighters are now fused in with other sensors for Situational Awareness IRST or SAIRST, which constantly tracks all targets of interest around the aircraft so the pilot need not guess where to look. These sensors, along with advanced avionics, glass cockpits, helmet-mounted sights (not currently on the F-22), and improved secure, jamming-resistant LPI datalinks are highly integrated to provide multi-platform, multi-sensor data fusion for vastly improved situational awareness while easing the pilot's workload. Avionics suites rely on extensive use of very high-speed integrated circuit (VHSIC) technology, common modules, and high-speed data buses. Overall, the integration of all these elements is claimed to provide fifth-generation fighters with a "first-look, first-shot, first-kill capability".
A key attribute of fifth-generation fighters is a small radar cross-section. Great care has been taken in designing the aircraft's layout and internal structure to minimize RCS over a broad bandwidth of detection and tracking radar frequencies; to maintain this VLO signature during combat operations, primary weapons are carried in internal weapon bays that are only briefly opened to permit weapon launch. Furthermore, stealth technology has advanced to the point where it can be employed without a tradeoff with aerodynamic performance, in contrast to previous stealth efforts. Some attention has also been paid to reducing IR signatures, especially on the F-22. Detailed information on these signature-reduction techniques is classified, but in general includes special shaping approaches, thermoset and thermoplastic materials, extensive structural use of advanced composites, conformal sensors, heat-resistant coatings, low-observable wire meshes to cover intake and cooling vents, heat-ablating tiles on the exhaust troughs (seen on the Northrop YF-23), and coating internal and external metal areas with radar-absorbent materials and paint (RAM/RAP).
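The tactical value of a small radar cross-section follows from the radar range equation, in which detection range scales with only the fourth root of RCS; large signature reductions are therefore needed for modest range reductions. The baseline range and RCS figures below are assumed purely for illustration.

# Detection range scales with the fourth root of radar cross-section (RCS).
# Baseline detection range and RCS values are illustrative assumptions.
def detection_range_km(baseline_range_km, baseline_rcs_m2, target_rcs_m2):
    return baseline_range_km * (target_rcs_m2 / baseline_rcs_m2) ** 0.25

# If a 5 m^2 conventional fighter is detectable at 200 km, a 0.0005 m^2 target is detectable at:
print(round(detection_range_km(200, 5.0, 0.0005), 1))   # 20.0 km, a tenfold reduction in detection range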
The AESA radar offers unique capabilities for fighters (and it is also quickly becoming essential for Generation 4.5 aircraft designs, as well as being retrofitted onto some fourth-generation aircraft). In addition to its high resistance to ECM and LPI features, it enables the fighter to function as a sort of "mini-AWACS", providing high-gain electronic support measures (ESM) and electronic warfare (EW) jamming functions. Other technologies common to this latest generation of fighters include integrated electronic warfare system (INEWS) technology; integrated communications, navigation, and identification (CNI) avionics technology; centralized "vehicle health monitoring" systems for ease of maintenance; fiber-optic data transmission; stealth technology; and even hovering capabilities. Maneuver performance remains important and is enhanced by thrust-vectoring, which also helps reduce takeoff and landing distances. Supercruise may or may not be featured; it permits flight at supersonic speeds without the use of the afterburner – a device that significantly increases IR signature when used in full military power.
Such aircraft are sophisticated and expensive. The fifth generation was ushered in by the Lockheed Martin/Boeing F-22 Raptor in late 2005. The U.S. Air Force originally planned to acquire 650 F-22s, but now only 187 will be built. As a result, its unit flyaway cost (FAC) is around US$150 million. To spread the development costs – and production base – more broadly, the Joint Strike Fighter (JSF) program enrolls eight other countries as cost- and risk-sharing partners. Altogether, the nine partner nations anticipate procuring over 3,000 Lockheed Martin F-35 Lightning II fighters at an anticipated average FAC of $80–85 million. The F-35, however, is designed to be a family of three aircraft, a conventional take-off and landing (CTOL) fighter, a short take-off and vertical landing (STOVL) fighter, and a Catapult Assisted Take Off But Arrested Recovery (CATOBAR) fighter, each of which has a different unit price and slightly varying specifications in terms of fuel capacity (and therefore range), size and payload.
Other countries have initiated fifth-generation fighter development projects. In December 2010, it was revealed that China was developing the fifth-generation Chengdu J-20, which took its maiden flight in January 2011. The Shenyang FC-31 took its maiden flight on 31 October 2012, and a carrier-based version is being developed for Chinese aircraft carriers. In Russia, the Sukhoi Su-57 became the first fifth-generation fighter to enter service with the Russian Aerospace Forces, in 2020, and has launched missiles in the Russo-Ukrainian War since 2022; United Aircraft Corporation has planned further designs, including the Mikoyan LMFS and the Sukhoi Su-75 Checkmate. Japan is exploring the technical feasibility of producing fifth-generation fighters. India is developing the Advanced Medium Combat Aircraft (AMCA), a medium-weight stealth fighter intended to enter serial production by the late 2030s. India had also initiated a joint fifth-generation heavy fighter with Russia, called the FGFA; as of May 2018, the project was believed not to have yielded the desired progress or results for India and has been put on hold or dropped altogether. Other countries considering fielding an indigenous or semi-indigenous advanced fifth-generation aircraft include South Korea, Sweden, Turkey and Pakistan.
As of November 2018, France, Germany, China, Japan, Russia, the United Kingdom and the United States have announced the development of a sixth-generation aircraft program.
France and Germany will develop a joint sixth-generation fighter to replace their current fleet of Dassault Rafales, Eurofighter Typhoons, and Panavia Tornados by 2035. The overall development will be led by a collaboration of Dassault and Airbus, while the engines will reportedly be jointly developed by Safran and MTU Aero Engines. Thales and MBDA are also seeking a stake in the project. Spain officially joined the Franco-German project to develop a Next-Generation Fighter (NGF) that will form part of a broader Future Combat Air Systems (FCAS) with the signing of a letter of intent (LOI) on February 14, 2019.
Currently at the concept stage, the first sixth-generation jet fighter is expected to enter service in the United States Navy in the 2025–30 period. The USAF seeks a new fighter for the 2030–50 period, named the "Next Generation Tactical Aircraft" ("Next Gen TACAIR"). The US Navy looks to replace its F/A-18E/F Super Hornets beginning in 2025 with the Next Generation Air Dominance air superiority fighter.
The United Kingdom's proposed stealth fighter is being developed by a European consortium called Team Tempest, consisting of BAE Systems, Rolls-Royce, Leonardo S.p.A. and MBDA. The aircraft is intended to enter service in 2035.
Fighters were typically armed with guns only for air-to-air combat up through the late 1950s, though unguided rockets, mostly for air-to-ground use and with limited air-to-air use, were deployed in WWII. From the late 1950s forward, guided missiles came into use for air-to-air combat. Throughout this history, fighters which by surprise or maneuver attain a good firing position have achieved the kill about one third to one half of the time, no matter what weapons were carried. The only major historic exception to this has been the low effectiveness shown by guided missiles in the first one to two decades of their existence. From WWI to the present, fighter aircraft have featured machine guns and automatic cannons as weapons, and they are still considered essential back-up weapons today. The power of air-to-air guns has increased greatly over time, and has kept them relevant in the guided-missile era. In WWI, two rifle-caliber (approximately 0.30 caliber) machine guns were the typical armament, producing a weight of fire of about 0.4 kg (0.88 lb) per second. In WWII, rifle-caliber machine guns also remained common, though usually in larger numbers or supplemented with much heavier 0.50 caliber machine guns or cannons. The standard WWII American fighter armament of six 0.50-cal (12.7 mm) machine guns fired a bullet weight of approximately 3.7 kg/sec (8.1 lb/sec), at a muzzle velocity of 856 m/s (2,810 ft/s). British and German aircraft tended to use a mix of machine guns and autocannon, the latter firing explosive projectiles. Later British fighters were exclusively cannon-armed, while the US was not able to produce a reliable cannon in large numbers, and most of its fighters remained equipped only with heavy machine guns despite the US Navy pressing for a change to 20 mm.
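The weight-of-fire figures above follow directly from rate of fire and projectile mass. The per-gun rate of fire and bullet mass in the sketch below are commonly cited approximations supplied for illustration, not values taken from the text.

# Weight of fire = number of guns x rounds per second x projectile mass.
# The 800 rounds/min rate and 46 g bullet mass are approximate illustrative values for a 0.50-cal gun.
def weight_of_fire_kg_s(num_guns, rounds_per_minute, projectile_kg):
    return num_guns * (rounds_per_minute / 60) * projectile_kg

print(round(weight_of_fire_kg_s(6, 800, 0.046), 2))   # ~3.68 kg/s, close to the ~3.7 kg/s quoted above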
Post-war, 20–30 mm revolver cannon and rotary cannon were introduced. The modern M61 Vulcan 20 mm rotary cannon that is standard on current American fighters fires a projectile weight of about 10 kg/s (22 lb/s), nearly three times that of six 0.50-cal machine guns, with a higher muzzle velocity of 1,052 m/s (3,450 ft/s) supporting a flatter trajectory, and with exploding projectiles. Modern fighter gun systems also feature ranging radar and lead-computing electronic gun sights to ease the problem of aiming, compensating for projectile drop and time of flight (target lead) in the complex three-dimensional maneuvering of air-to-air combat. However, getting into position to use the guns is still a challenge. The range of guns is longer than in the past but still quite limited compared to missiles, with modern gun systems having a maximum effective range of approximately 1,000 meters. A high probability of kill also requires firing to usually occur from the rear hemisphere of the target. Despite these limits, when pilots are well trained in air-to-air gunnery and these conditions are satisfied, gun systems are tactically effective and highly cost-efficient. The cost of a gun firing pass is far less than firing a missile, and the projectiles are not subject to the thermal and electronic countermeasures that can sometimes defeat missiles. When the enemy can be approached to within gun range, the lethality of guns is approximately a 25% to 50% chance of "kill per firing pass".
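The "target lead" problem that lead-computing gun sights solve can be shown with a deliberately simplified constant-velocity model; a real fire-control system also accounts for ballistic drop, projectile deceleration, and the shooter's own motion. All numbers below are illustrative.

import math

# Simplified lead calculation: real gunsights also model ballistic drop, drag decay and own-ship motion.
def lead_angle_deg(target_range_m, target_cross_speed_ms, muzzle_velocity_ms):
    time_of_flight = target_range_m / muzzle_velocity_ms    # seconds for the rounds to arrive
    lead_distance = target_cross_speed_ms * time_of_flight  # how far the target moves in that time
    return math.degrees(math.atan2(lead_distance, target_range_m))

# Target crossing at 250 m/s, 800 m away, engaged with a 1,052 m/s muzzle velocity:
print(round(lead_angle_deg(800, 250, 1052), 1))   # ~13.4 degrees of lead required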
The range limitations of guns, and the desire to overcome large variations in fighter pilot skill and thus achieve higher force effectiveness, led to the development of the guided air-to-air missile. There are two main variations, heat-seeking (infrared homing), and radar guided. Radar missiles are typically several times heavier and more expensive than heat-seekers, but with longer range, greater destructive power, and ability to track through clouds.
The highly successful AIM-9 Sidewinder heat-seeking (infrared homing) short-range missile was developed by the United States Navy in the 1950s. These small missiles are easily carried by lighter fighters, and provide effective ranges of approximately 10 to 35 km (~6 to 22 miles). Beginning with the AIM-9L in 1977, subsequent versions of Sidewinder have added all-aspect capability: the ability to track the weaker heat generated by air friction on the target aircraft's skin, allowing engagement from the front and sides. The latest (2003 service entry) AIM-9X also features "off-boresight" and "lock-on after launch" capabilities, which allow the pilot to make a quick launch of a missile to track a target anywhere within the pilot's vision. The AIM-9X development cost was U.S. $3 billion in mid-to-late 1990s dollars, and its 2015 per-unit procurement cost was $0.6 million. The missile weighs 85.3 kg (188 lb), and has a maximum range of 35 km (22 miles) at higher altitudes. Like most air-to-air missiles, range at lower altitude can be as limited as about one third of the maximum due to higher drag and less ability to coast downward.
The effectiveness of infrared homing missiles was only 7% early in the Vietnam War, but improved to approximately 15%–40% over the course of the war. The AIM-4 Falcon used by the USAF had kill rates of approximately 7% and was considered a failure. The AIM-9B Sidewinder introduced later achieved 15% kill rates, and the further improved AIM-9D and J models reached 19%. The AIM-9G used in the last year of the Vietnam air war achieved 40%. Israel relied almost entirely on guns in the 1967 Six-Day War, achieving 60 kills against 10 losses. However, Israel made much more use of steadily improving heat-seeking missiles in the 1973 Yom Kippur War. In this extensive conflict Israel scored 171 of 261 total kills with heat-seeking missiles (65.5%), 5 kills with radar-guided missiles (1.9%), and 85 kills with guns (32.6%). The AIM-9L Sidewinder scored 19 kills out of 26 fired missiles (73%) in the 1982 Falklands War. However, in a conflict against opponents using thermal countermeasures, the United States scored only 11 kills out of 48 fired (Pk = 23%) with the follow-on AIM-9M in the 1991 Gulf War.
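The effectiveness percentages quoted in this section are simple ratios, either of kills to missiles fired or of kills by one weapon type to total kills; the check below merely reproduces that arithmetic.

# Reproducing the effectiveness ratios quoted above.
def percent(part, whole):
    return round(100 * part / whole, 1)

print(percent(19, 26))     # AIM-9L in the 1982 Falklands War: ~73.1%
print(percent(11, 48))     # AIM-9M in the 1991 Gulf War: ~22.9%
print(percent(171, 261))   # share of Israel's 1973 kills scored by heat-seekers: ~65.5%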
Radar-guided missiles fall into two main guidance types. In the historically more common semi-active radar homing case, the missile homes in on radar signals transmitted from the launching aircraft and reflected from the target. This has the disadvantage that the firing aircraft must maintain radar lock on the target and is thus less free to maneuver and more vulnerable to attack. A widely deployed missile of this type was the AIM-7 Sparrow, which entered service in 1954 and was produced in improving versions until 1997. In more advanced active radar homing, the missile is guided to the vicinity of the target by internal data on its projected position, and then "goes active" with an internally carried small radar system to conduct terminal guidance to the target. This eliminates the requirement for the firing aircraft to maintain radar lock, and thus greatly reduces risk. A prominent example is the AIM-120 AMRAAM, which was first fielded in 1991 as the AIM-7 replacement, and which has no firm retirement date as of 2016. The current AIM-120D version has a maximum high-altitude range of greater than 160 km (>99 miles), and costs approximately $2.4 million each (2016). As is typical with most other missiles, range at lower altitude may be as little as one third that of high altitude.
In the Vietnam air war radar missile kill reliability was approximately 10% at shorter ranges, and even worse at longer ranges due to reduced radar return and greater time for the target aircraft to detect the incoming missile and take evasive action. At one point in the Vietnam war, the U.S. Navy fired 50 AIM-7 Sparrow radar guided missiles in a row without a hit. Between 1958 and 1982 in five wars there were 2,014 combined heat-seeking and radar guided missile firings by fighter pilots engaged in air-to-air combat, achieving 528 kills, of which 76 were radar missile kills, for a combined effectiveness of 26%. However, only four of the 76 radar missile kills were in the beyond-visual-range mode intended to be the strength of radar guided missiles. The United States invested over $10 billion in air-to-air radar missile technology from the 1950s to the early 1970s. Amortized over actual kills achieved by the U.S. and its allies, each radar guided missile kill thus cost over $130 million. The defeated enemy aircraft were for the most part older MiG-17s, −19s, and −21s, with new cost of $0.3 million to $3 million each. Thus, the radar missile investment over that period far exceeded the value of enemy aircraft destroyed, and furthermore had very little of the intended BVR effectiveness.
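The cost-per-kill figure above is a straightforward amortization of the stated development investment over the radar-missile kills achieved in those conflicts:

# Amortizing the ~$10 billion radar-missile investment over the 76 radar-guided missile kills cited above.
investment_usd = 10e9
radar_missile_kills = 76
print(f"${investment_usd / radar_missile_kills / 1e6:.0f} million per kill")   # ~$132 million per kill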
However, continuing heavy development investment and rapidly advancing electronic technology led to significant improvement in radar missile reliabilities from the late 1970s onward. Radar guided missiles achieved 75% Pk (9 kills out of 12 shots) in operations in the Gulf War in 1991. The percentage of kills achieved by radar guided missiles also surpassed 50% of total kills for the first time by 1991. Since 1991, 20 of 61 kills worldwide have been beyond-visual-range using radar missiles. Discounting an accidental friendly fire kill, in operational use the AIM-120D (the current main American radar guided missile) has achieved 9 kills out of 16 shots for a 56% Pk. Six of these kills were BVR, out of 13 shots, for a 46% BVR Pk. Though all these kills were against less capable opponents who were not equipped with operating radar, electronic countermeasures, or a comparable weapon themselves, the BVR Pk was a significant improvement from earlier eras. However, a current concern is electronic countermeasures to radar missiles, which are thought to be reducing the effectiveness of the AIM-120D. Some experts believe that as of 2016 the European Meteor missile, the Russian R-37M, and the Chinese PL-15 are more resistant to countermeasures and more effective than the AIM-120D.
Now that higher reliabilities have been achieved, both types of missiles allow the fighter pilot to often avoid the risk of the short-range dogfight, where only the more experienced and skilled fighter pilots tend to prevail, and where even the finest fighter pilot can simply get unlucky. Taking maximum advantage of complicated missile parameters in both attack and defense against competent opponents does take considerable experience and skill, but against surprised opponents lacking comparable capability and countermeasures, air-to-air missile warfare is relatively simple. By partially automating air-to-air combat and reducing reliance on gun kills mostly achieved by only a small expert fraction of fighter pilots, air-to-air missiles now serve as highly effective force multipliers. | [
{
"paragraph_id": 0,
"text": "Fighter aircraft (early on also pursuit aircraft) are military aircraft designed primarily for air-to-air combat. In military conflict, the role of fighter aircraft is to establish air superiority of the battlespace. Domination of the airspace above a battlefield permits bombers and attack aircraft to engage in tactical and strategic bombing of enemy targets.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The key performance features of a fighter include not only its firepower but also its high speed and maneuverability relative to the target aircraft. The success or failure of a combatant's efforts to gain air superiority hinges on several factors including the skill of its pilots, the tactical soundness of its doctrine for deploying its fighters, and the numbers and performance of those fighters.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Many modern fighter aircraft also have secondary capabilities such as ground attack and some types, such as fighter-bombers, are designed from the outset for dual roles. Other fighter designs are highly specialized while still filling the main air superiority role, and these include the interceptor, heavy fighter, and night fighter.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Since World War I, achieving and maintaining air superiority has been considered essential for victory in conventional warfare.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Fighters continued to be developed throughout World War I, to deny enemy aircraft and dirigibles the ability to gather information by reconnaissance over the battlefield. Early fighters were very small and lightly armed by later standards, and most were biplanes built with a wooden frame covered with fabric, and a maximum airspeed of about 100 mph (160 km/h). As control of the airspace over armies became increasingly important, all of the major powers developed fighters to support their military operations. Between the wars, wood was largely replaced in part or whole by metal tubing, and finally aluminum stressed skin structures (monocoque) began to predominate.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "By World War II, most fighters were all-metal monoplanes armed with batteries of machine guns or cannons and some were capable of speeds approaching 400 mph (640 km/h). Most fighters up to this point had one engine, but a number of twin-engine fighters were built; however they were found to be outmatched against single-engine fighters and were relegated to other tasks, such as night fighters equipped with primitive radar sets.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "By the end of the war, turbojet engines were replacing piston engines as the means of propulsion, further increasing aircraft speed. Since the weight of the turbojet engine was far less than a piston engine, having two engines was no longer a handicap and one or two were used, depending on requirements. This in turn required the development of ejection seats so the pilot could escape, and G-suits to counter the much greater forces being applied to the pilot during maneuvers.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the 1950s, radar was fitted to day fighters, since due to ever increasing air-to-air weapon ranges, pilots could no longer see far enough ahead to prepare for the opposition. Subsequently, radar capabilities grew enormously and are now the primary method of target acquisition. Wings were made thinner and swept back to reduce transonic drag, which required new manufacturing methods to obtain sufficient strength. Skins were no longer sheet metal riveted to a structure, but milled from large slabs of alloy. The sound barrier was broken, and after a few false starts due to required changes in controls, speeds quickly reached Mach 2, past which aircraft cannot maneuver sufficiently to avoid attack.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Air-to-air missiles largely replaced guns and rockets in the early 1960s since both were believed unusable at the speeds being attained, however the Vietnam War showed that guns still had a role to play, and most fighters built since then are fitted with cannon (typically between 20 and 30 mm (0.79 and 1.18 in) in caliber) in addition to missiles. Most modern combat aircraft can carry at least a pair of air-to-air missiles.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In the 1970s, turbofans replaced turbojets, improving fuel economy enough that the last piston engine support aircraft could be replaced with jets, making multi-role combat aircraft possible. Honeycomb structures began to replace milled structures, and the first composite components began to appear on components subjected to little stress.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "With the steady improvements in computers, defensive systems have become increasingly efficient. To counter this, stealth technologies have been pursued by the United States, Russia, India and China. The first step was to find ways to reduce the aircraft's reflectivity to radar waves by burying the engines, eliminating sharp corners and diverting any reflections away from the radar sets of opposing forces. Various materials were found to absorb the energy from radar waves, and were incorporated into special finishes that have since found widespread application. Composite structures have become widespread, including major structural components, and have helped to counterbalance the steady increases in aircraft weight—most modern fighters are larger and heavier than World War II medium bombers.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Because of the importance of air superiority, since the early days of aerial combat armed forces have constantly competed to develop technologically superior fighters and to deploy these fighters in greater numbers, and fielding a viable fighter fleet consumes a substantial proportion of the defense budgets of modern armed forces.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The global combat aircraft market was worth $45.75 billion in 2017 and is projected by Frost & Sullivan at $47.2 billion in 2026: 35% modernization programs and 65% aircraft purchases, dominated by the Lockheed Martin F-35 with 3,000 deliveries over 20 years.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "A fighter aircraft is primarily designed for air-to-air combat. A given type may be designed for specific combat conditions, and in some cases for additional roles such as air-to-ground fighting. Historically the British Royal Flying Corps and Royal Air Force referred to them as \"scouts\" until the early 1920s, while the U.S. Army called them \"pursuit\" aircraft until the late 1940s. The UK changed to calling them fighters in the 1920s, while the US Army did so in the 1940s. A short-range fighter designed to defend against incoming enemy aircraft is known as an interceptor.",
"title": "Classification"
},
{
"paragraph_id": 14,
"text": "Recognized classes of fighter include:",
"title": "Classification"
},
{
"paragraph_id": 15,
"text": "Of these, the Fighter-bomber, reconnaissance fighter and strike fighter classes are dual-role, possessing qualities of the fighter alongside some other battlefield role. Some fighter designs may be developed in variants performing other roles entirely, such as ground attack or unarmed reconnaissance. This may be for political or national security reasons, for advertising purposes, or other reasons.",
"title": "Classification"
},
{
"paragraph_id": 16,
"text": "The Sopwith Camel and other \"fighting scouts\" of World War I performed a great deal of ground-attack work. In World War II, the USAAF and RAF often favored fighters over dedicated light bombers or dive bombers, and types such as the Republic P-47 Thunderbolt and Hawker Hurricane that were no longer competitive as aerial combat fighters were relegated to ground attack. Several aircraft, such as the F-111 and F-117, have received fighter designations though they had no fighter capability due to political or other reasons. The F-111B variant was originally intended for a fighter role with the U.S. Navy, but it was canceled. This blurring follows the use of fighters from their earliest days for \"attack\" or \"strike\" operations against ground targets by means of strafing or dropping small bombs and incendiaries. Versatile multi role fighter-bombers such as the McDonnell Douglas F/A-18 Hornet are a less expensive option than having a range of specialized aircraft types.",
"title": "Classification"
},
{
"paragraph_id": 17,
"text": "Some of the most expensive fighters such as the US Grumman F-14 Tomcat, McDonnell Douglas F-15 Eagle, Lockheed Martin F-22 Raptor and Russian Sukhoi Su-27 were employed as all-weather interceptors as well as air superiority fighter aircraft, while commonly developing air-to-ground roles late in their careers. An interceptor is generally an aircraft intended to target (or intercept) bombers and so often trades maneuverability for climb rate.",
"title": "Classification"
},
{
"paragraph_id": 18,
"text": "As a part of military nomenclature, a letter is often assigned to various types of aircraft to indicate their use, along with a number to indicate the specific aircraft. The letters used to designate a fighter differ in various countries. In the English-speaking world, \"F\" is often now used to indicate a fighter (e.g. Lockheed Martin F-35 Lightning II or Supermarine Spitfire F.22), though \"P\" used to be used in the US for pursuit (e.g. Curtiss P-40 Warhawk), a translation of the French \"C\" (Dewoitine D.520 C.1) for Chasseur while in Russia \"I\" was used for Istrebitel, or exterminator (Polikarpov I-16).",
"title": "Classification"
},
{
"paragraph_id": 19,
"text": "As fighter types have proliferated, the air superiority fighter emerged as a specific role at the pinnacle of speed, maneuverability, and air-to-air weapon systems – able to hold its own against all other fighters and establish its dominance in the skies above the battlefield.",
"title": "Classification"
},
{
"paragraph_id": 20,
"text": "The interceptor is a fighter designed specifically to intercept and engage approaching enemy aircraft. There are two general classes of interceptor: relatively lightweight aircraft in the point-defence role, built for fast reaction, high performance and with a short range, and heavier aircraft with more comprehensive avionics and designed to fly at night or in all weathers and to operate over longer ranges. Originating during World War I, by 1929 this class of fighters had become known as the interceptor.",
"title": "Classification"
},
{
"paragraph_id": 21,
"text": "The equipment necessary for daytime flight is inadequate when flying at night or in poor visibility. The night fighter was developed during World War I with additional equipment to aid the pilot in flying straight, navigating and finding the target. From modified variants of the Royal Aircraft Factory B.E.2c in 1915, the night fighter has evolved into the highly capable all-weather fighter.",
"title": "Classification"
},
{
"paragraph_id": 22,
"text": "The strategic fighter is a fast, heavily armed and long-range type, able to act as an escort fighter protecting bombers, to carry out offensive sorties of its own as a penetration fighter and maintain standing patrols at significant distance from its home base.",
"title": "Classification"
},
{
"paragraph_id": 23,
"text": "Bombers are vulnerable due to their low speed, large size and poor maneuvrability. The escort fighter was developed during World War II to come between the bombers and enemy attackers as a protective shield. The primary requirement was for long range, with several heavy fighters given the role. However they too proved unwieldy and vulnerable, so as the war progressed techniques such as drop tanks were developed to extend the range of more nimble conventional fighters.",
"title": "Classification"
},
{
"paragraph_id": 24,
"text": "The penetration fighter is typically also fitted for the ground-attack role, and so is able to defend itself while conducting attack sorties.",
"title": "Classification"
},
{
"paragraph_id": 25,
"text": "The word \"fighter\" was first used to describe a two-seat aircraft carrying a machine gun (mounted on a pedestal) and its operator as well as the pilot. Although the term was coined in the United Kingdom, the first examples were the French Voisin pushers beginning in 1910, and a Voisin III would be the first to shoot down another aircraft, on 5 October 1914.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 26,
"text": "However at the outbreak of World War I, front-line aircraft were mostly unarmed and used almost exclusively for reconnaissance. On 15 August 1914, Miodrag Tomić encountered an enemy airplane while on a reconnaissance flight over Austria-Hungary which fired at his aircraft with a revolver, so Tomić fired back. It was believed to be the first exchange of fire between aircraft. Within weeks, all Serbian and Austro-Hungarian aircraft were armed.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 27,
"text": "Another type of military aircraft formed the basis for an effective \"fighter\" in the modern sense of the word. It was based on small fast aircraft developed before the war for air racing such with the Gordon Bennett Cup and Schneider Trophy. The military scout airplane was not expected to carry serious armament, but rather to rely on speed to \"scout\" a location, and return quickly to report, making it a flying horse. British scout aircraft, in this sense, included the Sopwith Tabloid and Bristol Scout. The French and the Germans didn't have an equivalent as they used two seaters for reconnaissance, such as the Morane-Saulnier L, but would later modify pre-war racing aircraft into armed single seaters. It was quickly found that these were of little use since the pilot couldn't record what he saw while also flying, while military leaders usually ignored what the pilots reported.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 28,
"text": "Attempts were made with handheld weapons such as pistols and rifles and even light machine guns, but these were ineffective and cumbersome. The next advance came with the fixed forward-firing machine gun, so that the pilot pointed the entire aircraft at the target and fired the gun, instead of relying on a second gunner. Roland Garros bolted metal deflector plates to the propeller so that it would not shoot itself out of the sky and a number of Morane-Saulnier Ns were modified. The technique proved effective, however the deflected bullets were still highly dangerous.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 29,
"text": "Soon after the commencement of the war, pilots armed themselves with pistols, carbines, grenades, and an assortment of improvised weapons. Many of these proved ineffective as the pilot had to fly his airplane while attempting to aim a handheld weapon and make a difficult deflection shot. The first step in finding a real solution was to mount the weapon on the aircraft, but the propeller remained a problem since the best direction to shoot is straight ahead. Numerous solutions were tried. A second crew member behind the pilot could aim and fire a swivel-mounted machine gun at enemy airplanes; however, this limited the area of coverage chiefly to the rear hemisphere, and effective coordination of the pilot's maneuvering with the gunner's aiming was difficult. This option was chiefly employed as a defensive measure on two-seater reconnaissance aircraft from 1915 on. Both the SPAD S.A and the Royal Aircraft Factory B.E.9 added a second crewman ahead of the engine in a pod but this was both hazardous to the second crewman and limited performance. The Sopwith L.R.T.Tr. similarly added a pod on the top wing with no better luck.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 30,
"text": "An alternative was to build a \"pusher\" scout such as the Airco DH.2, with the propeller mounted behind the pilot. The main drawback was that the high drag of a pusher type's tail structure made it slower than a similar \"tractor\" aircraft. A better solution for a single seat scout was to mount the machine gun (rifles and pistols having been dispensed with) to fire forwards but outside the propeller arc. Wing guns were tried but the unreliable weapons available required frequent clearing of jammed rounds and misfires and remained impractical until after the war. Mounting the machine gun over the top wing worked well and was used long after the ideal solution was found. The Nieuport 11 of 1916 used this system with considerable success, however, this placement made aiming and reloading difficult but would continue to be used throughout the war as the weapons used were lighter and had a higher rate of fire than synchronized weapons. The British Foster mounting and several French mountings were specifically designed for this kind of application, fitted with either the Hotchkiss or Lewis Machine gun, which due to their design were unsuitable for synchronizing. The need to arm a tractor scout with a forward-firing gun whose bullets passed through the propeller arc was evident even before the outbreak of war and inventors in both France and Germany devised mechanisms that could time the firing of the individual rounds to avoid hitting the propeller blades. Franz Schneider, a Swiss engineer, had patented such a device in Germany in 1913, but his original work was not followed up. French aircraft designer Raymond Saulnier patented a practical device in April 1914, but trials were unsuccessful because of the propensity of the machine gun employed to hang fire due to unreliable ammunition. In December 1914, French aviator Roland Garros asked Saulnier to install his synchronization gear on Garros' Morane-Saulnier Type L parasol monoplane. Unfortunately the gas-operated Hotchkiss machine gun he was provided had an erratic rate of fire and it was impossible to synchronize it with the propeller. As an interim measure, the propeller blades were fitted with metal wedges to protect them from ricochets. Garros' modified monoplane first flew in March 1915 and he began combat operations soon after. Garros scored three victories in three weeks before he himself was downed on 18 April and his airplane, along with its synchronization gear and propeller was captured by the Germans. Meanwhile, the synchronization gear (called the Stangensteuerung in German, for \"pushrod control system\") devised by the engineers of Anthony Fokker's firm was the first system to enter service. It would usher in what the British called the \"Fokker scourge\" and a period of air superiority for the German forces, making the Fokker Eindecker monoplane a feared name over the Western Front, despite its being an adaptation of an obsolete pre-war French Morane-Saulnier racing airplane, with poor flight characteristics and a by now mediocre performance. The first Eindecker victory came on 1 July 1915, when Leutnant Kurt Wintgens, of Feldflieger Abteilung 6 on the Western Front, downed a Morane-Saulnier Type L. His was one of five Fokker M.5K/MG prototypes for the Eindecker, and was armed with a synchronized aviation version of the Parabellum MG14 machine gun. The success of the Eindecker kicked off a competitive cycle of improvement among the combatants, both sides striving to build ever more capable single-seat fighters. 
The Albatros D.I and Sopwith Pup of 1916 set the classic pattern followed by fighters for about twenty years. Most were biplanes and only rarely monoplanes or triplanes. The strong box structure of the biplane provided a rigid wing that allowed the accurate control essential for dogfighting. They had a single operator, who flew the aircraft and also controlled its armament. They were armed with one or two Maxim or Vickers machine guns, which were easier to synchronize than other types, firing through the propeller arc. Gun breeches were in front of the pilot, with obvious implications in case of accidents, but jams could be cleared in flight, while aiming was simplified.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 31,
"text": "The use of metal aircraft structures was pioneered before World War I by Breguet but would find its biggest proponent in Anthony Fokker, who used chrome-molybdenum steel tubing for the fuselage structure of all his fighter designs, while the innovative German engineer Hugo Junkers developed two all-metal, single-seat fighter monoplane designs with cantilever wings: the strictly experimental Junkers J 2 private-venture aircraft, made with steel, and some forty examples of the Junkers D.I, made with corrugated duralumin, all based on his experience in creating the pioneering Junkers J 1 all-metal airframe technology demonstration aircraft of late 1915. While Fokker would pursue steel tube fuselages with wooden wings until the late 1930s, and Junkers would focus on corrugated sheet metal, Dornier was the first to build a fighter (the Dornier-Zeppelin D.I) made with pre-stressed sheet aluminum and having cantilevered wings, a form that would replace all others in the 1930s. As collective combat experience grew, the more successful pilots such as Oswald Boelcke, Max Immelmann, and Edward Mannock developed innovative tactical formations and maneuvers to enhance their air units' combat effectiveness.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 32,
"text": "Allied and – before 1918 – German pilots of World War I were not equipped with parachutes, so in-flight fires or structural failures were often fatal. Parachutes were well-developed by 1918 having previously been used by balloonists, and were adopted by the German flying services during the course of that year. The well known and feared Manfred von Richthofen, the \"Red Baron\", was wearing one when he was killed, but the allied command continued to oppose their use on various grounds.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 33,
"text": "In April 1917, during a brief period of German aerial supremacy a British pilot's average life expectancy was calculated to average 93 flying hours, or about three weeks of active service. More than 50,000 airmen from both sides died during the war.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 34,
"text": "Fighter development stagnated between the wars, especially in the United States and the United Kingdom, where budgets were small. In France, Italy and Russia, where large budgets continued to allow major development, both monoplanes and all metal structures were common. By the end of the 1920s, however, those countries overspent themselves and were overtaken in the 1930s by those powers that hadn't been spending heavily, namely the British, the Americans and the Germans.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 35,
"text": "Given limited budgets, air forces were conservative in aircraft design, and biplanes remained popular with pilots for their agility, and remained in service long after they ceased to be competitive. Designs such as the Gloster Gladiator, Fiat CR.42 Falco, and Polikarpov I-15 were common even in the late 1930s, and many were still in service as late as 1942. Up until the mid-1930s, the majority of fighters in the US, the UK, Italy and Russia remained fabric-covered biplanes.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 36,
"text": "Fighter armament eventually began to be mounted inside the wings, outside the arc of the propeller, though most designs retained two synchronized machine guns directly ahead of the pilot, where they were more accurate (that being the strongest part of the structure, reducing the vibration to which the guns were subjected). Shooting with this traditional arrangement was also easier because the guns shot directly ahead in the direction of the aircraft's flight, up to the limit of the guns range; unlike wing-mounted guns which to be effective required to be harmonised, that is, preset to shoot at an angle by ground crews so that their bullets would converge on a target area a set distance ahead of the fighter. Rifle-caliber .30 and .303 in (7.62 and 7.70 mm) calibre guns remained the norm, with larger weapons either being too heavy and cumbersome or deemed unnecessary against such lightly built aircraft. It was not considered unreasonable to use World War I-style armament to counter enemy fighters as there was insufficient air-to-air combat during most of the period to disprove this notion.",
"title": "Piston engine fighters"
},
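The paragraph above describes gun harmonisation: wing guns were preset at a slight inward angle so their fire converged a fixed distance ahead of the fighter. As a minimal illustrative sketch (not from the source), the toe-in angle follows from simple trigonometry; the gun offset and convergence range below are assumed example values only.

```python
import math

def harmonisation_angle(lateral_offset_m: float, convergence_range_m: float) -> float:
    """Toe-in angle (degrees) a wing-mounted gun must be set at so that its
    fire crosses the aircraft's centreline at the chosen convergence range."""
    return math.degrees(math.atan(lateral_offset_m / convergence_range_m))

# Illustrative numbers only: a gun mounted 2 m out on the wing,
# harmonised to converge 250 m ahead of the fighter.
print(f"{harmonisation_angle(2.0, 250.0):.2f} degrees")  # ~0.46 degrees
```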
{
"paragraph_id": 37,
"text": "The rotary engine, popular during World War I, quickly disappeared, its development having reached the point where rotational forces prevented more fuel and air from being delivered to the cylinders, which limited horsepower. They were replaced chiefly by the stationary radial engine though major advances led to inline engines gaining ground with several exceptional engines—including the 1,145 cu in (18,760 cm) V-12 Curtiss D-12. Aircraft engines increased in power several-fold over the period, going from a typical 180 hp (130 kW) in the 900 kg (2,000 lb) Fokker D.VII of 1918 to 900 hp (670 kW) in the 2,500 kg (5,500 lb) Curtiss P-36 of 1936. The debate between the sleek in-line engines versus the more reliable radial models continued, with naval air forces preferring the radial engines, and land-based forces often choosing inlines. Radial designs did not require a separate (and vulnerable) radiator, but had increased drag. Inline engines often had a better power-to-weight ratio.",
"title": "Piston engine fighters"
},
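The figures quoted in the preceding paragraph (180 hp in the 900 kg Fokker D.VII versus 900 hp in the 2,500 kg Curtiss P-36) imply a large gain in power-to-weight ratio over the inter-war period. A minimal sketch of that comparison, using only the numbers given in the text, follows; it is illustrative arithmetic, not an official performance figure.

```python
def power_to_weight(power_kw: float, mass_kg: float) -> float:
    """Power-to-weight ratio in kW per tonne."""
    return power_kw / (mass_kg / 1000.0)

# Figures taken from the paragraph above (metric equivalents).
fokker_d7 = power_to_weight(130, 900)     # ~144 kW/tonne (1918)
curtiss_p36 = power_to_weight(670, 2500)  # ~268 kW/tonne (1936)
print(f"Fokker D.VII: {fokker_d7:.0f} kW/t, Curtiss P-36: {curtiss_p36:.0f} kW/t")
```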
{
"paragraph_id": 38,
"text": "Some air forces experimented with \"heavy fighters\" (called \"destroyers\" by the Germans). These were larger, usually twin-engined aircraft, sometimes adaptations of light or medium bomber types. Such designs typically had greater internal fuel capacity (thus longer range) and heavier armament than their single-engine counterparts. In combat, they proved vulnerable to more agile single-engine fighters.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 39,
"text": "The primary driver of fighter innovation, right up to the period of rapid re-armament in the late 1930s, were not military budgets, but civilian aircraft racing. Aircraft designed for these races introduced innovations like streamlining and more powerful engines that would find their way into the fighters of World War II. The most significant of these was the Schneider Trophy races, where competition grew so fierce, only national governments could afford to enter.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 40,
"text": "At the very end of the inter-war period in Europe came the Spanish Civil War. This was just the opportunity the German Luftwaffe, Italian Regia Aeronautica, and the Soviet Union's Voenno-Vozdushnye Sily needed to test their latest aircraft. Each party sent numerous aircraft types to support their sides in the conflict. In the dogfights over Spain, the latest Messerschmitt Bf 109 fighters did well, as did the Soviet Polikarpov I-16. The later German design was earlier in its design cycle, and had more room for development and the lessons learned led to greatly improved models in World War II. The Russians failed to keep up and despite newer models coming into service, I-16s remaining the most common Soviet front-line fighter into 1942 despite being outclassed by the improved Bf 109s in World War II. For their part, the Italians developed several monoplanes such as the Fiat G.50 Freccia, but being short on funds, were forced to continue operating obsolete Fiat CR.42 Falco biplanes.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 41,
"text": "From the early 1930s the Japanese were at war against both the Chinese Nationalists and the Russians in China, and used the experience to improve both training and aircraft, replacing biplanes with modern cantilever monoplanes and creating a cadre of exceptional pilots. In the United Kingdom, at the behest of Neville Chamberlain (more famous for his 'peace in our time' speech), the entire British aviation industry was retooled, allowing it to change quickly from fabric covered metal framed biplanes to cantilever stressed skin monoplanes in time for the war with Germany, a process that France attempted to emulate, but too late to counter the German invasion. The period of improving the same biplane design over and over was now coming to an end, and the Hawker Hurricane and Supermarine Spitfire started to supplant the Gloster Gladiator and Hawker Fury biplanes but many biplanes remained in front-line service well past the start of World War II. While not a combatant in Spain, they too absorbed many of the lessons in time to use them.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 42,
"text": "The Spanish Civil War also provided an opportunity for updating fighter tactics. One of the innovations was the development of the \"finger-four\" formation by the German pilot Werner Mölders. Each fighter squadron (German: Staffel) was divided into several flights (Schwärme) of four aircraft. Each Schwarm was divided into two Rotten, which was a pair of aircraft. Each Rotte was composed of a leader and a wingman. This flexible formation allowed the pilots to maintain greater situational awareness, and the two Rotten could split up at any time and attack on their own. The finger-four would be widely adopted as the fundamental tactical formation during World War Two, including by the British and later the Americans.",
"title": "Piston engine fighters"
},
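The preceding paragraph lays out a simple hierarchy: a Staffel contains several Schwärme, each Schwarm two Rotten, and each Rotte a leader and a wingman. A tiny data-structure sketch of that hierarchy is shown below; the class names mirror the German terms from the text, while the pilot names are placeholders, not historical rosters.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Rotte:
    """A pair of aircraft: a leader and a wingman."""
    leader: str
    wingman: str

@dataclass
class Schwarm:
    """A flight of four aircraft, composed of two Rotten."""
    rotten: List[Rotte]

@dataclass
class Staffel:
    """A fighter squadron divided into several Schwärme."""
    schwaerme: List[Schwarm]

# Illustrative only: placeholder names, one Schwarm of two Rotten.
staffel = Staffel(schwaerme=[
    Schwarm(rotten=[Rotte("Leader 1", "Wingman 1"),
                    Rotte("Leader 2", "Wingman 2")]),
])
print(sum(len(s.rotten) * 2 for s in staffel.schwaerme), "aircraft in the Staffel")
```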
{
"paragraph_id": 43,
"text": "World War II featured fighter combat on a larger scale than any other conflict to date. German Field Marshal Erwin Rommel noted the effect of airpower: \"Anyone who has to fight, even with the most modern weapons, against an enemy in complete command of the air, fights like a savage…\" Throughout the war, fighters performed their conventional role in establishing air superiority through combat with other fighters and through bomber interception, and also often performed roles such as tactical air support and reconnaissance.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 44,
"text": "Fighter design varied widely among combatants. The Japanese and Italians favored lightly armed and armored but highly maneuverable designs such as the Japanese Nakajima Ki-27, Nakajima Ki-43 and Mitsubishi A6M Zero and the Italian Fiat G.50 Freccia and Macchi MC.200. In contrast, designers in the United Kingdom, Germany, the Soviet Union, and the United States believed that the increased speed of fighter aircraft would create g-forces unbearable to pilots who attempted maneuvering dogfights typical of the First World War, and their fighters were instead optimized for speed and firepower. In practice, while light, highly maneuverable aircraft did possess some advantages in fighter-versus-fighter combat, those could usually be overcome by sound tactical doctrine, and the design approach of the Italians and Japanese made their fighters ill-suited as interceptors or attack aircraft.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 45,
"text": "During the invasion of Poland and the Battle of France, Luftwaffe fighters—primarily the Messerschmitt Bf 109—held air superiority, and the Luftwaffe played a major role in German victories in these campaigns. During the Battle of Britain, however, British Hurricanes and Spitfires proved roughly equal to Luftwaffe fighters. Additionally Britain's radar-based Dowding system directing fighters onto German attacks and the advantages of fighting above Britain's home territory allowed the RAF to deny Germany air superiority, saving the UK from possible German invasion and dealing the Axis a major defeat early in the Second World War. On the Eastern Front, Soviet fighter forces were overwhelmed during the opening phases of Operation Barbarossa. This was a result of the tactical surprise at the outset of the campaign, the leadership vacuum within the Soviet military left by the Great Purge, and the general inferiority of Soviet designs at the time, such as the obsolescent Polikarpov I-15 biplane and the I-16. More modern Soviet designs, including the Mikoyan-Gurevich MiG-3, LaGG-3 and Yakolev Yak-1, had not yet arrived in numbers and in any case were still inferior to the Messerschmitt Bf 109. As a result, during the early months of these campaigns, Axis air forces destroyed large numbers of Red Air Force aircraft on the ground and in one-sided dogfights. In the later stages on the Eastern Front, Soviet training and leadership improved, as did their equipment. By 1942 Soviet designs such as the Yakovlev Yak-9 and Lavochkin La-5 had performance comparable to the German Bf 109 and Focke-Wulf Fw 190. Also, significant numbers of British, and later U.S., fighter aircraft were supplied to aid the Soviet war effort as part of Lend-Lease, with the Bell P-39 Airacobra proving particularly effective in the lower-altitude combat typical of the Eastern Front. The Soviets were also helped indirectly by the American and British bombing campaigns, which forced the Luftwaffe to shift many of its fighters away from the Eastern Front in defense against these raids. The Soviets increasingly were able to challenge the Luftwaffe, and while the Luftwaffe maintained a qualitative edge over the Red Air Force for much of the war, the increasing numbers and efficacy of the Soviet Air Force were critical to the Red Army's efforts at turning back and eventually annihilating the Wehrmacht.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 46,
"text": "Meanwhile, air combat on the Western Front had a much different character. Much of this combat focused on the strategic bombing campaigns of the RAF and the USAAF against German industry intended to wear down the Luftwaffe. Axis fighter aircraft focused on defending against Allied bombers while Allied fighters' main role was as bomber escorts. The RAF raided German cities at night, and both sides developed radar-equipped night fighters for these battles. The Americans, in contrast, flew daylight bombing raids into Germany delivering the Combined Bomber Offensive. Unescorted Consolidated B-24 Liberators and Boeing B-17 Flying Fortress bombers, however, proved unable to fend off German interceptors (primarily Bf 109s and Fw 190s). With the later arrival of long range fighters, particularly the North American P-51 Mustang, American fighters were able to escort far into Germany on daylight raids and by ranging ahead attrited the Luftwaffe to establish control of the skies over Western Europe.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 47,
"text": "By the time of Operation Overlord in June 1944, the Allies had gained near complete air superiority over the Western Front. This cleared the way both for intensified strategic bombing of German cities and industries, and for the tactical bombing of battlefield targets. With the Luftwaffe largely cleared from the skies, Allied fighters increasingly served as ground attack aircraft.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 48,
"text": "Allied fighters, by gaining air superiority over the European battlefield, played a crucial role in the eventual defeat of the Axis, which Reichmarshal Hermann Göring, commander of the German Luftwaffe summed up when he said: \"When I saw Mustangs over Berlin, I knew the jig was up.\"",
"title": "Piston engine fighters"
},
{
"paragraph_id": 49,
"text": "Major air combat during the war in the Pacific began with the entry of the Western Allies following Japan's attack against Pearl Harbor. The Imperial Japanese Navy Air Service primarily operated the Mitsubishi A6M Zero, and the Imperial Japanese Army Air Service flew the Nakajima Ki-27 and the Nakajima Ki-43, initially enjoying great success, as these fighters generally had better range, maneuverability, speed and climb rates than their Allied counterparts. Additionally, Japanese pilots were well trained and many were combat veterans from Japan's campaigns in China. They quickly gained air superiority over the Allies, who at this stage of the war were often disorganized, under-trained and poorly equipped, and Japanese air power contributed significantly to their successes in the Philippines, Malaysia and Singapore, the Dutch East Indies and Burma.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 50,
"text": "By mid-1942, the Allies began to regroup and while some Allied aircraft such as the Brewster Buffalo and the P-39 Airacobra were hopelessly outclassed by fighters like Japan's Mitsubishi A6M Zero, others such as the Army's Curtiss P-40 Warhawk and the Navy's Grumman F4F Wildcat possessed attributes such as superior firepower, ruggedness and dive speed, and the Allies soon developed tactics (such as the Thach Weave) to take advantage of these strengths. These changes soon paid dividends, as the Allied ability to deny Japan air superiority was critical to their victories at Coral Sea, Midway, Guadalcanal and New Guinea. In China, the Flying Tigers also used the same tactics with some success, although they were unable to stem the tide of Japanese advances there. By 1943, the Allies began to gain the upper hand in the Pacific Campaign's air campaigns. Several factors contributed to this shift. First, the Lockheed P-38 Lightning and second-generation Allied fighters such as the Grumman F6 Hellcat and later the Vought F4 Corsair, the Republic P-47 Thunderbolt and the North American P-51 Mustang, began arriving in numbers. These fighters outperformed Japanese fighters in all respects except maneuverability. Other problems with Japan's fighter aircraft also became apparent as the war progressed, such as their lack of armor and light armament, which had been typical of all pre-war fighters worldwide, but the problem was particularly difficult to rectify on the Japanese designs. This made them inadequate as either bomber-interceptors or ground-attack aircraft, roles Allied fighters were still able to fill. Most importantly, Japan's training program failed to provide enough well-trained pilots to replace losses. In contrast, the Allies improved both the quantity and quality of pilots graduating from their training programs. By mid-1944, Allied fighters had gained air superiority throughout the theater, which would not be contested again during the war. The extent of Allied quantitative and qualitative superiority by this point in the war was demonstrated during the Battle of the Philippine Sea, a lopsided Allied victory in which Japanese fliers were shot down in such numbers and with such ease that American fighter pilots likened it to a great 'turkey shoot'. Late in the war, Japan began to produce new fighters such as the Nakajima Ki-84 and the Kawanishi N1K to replace the Zero, but only in small numbers, and by then Japan lacked the trained pilots or sufficient fuel to mount an effective challenge to Allied attacks. During the closing stages of the war, Japan's fighter arm could not seriously challenge raids over Japan by American Boeing B-29 Superfortresses, and was largely reduced to Kamikaze attacks.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 51,
"text": "Fighter technology advanced rapidly during the Second World War. Piston-engines, which powered the vast majority of World War II fighters, grew more powerful: at the beginning of the war fighters typically had engines producing between 1,000 hp (750 kW) and 1,400 hp (1,000 kW), while by the end of the war many could produce over 2,000 hp (1,500 kW). For example, the Spitfire, one of the few fighters in continuous production throughout the war, was in 1939 powered by a 1,030 hp (770 kW) Merlin II, while variants produced in 1945 were equipped with the 2,035 hp (1,517 kW) Rolls-Royce Griffon 61. Nevertheless, these fighters could only achieve modest increases in top speed due to problems of compressibility created as aircraft and their propellers approached the sound barrier, and it was apparent that propeller-driven aircraft were approaching the limits of their performance. German jet and rocket-powered fighters entered combat in 1944, too late to impact the war's outcome. The same year the Allies' only operational jet fighter, the Gloster Meteor, also entered service.m World War II fighters also increasingly featured monocoque construction, which improved their aerodynamic efficiency while adding structural strength. Laminar flow wings, which improved high speed performance, also came into use on fighters such as the P-51 Mustang, while the Messerschmitt Me 262 and the Messerschmitt Me 163 featured swept wings that dramatically reduced drag at high subsonic speeds. Armament also advanced during the war. The rifle-caliber machine guns that were common on prewar fighters could not easily down the more rugged warplanes of the era. Air forces began to replace or supplement them with cannons, which fired explosive shells that could blast a hole in an enemy aircraft – rather than relying on kinetic energy from a solid bullet striking a critical component of the aircraft, such as a fuel line or control cable, or the pilot. Cannons could bring down even heavy bombers with just a few hits, but their slower rate of fire made it difficult to hit fast-moving fighters in a dogfight. Eventually, most fighters mounted cannons, sometimes in combination with machine guns. The British epitomized this shift. Their standard early war fighters mounted eight .303 in (7.7 mm) caliber machine guns, but by mid-war they often featured a combination of machine guns and 20 mm (0.79 in) cannons, and late in the war often only cannons. The Americans, in contrast, had problems producing a cannon design, so instead placed multiple .50 in (12.7 mm) heavy machine guns on their fighters. Fighters were also increasingly fitted with bomb racks and air-to-surface ordnance such as bombs or rockets beneath their wings, and pressed into close air support roles as fighter-bombers. Although they carried less ordnance than light and medium bombers, and generally had a shorter range, they were cheaper to produce and maintain and their maneuverability made it easier for them to hit moving targets such as motorized vehicles. Moreover, if they encountered enemy fighters, their ordnance (which reduced lift and increased drag and therefore decreased performance) could be jettisoned and they could engage enemy fighters, which eliminated the need for fighter escorts that bombers required.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 52,
"text": "Heavily armed fighters such as Germany's Focke-Wulf Fw 190, Britain's Hawker Typhoon and Hawker Tempest, and America's Curtiss P-40, F4 Corsair, P-47 Thunderbolt and P-38 Lightning all excelled as fighter-bombers, and since the Second World War ground attack has become an important secondary capability of many fighters. World War II also saw the first use of airborne radar on fighters. The primary purpose of these radars was to help night fighters locate enemy bombers and fighters. Because of the bulkiness of these radar sets, they could not be carried on conventional single-engined fighters and instead were typically retrofitted to larger heavy fighters or light bombers such as Germany's Messerschmitt Bf 110 and Junkers Ju 88, Britain's de Havilland Mosquito and Bristol Beaufighter, and America's Douglas A-20, which then served as night fighters. The Northrop P-61 Black Widow, a purpose-built night fighter, was the only fighter of the war that incorporated radar into its original design. Britain and America cooperated closely in the development of airborne radar, and Germany's radar technology generally lagged slightly behind Anglo-American efforts, while other combatants developed few radar-equipped fighters.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 53,
"text": "Several prototype fighter programs begun early in 1945 continued on after the war and led to advanced piston-engine fighters that entered production and operational service in 1946. A typical example is the Lavochkin La-9 'Fritz', which was an evolution of the successful wartime Lavochkin La-7 'Fin'. Working through a series of prototypes, the La-120, La-126 and La-130, the Lavochkin design bureau sought to replace the La-7's wooden airframe with a metal one, as well as fit a laminar flow wing to improve maneuver performance, and increased armament. The La-9 entered service in August 1946 and was produced until 1948; it also served as the basis for the development of a long-range escort fighter, the La-11 'Fang', of which nearly 1200 were produced 1947–51. Over the course of the Korean War, however, it became obvious that the day of the piston-engined fighter was coming to a close and that the future would lie with the jet fighter.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 54,
"text": "This period also witnessed experimentation with jet-assisted piston engine aircraft. La-9 derivatives included examples fitted with two underwing auxiliary pulsejet engines (the La-9RD) and a similarly mounted pair of auxiliary ramjet engines (the La-138); however, neither of these entered service. One that did enter service – with the U.S. Navy in March 1945 – was the Ryan FR-1 Fireball; production was halted with the war's end on VJ-Day, with only 66 having been delivered, and the type was withdrawn from service in 1947. The USAAF had ordered its first 13 mixed turboprop-turbojet-powered pre-production prototypes of the Consolidated Vultee XP-81 fighter, but this program was also canceled by VJ Day, with 80% of the engineering work completed.",
"title": "Piston engine fighters"
},
{
"paragraph_id": 55,
"text": "The first rocket-powered aircraft was the Lippisch Ente, which made a successful maiden flight in March 1928. The only pure rocket aircraft ever mass-produced was the Messerschmitt Me 163B Komet in 1944, one of several German World War II projects aimed at developing high speed, point-defense aircraft. Later variants of the Me 262 (C-1a and C-2b) were also fitted with \"mixed-power\" jet/rocket powerplants, while earlier models were fitted with rocket boosters, but were not mass-produced with these modifications.",
"title": "Rocket-powered fighters"
},
{
"paragraph_id": 56,
"text": "The USSR experimented with a rocket-powered interceptor in the years immediately following World War II, the Mikoyan-Gurevich I-270. Only two were built.",
"title": "Rocket-powered fighters"
},
{
"paragraph_id": 57,
"text": "In the 1950s, the British developed mixed-power jet designs employing both rocket and jet engines to cover the performance gap that existed in turbojet designs. The rocket was the main engine for delivering the speed and height required for high-speed interception of high-level bombers and the turbojet gave increased fuel economy in other parts of flight, most notably to ensure the aircraft was able to make a powered landing rather than risking an unpredictable gliding return.",
"title": "Rocket-powered fighters"
},
{
"paragraph_id": 58,
"text": "The Saunders-Roe SR.53 was a successful design, and was planned for production when economics forced the British to curtail most aircraft programs in the late 1950s. Furthermore, rapid advancements in jet engine technology rendered mixed-power aircraft designs like Saunders-Roe's SR.53 (and the following SR.177) obsolete. The American Republic XF-91 Thunderceptor –the first U.S. fighter to exceed Mach 1 in level flight– met a similar fate for the same reason, and no hybrid rocket-and-jet-engine fighter design has ever been placed into service.",
"title": "Rocket-powered fighters"
},
{
"paragraph_id": 59,
"text": "The only operational implementation of mixed propulsion was Rocket-Assisted Take Off (RATO), a system rarely used in fighters, such as with the zero-length launch, RATO-based takeoff scheme from special launch platforms, tested out by both the United States and the Soviet Union, and made obsolete with advancements in surface-to-air missile technology.",
"title": "Rocket-powered fighters"
},
{
"paragraph_id": 60,
"text": "It has become common in the aviation community to classify jet fighters by \"generations\" for historical purposes. No official definitions of these generations exist; rather, they represent the notion of stages in the development of fighter-design approaches, performance capabilities, and technological evolution. Different authors have packed jet fighters into different generations. For example, Richard P. Hallion of the Secretary of the Air Force's Action Group classified the F-16 as a sixth-generation jet fighter.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 61,
"text": "The timeframes associated with each generation remain inexact and are only indicative of the period during which their design philosophies and technology employment enjoyed a prevailing influence on fighter design and development. These timeframes also encompass the peak period of service entry for such aircraft.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 62,
"text": "The first generation of jet fighters comprised the initial, subsonic jet-fighter designs introduced late in World War II (1939–1945) and in the early post-war period. They differed little from their piston-engined counterparts in appearance, and many employed unswept wings. Guns and cannon remained the principal armament. The need to obtain a decisive advantage in maximum speed pushed the development of turbojet-powered aircraft forward. Top speeds for fighters rose steadily throughout World War II as more powerful piston engines developed, and they approached transonic flight-speeds where the efficiency of propellers drops off, making further speed increases nearly impossible.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 63,
"text": "The first jets developed during World War II and saw combat in the last two years of the war. Messerschmitt developed the first operational jet fighter, the Me 262A, primarily serving with the Luftwaffe's JG 7, the world's first jet-fighter wing. It was considerably faster than contemporary piston-driven aircraft, and in the hands of a competent pilot, proved quite difficult for Allied pilots to defeat. The Luftwaffe never deployed the design in numbers sufficient to stop the Allied air campaign, and a combination of fuel shortages, pilot losses, and technical difficulties with the engines kept the number of sorties low. Nevertheless, the Me 262 indicated the obsolescence of piston-driven aircraft. Spurred by reports of the German jets, Britain's Gloster Meteor entered production soon after, and the two entered service around the same time in 1944. Meteors commonly served to intercept the V-1 flying bomb, as they were faster than available piston-engined fighters at the low altitudes used by the flying bombs. Nearer the end of World War II, the first military jet-powered light-fighter design, the Luftwaffe intended the Heinkel He 162A Spatz (sparrow) to serve as a simple jet fighter for German home defense, with a few examples seeing squadron service with JG 1 by April 1945. By the end of the war almost all work on piston-powered fighters had ended. A few designs combining piston- and jet-engines for propulsion – such as the Ryan FR Fireball – saw brief use, but by the end of the 1940s virtually all new fighters were jet-powered.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 64,
"text": "Despite their advantages, the early jet-fighters were far from perfect. The operational lifespan of turbines were very short and engines were temperamental, while power could be adjusted only slowly and acceleration was poor (even if top speed was higher) compared to the final generation of piston fighters. Many squadrons of piston-engined fighters remained in service until the early to mid-1950s, even in the air forces of the major powers (though the types retained were the best of the World War II designs). Innovations including ejection seats, air brakes and all-moving tailplanes became widespread in this period.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 65,
"text": "The Americans began using jet fighters operationally after World War II, the wartime Bell P-59 having proven a failure. The Lockheed P-80 Shooting Star (soon re-designated F-80) was more prone to wave drag than the swept-wing Me 262, but had a cruise speed (660 km/h (410 mph)) as high as the maximum speed attainable by many piston-engined fighters. The British designed several new jets, including the distinctive single-engined twin boom de Havilland Vampire which Britain sold to the air forces of many nations.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 66,
"text": "The British transferred the technology of the Rolls-Royce Nene jet-engine to the Soviets, who soon put it to use in their advanced Mikoyan-Gurevich MiG-15 fighter, which used fully swept wings that allowed flying closer to the speed of sound than straight-winged designs such as the F-80. The MiG-15s' top speed of 1,075 km/h (668 mph) proved quite a shock to the American F-80 pilots who encountered them in the Korean War, along with their armament of two 23 mm (0.91 in) cannons and a single 37 mm (1.5 in) cannon. Nevertheless, in the first jet-versus-jet dogfight, which occurred during the Korean War on 8 November 1950, an F-80 shot down two North Korean MiG-15s.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 67,
"text": "The Americans responded by rushing their own swept-wing fighter – the North American F-86 Sabre – into battle against the MiGs, which had similar transsonic performance. The two aircraft had different strengths and weaknesses, but were similar enough that victory could go either way. While the Sabres focused primarily on downing MiGs and scored favorably against those flown by the poorly-trained North Koreans, the MiGs in turn decimated US bomber formations and forced the withdrawal of numerous American types from operational service.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 68,
"text": "The world's navies also transitioned to jets during this period, despite the need for catapult-launching of the new aircraft. The U.S. Navy adopted the Grumman F9F Panther as their primary jet fighter in the Korean War period, and it was one of the first jet fighters to employ an afterburner. The de Havilland Sea Vampire became the Royal Navy's first jet fighter. Radar was used on specialized night-fighters such as the Douglas F3D Skyknight, which also downed MiGs over Korea, and later fitted to the McDonnell F2H Banshee and swept-wing Vought F7U Cutlass and McDonnell F3H Demon as all-weather / night fighters. Early versions of Infra-red (IR) air-to-air missiles (AAMs) such as the AIM-9 Sidewinder and radar-guided missiles such as the AIM-7 Sparrow whose descendants remain in use as of 2021, were first introduced on swept-wing subsonic Demon and Cutlass naval fighters.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 69,
"text": "Technological breakthroughs, lessons learned from the aerial battles of the Korean War, and a focus on conducting operations in a nuclear warfare environment shaped the development of second-generation fighters. Technological advances in aerodynamics, propulsion and aerospace building-materials (primarily aluminum alloys) permitted designers to experiment with aeronautical innovations such as swept wings, delta wings, and area-ruled fuselages. Widespread use of afterburning turbojet engines made these the first production aircraft to break the sound barrier, and the ability to sustain supersonic speeds in level flight became a common capability amongst fighters of this generation.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 70,
"text": "Fighter designs also took advantage of new electronics technologies that made effective radars small enough to carry aboard smaller aircraft. Onboard radars permitted detection of enemy aircraft beyond visual range, thereby improving the handoff of targets by longer-ranged ground-based warning- and tracking-radars. Similarly, advances in guided-missile development allowed air-to-air missiles to begin supplementing the gun as the primary offensive weapon for the first time in fighter history. During this period, passive-homing infrared-guided (IR) missiles became commonplace, but early IR missile sensors had poor sensitivity and a very narrow field of view (typically no more than 30°), which limited their effective use to only close-range, tail-chase engagements. Radar-guided (RF) missiles were introduced as well, but early examples proved unreliable. These semi-active radar homing (SARH) missiles could track and intercept an enemy aircraft \"painted\" by the launching aircraft's onboard radar. Medium- and long-range RF air-to-air missiles promised to open up a new dimension of \"beyond-visual-range\" (BVR) combat, and much effort concentrated on further development of this technology.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 71,
"text": "The prospect of a potential third world war featuring large mechanized armies and nuclear-weapon strikes led to a degree of specialization along two design approaches: interceptors, such as the English Electric Lightning and Mikoyan-Gurevich MiG-21F; and fighter-bombers, such as the Republic F-105 Thunderchief and the Sukhoi Su-7B. Dogfighting, per se, became de-emphasized in both cases. The interceptor was an outgrowth of the vision that guided missiles would completely replace guns and combat would take place at beyond-visual ranges. As a result, strategists designed interceptors with a large missile-payload and a powerful radar, sacrificing agility in favor of high speed, altitude ceiling and rate of climb. With a primary air-defense role, emphasis was placed on the ability to intercept strategic bombers flying at high altitudes. Specialized point-defense interceptors often had limited range and few, if any, ground-attack capabilities. Fighter-bombers could swing between air-superiority and ground-attack roles, and were often designed for a high-speed, low-altitude dash to deliver their ordnance. Television- and IR-guided air-to-surface missiles were introduced to augment traditional gravity bombs, and some were also equipped to deliver a nuclear bomb.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 72,
"text": "The third generation witnessed continued maturation of second-generation innovations, but it is most marked by renewed emphases on maneuverability and on traditional ground-attack capabilities. Over the course of the 1960s, increasing combat experience with guided missiles demonstrated that combat would devolve into close-in dogfights. Analog avionics began to appear, replacing older \"steam-gauge\" cockpit instrumentation. Enhancements to the aerodynamic performance of third-generation fighters included flight control surfaces such as canards, powered slats, and blown flaps. A number of technologies would be tried for vertical/short takeoff and landing, but thrust vectoring would be successful on the Harrier.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 73,
"text": "Growth in air-combat capability focused on the introduction of improved air-to-air missiles, radar systems, and other avionics. While guns remained standard equipment (early models of F-4 being a notable exception), air-to-air missiles became the primary weapons for air-superiority fighters, which employed more sophisticated radars and medium-range RF AAMs to achieve greater \"stand-off\" ranges, however, kill probabilities proved unexpectedly low for RF missiles due to poor reliability and improved electronic countermeasures (ECM) for spoofing radar seekers. Infrared-homing AAMs saw their fields of view expand to 45°, which strengthened their tactical usability. Nevertheless, the low dogfight loss-exchange ratios experienced by American fighters in the skies over Vietnam led the U.S. Navy to establish its famous \"TOPGUN\" fighter-weapons school, which provided a graduate-level curriculum to train fleet fighter-pilots in advanced Air Combat Maneuvering (ACM) and Dissimilar air combat training (DACT) tactics and techniques. This era also saw an expansion in ground-attack capabilities, principally in guided missiles, and witnessed the introduction of the first truly effective avionics for enhanced ground attack, including terrain-avoidance systems. Air-to-surface missiles (ASM) equipped with electro-optical (E-O) contrast seekers – such as the initial model of the widely used AGM-65 Maverick – became standard weapons, and laser-guided bombs (LGBs) became widespread in an effort to improve precision-attack capabilities. Guidance for such precision-guided munitions (PGM) was provided by externally-mounted targeting pods, which were introduced in the mid-1960s.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 74,
"text": "The third generation also led to the development of new automatic-fire weapons, primarily chain-guns that use an electric motor to drive the mechanism of a cannon. This allowed a plane to carry a single multi-barrel weapon (such as the 20 mm (0.79 in) Vulcan), and provided greater accuracy and rates of fire. Powerplant reliability increased, and jet engines became \"smokeless\" to make it harder to sight aircraft at long distances.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 75,
"text": "Dedicated ground-attack aircraft (like the Grumman A-6 Intruder, SEPECAT Jaguar and LTV A-7 Corsair II) offered longer range, more sophisticated night-attack systems or lower cost than supersonic fighters. With variable-geometry wings, the supersonic F-111 introduced the Pratt & Whitney TF30, the first turbofan equipped with afterburner. The ambitious project sought to create a versatile common fighter for many roles and services. It would serve well as an all-weather bomber, but lacked the performance to defeat other fighters. The McDonnell F-4 Phantom was designed to capitalize on radar and missile technology as an all-weather interceptor, but emerged as a versatile strike-bomber nimble enough to prevail in air combat, adopted by the U.S. Navy, Air Force and Marine Corps. Despite numerous shortcomings that would not be fully addressed until newer fighters, the Phantom claimed 280 aerial kills (more than any other U.S. fighter) over Vietnam. With range and payload capabilities that rivaled that of World War II bombers such as B-24 Liberator, the Phantom would become a highly successful multirole aircraft.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 76,
"text": "Fourth-generation fighters continued the trend towards multirole configurations, and were equipped with increasingly sophisticated avionics- and weapon-systems. Fighter designs were significantly influenced by the Energy-Maneuverability (E-M) theory developed by Colonel John Boyd and mathematician Thomas Christie, based upon Boyd's combat experience in the Korean War and as a fighter-tactics instructor during the 1960s. E-M theory emphasized the value of aircraft-specific energy maintenance as an advantage in fighter combat. Boyd perceived maneuverability as the primary means of getting \"inside\" an adversary's decision-making cycle, a process Boyd called the \"OODA loop\" (for \"Observation-Orientation-Decision-Action\"). This approach emphasized aircraft designs capable of performing \"fast transients\" – quick changes in speed, altitude, and direction – as opposed to relying chiefly on high speed alone.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 77,
"text": "E-M characteristics were first applied to the McDonnell Douglas F-15 Eagle, but Boyd and his supporters believed these performance parameters called for a small, lightweight aircraft with a larger, higher-lift wing. The small size would minimize drag and increase the thrust-to-weight ratio, while the larger wing would minimize wing loading; while the reduced wing loading tends to lower top speed and can cut range, it increases payload capacity and the range reduction can be compensated for by increased fuel in the larger wing. The efforts of Boyd's \"Fighter mafia\" would result in the General Dynamics F-16 Fighting Falcon (now Lockheed Martin's).",
"title": "Jet-powered fighters"
},
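The paragraph above turns on two simple ratios: wing loading (mass per unit wing area) and thrust-to-weight ratio. A minimal sketch of that comparison is given below; the "light" and "heavy" figures are purely illustrative assumptions chosen to show the trend the text describes, not specifications of any real aircraft.

```python
G = 9.81  # m/s^2

def wing_loading(mass_kg: float, wing_area_m2: float) -> float:
    """Wing loading in kg per square metre."""
    return mass_kg / wing_area_m2

def thrust_to_weight(thrust_kn: float, mass_kg: float) -> float:
    """Dimensionless thrust-to-weight ratio."""
    return (thrust_kn * 1000.0) / (mass_kg * G)

# Illustrative values only: a lighter airframe with a relatively large wing
# (the "lightweight fighter" idea) versus a heavier design.
designs = {
    "light": dict(mass_kg=12_000, wing_area_m2=28, thrust_kn=110),
    "heavy": dict(mass_kg=20_000, wing_area_m2=38, thrust_kn=130),
}
for name, d in designs.items():
    print(f"{name}: wing loading {wing_loading(d['mass_kg'], d['wing_area_m2']):.0f} kg/m^2, "
          f"T/W {thrust_to_weight(d['thrust_kn'], d['mass_kg']):.2f}")
```

Lower wing loading and higher thrust-to-weight both favour the "fast transients" that E-M theory prizes, which is the trade-off the lightweight-fighter advocates were arguing for.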
{
"paragraph_id": 78,
"text": "The F-16's maneuverability was further enhanced by its slight aerodynamic instability. This technique, called \"relaxed static stability\" (RSS), was made possible by introduction of the \"fly-by-wire\" (FBW) flight-control system (FLCS), which in turn was enabled by advances in computers and in system-integration techniques. Analog avionics, required to enable FBW operations, became a fundamental requirement, but began to be replaced by digital flight-control systems in the latter half of the 1980s. Likewise, Full Authority Digital Engine Controls (FADEC) to electronically manage powerplant performance was introduced with the Pratt & Whitney F100 turbofan. The F-16's sole reliance on electronics and wires to relay flight commands, instead of the usual cables and mechanical linkage controls, earned it the sobriquet of \"the electric jet\". Electronic FLCS and FADEC quickly became essential components of all subsequent fighter designs.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 79,
"text": "Other innovative technologies introduced in fourth-generation fighters included pulse-Doppler fire-control radars (providing a \"look-down/shoot-down\" capability), head-up displays (HUD), \"hands on throttle-and-stick\" (HOTAS) controls, and multi-function displays (MFD), all essential equipment as of 2019. Aircraft designers began to incorporate composite materials in the form of bonded-aluminum honeycomb structural elements and graphite epoxy laminate skins to reduce weight. Infrared search-and-track (IRST) sensors became widespread for air-to-ground weapons delivery, and appeared for air-to-air combat as well. \"All-aspect\" IR AAM became standard air superiority weapons, which permitted engagement of enemy aircraft from any angle (although the field of view remained relatively limited). The first long-range active-radar-homing RF AAM entered service with the AIM-54 Phoenix, which solely equipped the Grumman F-14 Tomcat, one of the few variable-sweep-wing fighter designs to enter production. Even with the tremendous advancement of air-to-air missiles in this era, internal guns were standard equipment.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 80,
"text": "Another revolution came in the form of a stronger reliance on ease of maintenance, which led to standardization of parts, reductions in the numbers of access panels and lubrication points, and overall parts reduction in more complicated equipment like the engines. Some early jet fighters required 50 man-hours of work by a ground crew for every hour the aircraft was in the air; later models substantially reduced this to allow faster turn-around times and more sorties in a day. Some modern military aircraft only require 10-man-hours of work per hour of flight time, and others are even more efficient.",
"title": "Jet-powered fighters"
},
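A back-of-envelope sketch of what the maintenance figures above imply for turnaround effort follows. The 50 and 10 man-hours per flight hour come from the paragraph; the sortie length and crew size are assumptions made purely for illustration.

```python
def crew_hours_per_sortie(mmh_per_flight_hour: float, sortie_hours: float) -> float:
    """Total ground-crew man-hours generated by a single sortie."""
    return mmh_per_flight_hour * sortie_hours

SORTIE_HOURS = 1.5   # assumed sortie length
CREW_SIZE = 6        # assumed ground-crew size

for mmh in (50, 10):  # early jets vs. some modern aircraft, per the text
    total = crew_hours_per_sortie(mmh, SORTIE_HOURS)
    print(f"{mmh} MMH/FH -> {total:.0f} man-hours per sortie, "
          f"about {total / CREW_SIZE:.1f} clock-hours for a {CREW_SIZE}-person crew")
```

The drop from roughly 12.5 to 2.5 clock-hours of crew work per sortie (under these assumed numbers) is what allows the faster turnarounds and higher daily sortie rates the paragraph describes.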
{
"paragraph_id": 81,
"text": "Aerodynamic innovations included variable-camber wings and exploitation of the vortex lift effect to achieve higher angles of attack through the addition of leading-edge extension devices such as strakes.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 82,
"text": "Unlike interceptors of the previous eras, most fourth-generation air-superiority fighters were designed to be agile dogfighters (although the Mikoyan MiG-31 and Panavia Tornado ADV are notable exceptions). The continually rising cost of fighters, however, continued to emphasize the value of multirole fighters. The need for both types of fighters led to the \"high/low mix\" concept, which envisioned a high-capability and high-cost core of dedicated air-superiority fighters (like the F-15 and Su-27) supplemented by a larger contingent of lower-cost multi-role fighters (such as the F-16 and MiG-29).",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 83,
"text": "Most fourth-generation fighters, such as the McDonnell Douglas F/A-18 Hornet, HAL Tejas, JF-17 and Dassault Mirage 2000, are true multirole warplanes, designed as such from the start. This was facilitated by multimode avionics that could switch seamlessly between air and ground modes. The earlier approaches of adding on strike capabilities or designing separate models specialized for different roles generally became passé (with the Panavia Tornado being an exception in this regard). Attack roles were generally assigned to dedicated ground-attack aircraft such as the Sukhoi Su-25 and the A-10 Thunderbolt II.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 84,
"text": "A typical US Air Force fighter wing of the period might contain a mix of one air superiority squadron (F-15C), one strike fighter squadron (F-15E), and two multirole fighter squadrons (F-16C). Perhaps the most novel technology introduced for combat aircraft was stealth, which involves the use of special \"low-observable\" (L-O) materials and design techniques to reduce the susceptibility of an aircraft to detection by the enemy's sensor systems, particularly radars. The first stealth aircraft introduced were the Lockheed F-117 Nighthawk attack aircraft (introduced in 1983) and the Northrop Grumman B-2 Spirit bomber (first flew in 1989). Although no stealthy fighters per se appeared among the fourth generation, some radar-absorbent coatings and other L-O treatments developed for these programs are reported to have been subsequently applied to fourth-generation fighters.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 85,
"text": "The end of the Cold War in 1992 led many governments to significantly decrease military spending as a \"peace dividend\". Air force inventories were cut. Research and development programs working on \"fifth-generation\" fighters took serious hits. Many programs were canceled during the first half of the 1990s, and those that survived were \"stretched out\". While the practice of slowing the pace of development reduces annual investment expenses, it comes at the penalty of increased overall program and unit costs over the long-term. In this instance, however, it also permitted designers to make use of the tremendous achievements being made in the fields of computers, avionics and other flight electronics, which had become possible largely due to the advances made in microchip and semiconductor technologies in the 1980s and 1990s. This opportunity enabled designers to develop fourth-generation designs – or redesigns – with significantly enhanced capabilities. These improved designs have become known as \"Generation 4.5\" fighters, recognizing their intermediate nature between the 4th and 5th generations, and their contribution in furthering development of individual fifth-generation technologies.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 86,
"text": "The primary characteristics of this sub-generation are the application of advanced digital avionics and aerospace materials, modest signature reduction (primarily RF \"stealth\"), and highly integrated systems and weapons. These fighters have been designed to operate in a \"network-centric\" battlefield environment and are principally multirole aircraft. Key weapons technologies introduced include beyond-visual-range (BVR) AAMs; Global Positioning System (GPS)–guided weapons, solid-state phased-array radars; helmet-mounted sights; and improved secure, jamming-resistant datalinks. Thrust vectoring to further improve transient maneuvering capabilities has also been adopted by many 4.5th generation fighters, and uprated powerplants have enabled some designs to achieve a degree of \"supercruise\" ability. Stealth characteristics are focused primarily on frontal-aspect radar cross section (RCS) signature-reduction techniques including radar-absorbent materials (RAM), L-O coatings and limited shaping techniques.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 87,
"text": "\"Half-generation\" designs are either based on existing airframes or are based on new airframes following similar design theory to previous iterations; however, these modifications have introduced the structural use of composite materials to reduce weight, greater fuel fractions to increase range, and signature reduction treatments to achieve lower RCS compared to their predecessors. Prime examples of such aircraft, which are based on new airframe designs making extensive use of carbon-fiber composites, include the Eurofighter Typhoon, Dassault Rafale, Saab JAS 39 Gripen, and HAL Tejas Mark 1A.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 88,
"text": "Apart from these fighter jets, most of the 4.5 generation aircraft are actually modified variants of existing airframes from the earlier fourth generation fighter jets. Such fighter jets are generally heavier and examples include the Boeing F/A-18E/F Super Hornet, which is an evolution of the F/A-18 Hornet, the F-15E Strike Eagle, which is a ground-attack/multi-role variant of the F-15 Eagle, the Su-30SM and Su-35S modified variants of the Sukhoi Su-27, and the MiG-35 upgraded version of the Mikoyan MiG-29. The Su-30SM/Su-35S and MiG-35 feature thrust vectoring engine nozzles to enhance maneuvering. The upgraded version of F-16 is also considered a member of the 4.5 generation aircraft.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 89,
"text": "Generation 4.5 fighters first entered service in the early 1990s, and most of them are still being produced and evolved. It is quite possible that they may continue in production alongside fifth-generation fighters due to the expense of developing the advanced level of stealth technology needed to achieve aircraft designs featuring very low observables (VLO), which is one of the defining features of fifth-generation fighters. Of the 4.5th generation designs, the Strike Eagle, Super Hornet, Typhoon, Gripen, and Rafale have been used in combat.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 90,
"text": "The U.S. government has defined 4.5 generation fighter aircraft as those that \"(1) have advanced capabilities, including— (A) AESA radar; (B) high capacity data-link; and (C) enhanced avionics; and (2) have the ability to deploy current and reasonably foreseeable advanced armaments.\"",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 91,
"text": "Currently the cutting edge of fighter design, fifth-generation fighters are characterized by being designed from the start to operate in a network-centric combat environment, and to feature extremely low, all-aspect, multi-spectral signatures employing advanced materials and shaping techniques. They have multifunction AESA radars with high-bandwidth, low-probability of intercept (LPI) data transmission capabilities. The infra-red search and track sensors incorporated for air-to-air combat as well as for air-to-ground weapons delivery in the 4.5th generation fighters are now fused in with other sensors for Situational Awareness IRST or SAIRST, which constantly tracks all targets of interest around the aircraft so the pilot need not guess when he glances. These sensors, along with advanced avionics, glass cockpits, helmet-mounted sights (not currently on F-22), and improved secure, jamming-resistant LPI datalinks are highly integrated to provide multi-platform, multi-sensor data fusion for vastly improved situational awareness while easing the pilot's workload. Avionics suites rely on extensive use of very high-speed integrated circuit (VHSIC) technology, common modules, and high-speed data buses. Overall, the integration of all these elements is claimed to provide fifth-generation fighters with a \"first-look, first-shot, first-kill capability\".",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 92,
"text": "A key attribute of fifth-generation fighters is a small radar cross-section. Great care has been taken in designing its layout and internal structure to minimize RCS over a broad bandwidth of detection and tracking radar frequencies; furthermore, to maintain its VLO signature during combat operations, primary weapons are carried in internal weapon bays that are only briefly opened to permit weapon launch. Furthermore, stealth technology has advanced to the point where it can be employed without a tradeoff with aerodynamics performance, in contrast to previous stealth efforts. Some attention has also been paid to reducing IR signatures, especially on the F-22. Detailed information on these signature-reduction techniques is classified, but in general includes special shaping approaches, thermoset and thermoplastic materials, extensive structural use of advanced composites, conformal sensors, heat-resistant coatings, low-observable wire meshes to cover intake and cooling vents, heat ablating tiles on the exhaust troughs (seen on the Northrop YF-23), and coating internal and external metal areas with radar-absorbent materials and paint (RAM/RAP).",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 93,
"text": "The AESA radar offers unique capabilities for fighters (and it is also quickly becoming essential for Generation 4.5 aircraft designs, as well as being retrofitted onto some fourth-generation aircraft). In addition to its high resistance to ECM and LPI features, it enables the fighter to function as a sort of \"mini-AWACS\", providing high-gain electronic support measures (ESM) and electronic warfare (EW) jamming functions. Other technologies common to this latest generation of fighters includes integrated electronic warfare system (INEWS) technology, integrated communications, navigation, and identification (CNI) avionics technology, centralized \"vehicle health monitoring\" systems for ease of maintenance, fiber optics data transmission, stealth technology and even hovering capabilities. Maneuver performance remains important and is enhanced by thrust-vectoring, which also helps reduce takeoff and landing distances. Supercruise may or may not be featured; it permits flight at supersonic speeds without the use of the afterburner – a device that significantly increases IR signature when used in full military power.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 94,
"text": "Such aircraft are sophisticated and expensive. The fifth generation was ushered in by the Lockheed Martin/Boeing F-22 Raptor in late 2005. The U.S. Air Force originally planned to acquire 650 F-22s, but now only 187 will be built. As a result, its unit flyaway cost (FAC) is around US$150 million. To spread the development costs – and production base – more broadly, the Joint Strike Fighter (JSF) program enrolls eight other countries as cost- and risk-sharing partners. Altogether, the nine partner nations anticipate procuring over 3,000 Lockheed Martin F-35 Lightning II fighters at an anticipated average FAC of $80–85 million. The F-35, however, is designed to be a family of three aircraft, a conventional take-off and landing (CTOL) fighter, a short take-off and vertical landing (STOVL) fighter, and a Catapult Assisted Take Off But Arrested Recovery (CATOBAR) fighter, each of which has a different unit price and slightly varying specifications in terms of fuel capacity (and therefore range), size and payload.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 95,
"text": "Other countries have initiated fifth-generation fighter development projects. In December 2010, it was discovered that China is developing the 5th generation fighter Chengdu J-20. The J-20 took its maiden flight in January 2011. The Shenyang FC-31 took its maiden flight on 31 October 2012, and developed a carrier-based version based on Chinese aircraft carriers. United Aircraft Corporation with Russia's Mikoyan LMFS and Sukhoi Su-75 Checkmate plan, Sukhoi Su-57 became the first fifth-generation fighter jets in service with the Russian Aerospace Forces on 2020, and launch missiles in the Russo-Ukrainian War in 2022. Japan is exploring its technical feasibility to produce fifth-generation fighters. India is developing the Advanced Medium Combat Aircraft (AMCA), a medium weight stealth fighter jet designated to enter into serial production by late 2030s. India also had initiated a joint fifth generation heavy fighter with Russia called the FGFA. As of 2018 May, the project is suspected to have not yielded desired progress or results for India and has been put on hold or dropped altogether. Other countries considering fielding an indigenous or semi-indigenous advanced fifth generation aircraft include South Korea, Sweden, Turkey and Pakistan.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 96,
"text": "As of November 2018, France, Germany, China, Japan, Russia, the United Kingdom and the United States have announced the development of a sixth-generation aircraft program.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 97,
"text": "France and Germany will develop a joint sixth-generation fighter to replace their current fleet of Dassault Rafales, Eurofighter Typhoons, and Panavia Tornados by 2035. The overall development will be led by a collaboration of Dassault and Airbus, while the engines will reportedly be jointly developed by Safran and MTU Aero Engines. Thales and MBDA are also seeking a stake in the project. Spain officially joined the Franco-German project to develop a Next-Generation Fighter (NGF) that will form part of a broader Future Combat Air Systems (FCAS) with the signing of a letter of intent (LOI) on February 14, 2019.\" />",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 98,
"text": "Currently at the concept stage, the first sixth-generation jet fighter is expected to enter service in the United States Navy in 2025–30 period. The USAF seeks a new fighter for the 2030–50 period named the \"Next Generation Tactical Aircraft\" (\"Next Gen TACAIR\"). The US Navy looks to replace its F/A-18E/F Super Hornets beginning in 2025 with the Next Generation Air Dominance air superiority fighter.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 99,
"text": "The United Kingdom's proposed stealth fighter is being developed by a European consortium called Team Tempest, consisting of BAE Systems, Rolls-Royce, Leonardo S.p.A. and MBDA. The aircraft is intended to enter service in 2035.",
"title": "Jet-powered fighters"
},
{
"paragraph_id": 100,
"text": "Fighters were typically armed with guns only for air to air combat up through the late 1950s, though unguided rockets for mostly air to ground use and limited air to air use were deployed in WWII. From the late 1950s forward guided missiles came into use for air to air combat. Throughout this history fighters which by surprise or maneuver attain a good firing position have achieved the kill about one third to one half the time, no matter what weapons were carried. The only major historic exception to this has been the low effectiveness shown by guided missiles in the first one to two decades of their existence. From WWI to the present, fighter aircraft have featured machine guns and automatic cannons as weapons, and they are still considered as essential back-up weapons today. The power of air-to-air guns has increased greatly over time, and has kept them relevant in the guided missile era. In WWI two rifle (approximately 0.30) caliber machine guns was the typical armament, producing a weight of fire of about 0.4 kg (0.88 lb) per second. In WWII rifle caliber machine guns also remained common, though usually in larger numbers or supplemented with much heavier 0.50 caliber machine guns or cannons. The standard WWII American fighter armament of six 0.50-cal (12.7mm) machine guns fired a bullet weight of approximately 3.7 kg/sec (8.1 lbs/sec), at a muzzle velocity of 856 m/s (2,810 ft/s). British and German aircraft tended to use a mix of machine guns and autocannon, the latter firing explosive projectiles. Later British fighters were exclusively cannon-armed, the US were not able to produce a reliable cannon in high numbers and most fighters remained equipped only with heavy machine guns despite the US Navy pressing for a change to 20mm.",
"title": "Weapons"
},
{
"paragraph_id": 101,
"text": "Post war 20–30 mm revolver cannon and rotary cannon were introduced. The modern M61 Vulcan 20 mm rotary cannon that is standard on current American fighters fires a projectile weight of about 10 kg/s (22 lb/s), nearly three times that of six 0.50-cal machine guns, with higher velocity of 1,052 m/s (3450 ft/s) supporting a flatter trajectory, and with exploding projectiles. Modern fighter gun systems also feature ranging radar and lead computing electronic gun sights to ease the problem of aim point to compensate for projectile drop and time of flight (target lead) in the complex three dimensional maneuvering of air-to-air combat. However, getting in position to use the guns is still a challenge. The range of guns is longer than in the past but still quite limited compared to missiles, with modern gun systems having a maximum effective range of approximately 1,000 meters. High probability of kill also requires firing to usually occur from the rear hemisphere of the target. Despite these limits, when pilots are well trained in air-to-air gunnery and these conditions are satisfied, gun systems are tactically effective and highly cost efficient. The cost of a gun firing pass is far less than firing a missile, and the projectiles are not subject to the thermal and electronic countermeasures than can sometimes defeat missiles. When the enemy can be approached to within gun range, the lethality of guns is approximately a 25% to 50% chance of \"kill per firing pass\".",
"title": "Weapons"
},
{
"paragraph_id": 102,
"text": "The range limitations of guns, and the desire to overcome large variations in fighter pilot skill and thus achieve higher force effectiveness, led to the development of the guided air-to-air missile. There are two main variations, heat-seeking (infrared homing), and radar guided. Radar missiles are typically several times heavier and more expensive than heat-seekers, but with longer range, greater destructive power, and ability to track through clouds.",
"title": "Weapons"
},
{
"paragraph_id": 103,
"text": "The highly successful AIM-9 Sidewinder heat-seeking (infrared homing) short-range missile was developed by the United States Navy in the 1950s. These small missiles are easily carried by lighter fighters, and provide effective ranges of approximately 10 to 35 km (~6 to 22 miles). Beginning with the AIM-9L in 1977, subsequent versions of Sidewinder have added all-aspect capability, the ability to use the lower heat of air to skin friction on the target aircraft to track from the front and sides. The latest (2003 service entry) AIM-9X also features \"off-boresight\" and \"lock on after launch\" capabilities, which allow the pilot to make a quick launch of a missile to track a target anywhere within the pilot's vision. The AIM-9X development cost was U.S. $3 billion in mid to late 1990s dollars, and 2015 per unit procurement cost is $0.6 million each. The missile weighs 85.3 kg (188 lbs), and has a maximum range of 35 km (22 miles) at higher altitudes. Like most air-to-air missiles, lower altitude range can be as limited as only about one third of maximum due to higher drag and less ability to coast downward.",
"title": "Weapons"
},
{
"paragraph_id": 104,
"text": "The effectiveness of infrared homing missiles was only 7% early in the Vietnam War, but improved to approximately 15%–40% over the course of the war. The AIM-4 Falcon used by the USAF had kill rates of approximately 7% and was considered a failure. The AIM-9B Sidewinder introduced later achieved 15% kill rates, and the further improved AIM-9D and J models reached 19%. The AIM-9G used in the last year of the Vietnam air war achieved 40%. Israel used almost totally guns in the 1967 Six-Day War, achieving 60 kills and 10 losses. However, Israel made much more use of steadily improving heat-seeking missiles in the 1973 Yom Kippur War. In this extensive conflict Israel scored 171 of 261 total kills with heat-seeking missiles (65.5%), 5 kills with radar guided missiles (1.9%), and 85 kills with guns (32.6%). The AIM-9L Sidewinder scored 19 kills out of 26 fired missiles (73%) in the 1982 Falklands War. But, in a conflict against opponents using thermal countermeasures, the United States only scored 11 kills out of 48 fired (Pk = 23%) with the follow-on AIM-9M in the 1991 Gulf War.",
"title": "Weapons"
},
{
"paragraph_id": 105,
"text": "Radar guided missiles fall into two main missile guidance types. In the historically more common semi-active radar homing case the missile homes in on radar signals transmitted from launching aircraft and reflected from the target. This has the disadvantage that the firing aircraft must maintain radar lock on the target and is thus less free to maneuver and more vulnerable to attack. A widely deployed missile of this type was the AIM-7 Sparrow, which entered service in 1954 and was produced in improving versions until 1997. In more advanced active radar homing the missile is guided to the vicinity of the target by internal data on its projected position, and then \"goes active\" with an internally carried small radar system to conduct terminal guidance to the target. This eliminates the requirement for the firing aircraft to maintain radar lock, and thus greatly reduces risk. A prominent example is the AIM-120 AMRAAM, which was first fielded in 1991 as the AIM-7 replacement, and which has no firm retirement date as of 2016. The current AIM-120D version has a maximum high altitude range of greater than 160 km (>99 miles), and cost approximately $2.4 million each (2016). As is typical with most other missiles, range at lower altitude may be as little as one third that of high altitude.",
"title": "Weapons"
},
{
"paragraph_id": 106,
"text": "In the Vietnam air war radar missile kill reliability was approximately 10% at shorter ranges, and even worse at longer ranges due to reduced radar return and greater time for the target aircraft to detect the incoming missile and take evasive action. At one point in the Vietnam war, the U.S. Navy fired 50 AIM-7 Sparrow radar guided missiles in a row without a hit. Between 1958 and 1982 in five wars there were 2,014 combined heat-seeking and radar guided missile firings by fighter pilots engaged in air-to-air combat, achieving 528 kills, of which 76 were radar missile kills, for a combined effectiveness of 26%. However, only four of the 76 radar missile kills were in the beyond-visual-range mode intended to be the strength of radar guided missiles. The United States invested over $10 billion in air-to-air radar missile technology from the 1950s to the early 1970s. Amortized over actual kills achieved by the U.S. and its allies, each radar guided missile kill thus cost over $130 million. The defeated enemy aircraft were for the most part older MiG-17s, −19s, and −21s, with new cost of $0.3 million to $3 million each. Thus, the radar missile investment over that period far exceeded the value of enemy aircraft destroyed, and furthermore had very little of the intended BVR effectiveness.",
"title": "Weapons"
},
{
"paragraph_id": 107,
"text": "However, continuing heavy development investment and rapidly advancing electronic technology led to significant improvement in radar missile reliabilities from the late 1970s onward. Radar guided missiles achieved 75% Pk (9 kills out of 12 shots) in operations in the Gulf War in 1991. The percentage of kills achieved by radar guided missiles also surpassed 50% of total kills for the first time by 1991. Since 1991, 20 of 61 kills worldwide have been beyond-visual-range using radar missiles. Discounting an accidental friendly fire kill, in operational use the AIM-120D (the current main American radar guided missile) has achieved 9 kills out of 16 shots for a 56% Pk. Six of these kills were BVR, out of 13 shots, for a 46% BVR Pk. Though all these kills were against less capable opponents who were not equipped with operating radar, electronic countermeasures, or a comparable weapon themselves, the BVR Pk was a significant improvement from earlier eras. However, a current concern is electronic countermeasures to radar missiles, which are thought to be reducing the effectiveness of the AIM-120D. Some experts believe that as of 2016 the European Meteor missile, the Russian R-37M, and the Chinese PL-15 are more resistant to countermeasures and more effective than the AIM-120D.",
"title": "Weapons"
},
{
"paragraph_id": 108,
"text": "Now that higher reliabilities have been achieved, both types of missiles allow the fighter pilot to often avoid the risk of the short-range dogfight, where only the more experienced and skilled fighter pilots tend to prevail, and where even the finest fighter pilot can simply get unlucky. Taking maximum advantage of complicated missile parameters in both attack and defense against competent opponents does take considerable experience and skill, but against surprised opponents lacking comparable capability and countermeasures, air-to-air missile warfare is relatively simple. By partially automating air-to-air combat and reducing reliance on gun kills mostly achieved by only a small expert fraction of fighter pilots, air-to-air missiles now serve as highly effective force multipliers.",
"title": "Weapons"
},
{
"paragraph_id": 109,
"text": "",
"title": "External links"
}
]
| Fighter aircraft are military aircraft designed primarily for air-to-air combat. In military conflict, the role of fighter aircraft is to establish air superiority of the battlespace. Domination of the airspace above a battlefield permits bombers and attack aircraft to engage in tactical and strategic bombing of enemy targets. The key performance features of a fighter include not only its firepower but also its high speed and maneuverability relative to the target aircraft. The success or failure of a combatant's efforts to gain air superiority hinges on several factors including the skill of its pilots, the tactical soundness of its doctrine for deploying its fighters, and the numbers and performance of those fighters. Many modern fighter aircraft also have secondary capabilities such as ground attack and some types, such as fighter-bombers, are designed from the outset for dual roles. Other fighter designs are highly specialized while still filling the main air superiority role, and these include the interceptor, heavy fighter, and night fighter. | 2001-11-22T12:26:48Z | 2023-12-24T09:17:57Z | [
"Template:ISBN",
"Template:Cite book",
"Template:Webarchive",
"Template:Efn",
"Template:Clear",
"Template:Sfn",
"Template:See also",
"Template:Cite news",
"Template:Citation",
"Template:Cite magazine",
"Template:Convert",
"Template:Citation needed",
"Template:Main",
"Template:Redirect2",
"Template:As of",
"Template:Clarify",
"Template:Cite web",
"Template:Military aircraft types (roles)",
"Template:Authority control",
"Template:Use American English",
"Template:More citations needed section",
"Template:Sfnm",
"Template:Notelist",
"Template:Reflist",
"Template:Use dmy dates",
"Template:About",
"Template:Cvt",
"Template:Further",
"Template:Short description",
"Template:Commons category"
]
| https://en.wikipedia.org/wiki/Fighter_aircraft |
10,930 | February 25 | February 25 is the 56th day of the year in the Gregorian calendar; 309 days remain until the end of the year (310 in leap years). | [
{
"paragraph_id": 0,
"text": "February 25 is the 56th day of the year in the Gregorian calendar; 309 days remain until the end of the year (310 in leap years).",
"title": ""
}
]
| February 25 is the 56th day of the year in the Gregorian calendar; 309 days remain until the end of the year. | 2001-08-23T18:37:21Z | 2023-12-22T22:27:01Z | [
"Template:Pp-move",
"Template:This date in recent years",
"Template:Cite EB1911",
"Template:Months",
"Template:Cite book",
"Template:Cite news",
"Template:Cite web",
"Template:Cite ODNB",
"Template:Short description",
"Template:Pp-pc",
"Template:Day",
"Template:Cite magazine",
"Template:NYT On this day",
"Template:Calendar",
"Template:USS",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Hugman",
"Template:Commons"
]
| https://en.wikipedia.org/wiki/February_25 |
10,931 | Finite-state machine | A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. Finite-state machines are of two types—deterministic finite-state machines and non-deterministic finite-state machines. For any non-deterministic finite-state machine, an equivalent deterministic one can be constructed.
The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are: vending machines, which dispense products when the proper combination of coins is deposited; elevators, whose sequence of stops is determined by the floors requested by riders; traffic lights, which change sequence when cars are waiting; combination locks, which require the input of a sequence of numbers in the proper order.
The finite-state machine has less computational power than some other models of computation such as the Turing machine. The computational power distinction means there are computational tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. FSMs are studied in the more general field of automata theory.
An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry, preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted.
Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two possible inputs that affect its state: putting a coin in the slot coin and pushing the arm push. In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, it stays in the locked state. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; that is, giving additional coin inputs does not change the state. A customer pushing through the arms gives a push input and resets the state to Locked.
The turnstile state machine can be represented by a state-transition table, showing for each possible state, the transitions between them (based upon the inputs given to the machine) and the outputs resulting from each input:
The turnstile state machine can also be represented by a directed graph called a state diagram (above). Each state is represented by a node (circle). Edges (arrows) show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition. An input that doesn't cause a change of state (such as a coin input in the Unlocked state) is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates it is the initial state.
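The turnstile logic can also be captured directly in code. The following Python sketch is illustrative only; the state names, input names and dictionary-based encoding simply mirror the description above and are not taken from any particular library:

    # Transition table for the turnstile: (current state, input) -> next state.
    TRANSITIONS = {
        ("Locked", "coin"): "Unlocked",    # a coin unlocks the arms
        ("Locked", "push"): "Locked",      # pushing while locked has no effect
        ("Unlocked", "coin"): "Unlocked",  # extra coins change nothing
        ("Unlocked", "push"): "Locked",    # pushing through re-locks the turnstile
    }

    def run_turnstile(inputs, state="Locked"):
        """Feed a sequence of 'coin'/'push' inputs and return the final state."""
        for symbol in inputs:
            state = TRANSITIONS[(state, symbol)]
        return state

    print(run_turnstile(["push", "coin", "push"]))  # -> Locked
    print(run_turnstile(["coin", "coin"]))          # -> Unlocked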
A state is a description of the status of a system that is waiting to execute a transition. A transition is a set of actions to be executed when a condition is fulfilled or when an event is received. For example, when using an audio system to listen to the radio (the system is in the "radio" state), receiving a "next" stimulus results in moving to the next station. When the system is in the "CD" state, the "next" stimulus results in moving to the next track. Identical stimuli trigger different actions depending on the current state.
In some finite-state machine representations, it is also possible to associate actions with a state:
Several state-transition table types are used. The most common representation is shown below: the combination of current state (e.g. B) and input (e.g. Y) shows the next state (e.g. C). The complete information about the actions is not directly described in the table and can only be added using footnotes. An FSM definition including the full action information is possible using state tables (see also virtual finite-state machine).
The Unified Modeling Language has a notation for describing state machines. UML state machines overcome the limitations of traditional finite-state machines while retaining their main benefits. UML state machines introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines. They support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines.
The Specification and Description Language is a standard from ITU that includes graphical symbols to describe actions in the transition:
SDL embeds basic data types called "Abstract Data Types", an action language, and an execution semantic in order to make the finite-state machine executable.
There are a large number of variants to represent an FSM such as the one in figure 3.
In addition to their use in modeling reactive systems presented here, finite-state machines are significant in many different areas, including electrical engineering, linguistics, computer science, philosophy, biology, mathematics, video game programming, and logic. Finite-state machines are a class of automata studied in automata theory and the theory of computation. In computer science, finite-state machines are widely used in modeling of application behavior (control theory), design of hardware digital systems, software engineering, compilers, network protocols, and computational linguistics.
Finite-state machines can be subdivided into acceptors, classifiers, transducers and sequencers.
Acceptors (also called detectors or recognizers) produce binary output, indicating whether or not the received input is accepted. Each state of an acceptor is either accepting or non accepting. Once all input has been received, if the current state is an accepting state, the input is accepted; otherwise it is rejected. As a rule, input is a sequence of symbols (characters); actions are not used. The start state can also be an accepting state, in which case the acceptor accepts the empty string. The example in figure 4 shows an acceptor that accepts the string "nice". In this acceptor, the only accepting state is state 7.
A (possibly infinite) set of symbol sequences, called a formal language, is a regular language if there is some acceptor that accepts exactly that set. For example, the set of binary strings with an even number of zeroes is a regular language (cf. Fig. 5), while the set of all strings whose length is a prime number is not.
An acceptor could also be described as defining a language that would contain every string accepted by the acceptor but none of the rejected ones; that language is accepted by the acceptor. By definition, the languages accepted by acceptors are the regular languages.
The problem of determining the language accepted by a given acceptor is an instance of the algebraic path problem—itself a generalization of the shortest path problem to graphs with edges weighted by the elements of an (arbitrary) semiring.
An example of an accepting state appears in Fig. 5: a deterministic finite automaton (DFA) that detects whether the binary input string contains an even number of 0s.
S1 (which is also the start state) indicates the state at which an even number of 0s has been input. S1 is therefore an accepting state. This acceptor will finish in an accept state, if the binary string contains an even number of 0s (including any binary string containing no 0s). Examples of strings accepted by this acceptor are ε (the empty string), 1, 11, 11..., 00, 010, 1010, 10110, etc.
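The acceptor of Fig. 5 can be written out as a short Python sketch; the state names S1 and S2 follow the figure, while the dictionary encoding is just one convenient representation rather than a standard API:

    # DFA accepting binary strings with an even number of 0s.
    # S1: even number of 0s seen so far (start and accepting state).
    # S2: odd number of 0s seen so far.
    DELTA = {
        ("S1", "0"): "S2", ("S1", "1"): "S1",
        ("S2", "0"): "S1", ("S2", "1"): "S2",
    }

    def accepts(string):
        state = "S1"
        for ch in string:
            state = DELTA[(state, ch)]
        return state == "S1"

    assert accepts("")       # zero 0s is an even number of 0s
    assert accepts("1010")
    assert accepts("10110")
    assert not accepts("10")  # a single 0 is rejected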
Classifiers are a generalization of acceptors that produce n-ary output where n is strictly greater than two.
Transducers produce output based on a given input and/or a state using actions. They are used for control applications and in the field of computational linguistics.
In control applications, two types are distinguished:
Sequencers (also called generators) are a subclass of acceptors and transducers that have a single-letter input alphabet. They produce only one sequence, which can be seen as an output sequence of acceptor or transducer outputs.
A further distinction is between deterministic (DFA) and non-deterministic (NFA, GNFA) automata. In a deterministic automaton, every state has exactly one transition for each possible input. In a non-deterministic automaton, an input can lead to one, more than one, or no transition for a given state. The powerset construction algorithm can transform any nondeterministic automaton into a (usually more complex) deterministic automaton with identical functionality.
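As a rough illustration of the powerset construction, the Python sketch below converts an NFA (given as a dictionary mapping (state, symbol) pairs to sets of states) into a DFA whose states are sets of NFA states. It assumes there are no epsilon transitions, and the example NFA (accepting strings that end in "01") is invented for the purpose:

    from itertools import chain

    def nfa_to_dfa(nfa_delta, start, accepting, alphabet):
        """Subset construction: DFA states are frozensets of NFA states."""
        start_set = frozenset([start])
        dfa_delta, todo, seen = {}, [start_set], {start_set}
        while todo:
            current = todo.pop()
            for symbol in alphabet:
                nxt = frozenset(chain.from_iterable(
                    nfa_delta.get((q, symbol), ()) for q in current))
                dfa_delta[(current, symbol)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        dfa_accepting = {s for s in seen if s & accepting}
        return dfa_delta, start_set, dfa_accepting

    def dfa_accepts(delta, state, accepting, string):
        for ch in string:
            state = delta[(state, ch)]
        return state in accepting

    # NFA over {0, 1} accepting strings that end in "01".
    nfa = {("a", "0"): {"a", "b"}, ("a", "1"): {"a"}, ("b", "1"): {"c"}}
    delta, s0, acc = nfa_to_dfa(nfa, "a", {"c"}, "01")
    print(dfa_accepts(delta, s0, acc, "1101"))  # -> True
    print(dfa_accepts(delta, s0, acc, "0110"))  # -> False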
A finite-state machine with only one state is called a "combinatorial FSM". It only allows actions upon transition into a state. This concept is useful in cases where a number of finite-state machines are required to work together, and when it is convenient to consider a purely combinatorial part as a form of FSM to suit the design tools.
There are other sets of semantics available to represent state machines. For example, there are tools for modeling and designing logic for embedded controllers. They combine hierarchical state machines (which usually have more than one current state), flow graphs, and truth tables into one language, resulting in a different formalism and set of semantics. These charts, like Harel's original state machines, support hierarchically nested states, orthogonal regions, state actions, and transition actions.
In accordance with the general classification, the following formal definitions are found.
A deterministic finite-state machine or deterministic finite-state acceptor is a quintuple (Σ, S, s0, δ, F), where:
For both deterministic and non-deterministic FSMs, it is conventional to allow δ to be a partial function, i.e. δ(s, x) does not have to be defined for every combination of s ∈ S and x ∈ Σ. If an FSM M is in a state s, the next symbol is x and δ(s, x) is not defined, then M can announce an error (i.e. reject the input). This is useful in definitions of general state machines, but less useful when transforming the machine. Some algorithms in their default form may require total functions.
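A direct transcription of this definition into Python might look like the sketch below, which encodes the "nice" acceptor of figure 4 with a partial transition function (the state numbering 0–4 is chosen here for brevity and differs from the figure):

    SIGMA = {"n", "i", "c", "e"}                                  # input alphabet
    S = {0, 1, 2, 3, 4}                                           # finite set of states
    S0 = 0                                                        # initial state
    DELTA = {(0, "n"): 1, (1, "i"): 2, (2, "c"): 3, (3, "e"): 4}  # partial delta
    F = {4}                                                       # accepting states

    def accepts(word):
        state = S0
        for symbol in word:
            if (state, symbol) not in DELTA:   # undefined transition:
                return False                   # reject the input
            state = DELTA[(state, symbol)]
        return state in F

    print(accepts("nice"))  # -> True
    print(accepts("nil"))   # -> False: no transition is defined for (2, 'l')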
A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. That is, each formal language accepted by a finite-state machine is accepted by such a kind of restricted Turing machine, and vice versa.
A finite-state transducer is a sextuple (Σ, Γ, S, s0, δ, ω), where:
If the output function depends on the state and input symbol (ω : S × Σ → Γ) that definition corresponds to the Mealy model, and can be modelled as a Mealy machine. If the output function depends only on the state (ω : S → Γ) that definition corresponds to the Moore model, and can be modelled as a Moore machine. A finite-state machine with no output function at all is known as a semiautomaton or transition system.
If we disregard the first output symbol of a Moore machine, ω(s0), then it can be readily converted to an output-equivalent Mealy machine by setting the output function of every Mealy transition (i.e. labeling every edge) with the output symbol of the destination Moore state. The converse transformation is less straightforward because a Mealy machine state may have different output labels on its incoming transitions (edges). Every such state needs to be split into multiple Moore machine states, one for every incident output symbol.
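The conversion can be sketched mechanically in Python; the small parity-tracking machine used here is invented for illustration, and the output of each Mealy transition is simply the Moore output of the destination state:

    # Moore machine: output depends on the state only.
    MOORE_DELTA = {("even", "0"): "even", ("even", "1"): "odd",
                   ("odd", "0"): "even", ("odd", "1"): "odd"}
    MOORE_OUT = {"even": "E", "odd": "O"}

    # Derived Mealy machine: label every edge with the destination's output.
    MEALY_DELTA = dict(MOORE_DELTA)
    MEALY_OUT = {key: MOORE_OUT[dest] for key, dest in MOORE_DELTA.items()}

    def run_mealy(inputs, state="even"):
        """Return the output string produced while consuming the inputs."""
        outputs = []
        for symbol in inputs:
            outputs.append(MEALY_OUT[(state, symbol)])
            state = MEALY_DELTA[(state, symbol)]
        return "".join(outputs)

    # Apart from the Moore machine's initial output for the start state,
    # both machines emit the same output stream.
    print(run_mealy("1101"))  # -> "OOEO"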
Optimizing an FSM means finding a machine with the minimum number of states that performs the same function. The fastest known algorithm doing this is the Hopcroft minimization algorithm. Other techniques include using an implication table, or the Moore reduction procedure. Additionally, acyclic FSAs can be minimized in linear time.
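The idea behind these procedures can be sketched as a simple partition refinement, shown below in Python; this is the naive scheme underlying the Moore reduction procedure rather than Hopcroft's faster algorithm, and the four-state example DFA is invented (states B and C are equivalent and end up merged):

    def minimize(states, alphabet, delta, accepting):
        """Group equivalent states of a complete DFA by partition refinement."""
        partition = [block for block in (accepting, states - accepting) if block]
        changed = True
        while changed:
            changed = False
            refined = []
            for block in partition:
                groups = {}
                for q in block:
                    # Signature: which block each input symbol leads to.
                    sig = tuple(next(i for i, b in enumerate(partition)
                                     if delta[(q, a)] in b) for a in alphabet)
                    groups.setdefault(sig, set()).add(q)
                refined.extend(groups.values())
                if len(groups) > 1:
                    changed = True
            partition = refined
        return partition

    states = {"A", "B", "C", "D"}
    delta = {("A", "0"): "B", ("A", "1"): "C", ("B", "0"): "B", ("B", "1"): "D",
             ("C", "0"): "B", ("C", "1"): "D", ("D", "0"): "B", ("D", "1"): "D"}
    print(minimize(states, "01", delta, accepting={"D"}))  # B and C end up in one block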
In a digital circuit, an FSM may be built using a programmable logic device, a programmable logic controller, logic gates and flip flops or relays. More specifically, a hardware implementation requires a register to store state variables, a block of combinational logic that determines the state transition, and a second block of combinational logic that determines the output of an FSM. One of the classic hardware implementations is the Richards controller.
In a Medvedev machine, the output is directly connected to the state flip-flops minimizing the time delay between flip-flops and output.
Through state encoding for low power, state machines may be optimized to minimize power consumption.
The following concepts are commonly used to build software applications with finite-state machines:
Finite automata are often used in the frontend of programming language compilers. Such a frontend may comprise several finite-state machines that implement a lexical analyzer and a parser. Starting from a sequence of characters, the lexical analyzer builds a sequence of language tokens (such as reserved words, literals, and identifiers) from which the parser builds a syntax tree. The lexical analyzer and the parser handle the regular and context-free parts of the programming language's grammar.
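A toy illustration in Python of an FSM-style lexer is given below; it recognizes only whitespace-separated numbers and identifiers and is invented for this article rather than taken from any real compiler (real frontends typically generate such automata from regular expressions with tools like lex):

    def tokenize(text):
        """Split whitespace-separated numbers and identifiers into tokens."""
        tokens, state, current = [], "start", ""
        for ch in text + " ":              # trailing space flushes the last token
            if state == "start":
                if ch.isdigit():
                    state, current = "number", ch
                elif ch.isalpha():
                    state, current = "identifier", ch
            elif state == "number":
                if ch.isdigit():
                    current += ch
                else:
                    tokens.append(("NUMBER", current))
                    state, current = "start", ""
            elif state == "identifier":
                if ch.isalnum():
                    current += ch
                else:
                    tokens.append(("IDENT", current))
                    state, current = "start", ""
        return tokens

    print(tokenize("x1 42 foo"))
    # -> [('IDENT', 'x1'), ('NUMBER', '42'), ('IDENT', 'foo')]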
Finite Markov-chain processes are also known as subshifts of finite type. | [
{
"paragraph_id": 0,
"text": "A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. Finite-state machines are of two types—deterministic finite-state machines and non-deterministic finite-state machines. For any non-deterministic finite-state machine, an equivalent deterministic one can be constructed.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are: vending machines, which dispense products when the proper combination of coins is deposited; elevators, whose sequence of stops is determined by the floors requested by riders; traffic lights, which change sequence when cars are waiting; combination locks, which require the input of a sequence of numbers in the proper order.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The finite-state machine has less computational power than some other models of computation such as the Turing machine. The computational power distinction means there are computational tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform \"read\" operations, and always has to move from left to right. FSMs are studied in the more general field of automata theory.",
"title": ""
},
{
"paragraph_id": 3,
"text": "An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry, preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted.",
"title": "Example: coin-operated turnstile"
},
{
"paragraph_id": 4,
"text": "Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two possible inputs that affect its state: putting a coin in the slot coin and pushing the arm push. In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, it stays in the locked state. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; that is, giving additional coin inputs does not change the state. A customer pushing through the arms gives a push input and resets the state to Locked.",
"title": "Example: coin-operated turnstile"
},
{
"paragraph_id": 5,
"text": "The turnstile state machine can be represented by a state-transition table, showing for each possible state, the transitions between them (based upon the inputs given to the machine) and the outputs resulting from each input:",
"title": "Example: coin-operated turnstile"
},
{
"paragraph_id": 6,
"text": "The turnstile state machine can also be represented by a directed graph called a state diagram (above). Each state is represented by a node (circle). Edges (arrows) show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition. An input that doesn't cause a change of state (such as a coin input in the Unlocked state) is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates it is the initial state.",
"title": "Example: coin-operated turnstile"
},
{
"paragraph_id": 7,
"text": "A state is a description of the status of a system that is waiting to execute a transition. A transition is a set of actions to be executed when a condition is fulfilled or when an event is received. For example, when using an audio system to listen to the radio (the system is in the \"radio\" state), receiving a \"next\" stimulus results in moving to the next station. When the system is in the \"CD\" state, the \"next\" stimulus results in moving to the next track. Identical stimuli trigger different actions depending on the current state.",
"title": "Concepts and terminology"
},
{
"paragraph_id": 8,
"text": "In some finite-state machine representations, it is also possible to associate actions with a state:",
"title": "Concepts and terminology"
},
{
"paragraph_id": 9,
"text": "Several state-transition table types are used. The most common representation is shown below: the combination of current state (e.g. B) and input (e.g. Y) shows the next state (e.g. C). The complete action's information is not directly described in the table and can only be added using footnotes. An FSM definition including the full action's information is possible using state tables (see also virtual finite-state machine).",
"title": "Representations"
},
{
"paragraph_id": 10,
"text": "The Unified Modeling Language has a notation for describing state machines. UML state machines overcome the limitations of traditional finite-state machines while retaining their main benefits. UML state machines introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines. They support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines.",
"title": "Representations"
},
{
"paragraph_id": 11,
"text": "The Specification and Description Language is a standard from ITU that includes graphical symbols to describe actions in the transition:",
"title": "Representations"
},
{
"paragraph_id": 12,
"text": "SDL embeds basic data types called \"Abstract Data Types\", an action language, and an execution semantic in order to make the finite-state machine executable.",
"title": "Representations"
},
{
"paragraph_id": 13,
"text": "There are a large number of variants to represent an FSM such as the one in figure 3.",
"title": "Representations"
},
{
"paragraph_id": 14,
"text": "In addition to their use in modeling reactive systems presented here, finite-state machines are significant in many different areas, including electrical engineering, linguistics, computer science, philosophy, biology, mathematics, video game programming, and logic. Finite-state machines are a class of automata studied in automata theory and the theory of computation. In computer science, finite-state machines are widely used in modeling of application behavior (control theory), design of hardware digital systems, software engineering, compilers, network protocols, and computational linguistics.",
"title": "Usage"
},
{
"paragraph_id": 15,
"text": "Finite-state machines can be subdivided into acceptors, classifiers, transducers and sequencers.",
"title": "Classification"
},
{
"paragraph_id": 16,
"text": "Acceptors (also called detectors or recognizers) produce binary output, indicating whether or not the received input is accepted. Each state of an acceptor is either accepting or non accepting. Once all input has been received, if the current state is an accepting state, the input is accepted; otherwise it is rejected. As a rule, input is a sequence of symbols (characters); actions are not used. The start state can also be an accepting state, in which case the acceptor accepts the empty string. The example in figure 4 shows an acceptor that accepts the string \"nice\". In this acceptor, the only accepting state is state 7.",
"title": "Classification"
},
{
"paragraph_id": 17,
"text": "A (possibly infinite) set of symbol sequences, called a formal language, is a regular language if there is some acceptor that accepts exactly that set. For example, the set of binary strings with an even number of zeroes is a regular language (cf. Fig. 5), while the set of all strings whose length is a prime number is not.",
"title": "Classification"
},
{
"paragraph_id": 18,
"text": "An acceptor could also be described as defining a language that would contain every string accepted by the acceptor but none of the rejected ones; that language is accepted by the acceptor. By definition, the languages accepted by acceptors are the regular languages.",
"title": "Classification"
},
{
"paragraph_id": 19,
"text": "The problem of determining the language accepted by a given acceptor is an instance of the algebraic path problem—itself a generalization of the shortest path problem to graphs with edges weighted by the elements of an (arbitrary) semiring.",
"title": "Classification"
},
{
"paragraph_id": 20,
"text": "An example of an accepting state appears in Fig. 5: a deterministic finite automaton (DFA) that detects whether the binary input string contains an even number of 0s.",
"title": "Classification"
},
{
"paragraph_id": 21,
"text": "S1 (which is also the start state) indicates the state at which an even number of 0s has been input. S1 is therefore an accepting state. This acceptor will finish in an accept state, if the binary string contains an even number of 0s (including any binary string containing no 0s). Examples of strings accepted by this acceptor are ε (the empty string), 1, 11, 11..., 00, 010, 1010, 10110, etc.",
"title": "Classification"
},
{
"paragraph_id": 22,
"text": "Classifiers are a generalization of acceptors that produce n-ary output where n is strictly greater than two.",
"title": "Classification"
},
{
"paragraph_id": 23,
"text": "Transducers produce output based on a given input and/or a state using actions. They are used for control applications and in the field of computational linguistics.",
"title": "Classification"
},
{
"paragraph_id": 24,
"text": "In control applications, two types are distinguished:",
"title": "Classification"
},
{
"paragraph_id": 25,
"text": "Sequencers (also called generators) are a subclass of acceptors and transducers that have a single-letter input alphabet. They produce only one sequence, which can be seen as an output sequence of acceptor or transducer outputs.",
"title": "Classification"
},
{
"paragraph_id": 26,
"text": "A further distinction is between deterministic (DFA) and non-deterministic (NFA, GNFA) automata. In a deterministic automaton, every state has exactly one transition for each possible input. In a non-deterministic automaton, an input can lead to one, more than one, or no transition for a given state. The powerset construction algorithm can transform any nondeterministic automaton into a (usually more complex) deterministic automaton with identical functionality.",
"title": "Classification"
},
{
"paragraph_id": 27,
"text": "A finite-state machine with only one state is called a \"combinatorial FSM\". It only allows actions upon transition into a state. This concept is useful in cases where a number of finite-state machines are required to work together, and when it is convenient to consider a purely combinatorial part as a form of FSM to suit the design tools.",
"title": "Classification"
},
{
"paragraph_id": 28,
"text": "There are other sets of semantics available to represent state machines. For example, there are tools for modeling and designing logic for embedded controllers. They combine hierarchical state machines (which usually have more than one current state), flow graphs, and truth tables into one language, resulting in a different formalism and set of semantics. These charts, like Harel's original state machines, support hierarchically nested states, orthogonal regions, state actions, and transition actions.",
"title": "Alternative semantics"
},
{
"paragraph_id": 29,
"text": "In accordance with the general classification, the following formal definitions are found.",
"title": "Mathematical model"
},
{
"paragraph_id": 30,
"text": "A deterministic finite-state machine or deterministic finite-state acceptor is a quintuple ( Σ , S , s 0 , δ , F ) {\\displaystyle (\\Sigma ,S,s_{0},\\delta ,F)} , where:",
"title": "Mathematical model"
},
{
"paragraph_id": 31,
"text": "For both deterministic and non-deterministic FSMs, it is conventional to allow δ {\\displaystyle \\delta } to be a partial function, i.e. δ ( s , x ) {\\displaystyle \\delta (s,x)} does not have to be defined for every combination of s ∈ S {\\displaystyle s\\in S} and x ∈ Σ {\\displaystyle x\\in \\Sigma } . If an FSM M {\\displaystyle M} is in a state s {\\displaystyle s} , the next symbol is x {\\displaystyle x} and δ ( s , x ) {\\displaystyle \\delta (s,x)} is not defined, then M {\\displaystyle M} can announce an error (i.e. reject the input). This is useful in definitions of general state machines, but less useful when transforming the machine. Some algorithms in their default form may require total functions.",
"title": "Mathematical model"
},
{
"paragraph_id": 32,
"text": "A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform \"read\" operations, and always has to move from left to right. That is, each formal language accepted by a finite-state machine is accepted by such a kind of restricted Turing machine, and vice versa.",
"title": "Mathematical model"
},
{
"paragraph_id": 33,
"text": "A finite-state transducer is a sextuple ( Σ , Γ , S , s 0 , δ , ω ) {\\displaystyle (\\Sigma ,\\Gamma ,S,s_{0},\\delta ,\\omega )} , where:",
"title": "Mathematical model"
},
{
"paragraph_id": 34,
"text": "If the output function depends on the state and input symbol ( ω : S × Σ → Γ {\\displaystyle \\omega :S\\times \\Sigma \\rightarrow \\Gamma } ) that definition corresponds to the Mealy model, and can be modelled as a Mealy machine. If the output function depends only on the state ( ω : S → Γ {\\displaystyle \\omega :S\\rightarrow \\Gamma } ) that definition corresponds to the Moore model, and can be modelled as a Moore machine. A finite-state machine with no output function at all is known as a semiautomaton or transition system.",
"title": "Mathematical model"
},
{
"paragraph_id": 35,
"text": "If we disregard the first output symbol of a Moore machine, ω ( s 0 ) {\\displaystyle \\omega (s_{0})} , then it can be readily converted to an output-equivalent Mealy machine by setting the output function of every Mealy transition (i.e. labeling every edge) with the output symbol given of the destination Moore state. The converse transformation is less straightforward because a Mealy machine state may have different output labels on its incoming transitions (edges). Every such state needs to be split in multiple Moore machine states, one for every incident output symbol.",
"title": "Mathematical model"
},
{
"paragraph_id": 36,
"text": "Optimizing an FSM means finding a machine with the minimum number of states that performs the same function. The fastest known algorithm doing this is the Hopcroft minimization algorithm. Other techniques include using an implication table, or the Moore reduction procedure. Additionally, acyclic FSAs can be minimized in linear time.",
"title": "Optimization"
},
{
"paragraph_id": 37,
"text": "In a digital circuit, an FSM may be built using a programmable logic device, a programmable logic controller, logic gates and flip flops or relays. More specifically, a hardware implementation requires a register to store state variables, a block of combinational logic that determines the state transition, and a second block of combinational logic that determines the output of an FSM. One of the classic hardware implementations is the Richards controller.",
"title": "Implementation"
},
{
"paragraph_id": 38,
"text": "In a Medvedev machine, the output is directly connected to the state flip-flops minimizing the time delay between flip-flops and output.",
"title": "Implementation"
},
{
"paragraph_id": 39,
"text": "Through state encoding for low power state machines may be optimized to minimize power consumption.",
"title": "Implementation"
},
{
"paragraph_id": 40,
"text": "The following concepts are commonly used to build software applications with finite-state machines:",
"title": "Implementation"
},
{
"paragraph_id": 41,
"text": "Finite automata are often used in the frontend of programming language compilers. Such a frontend may comprise several finite-state machines that implement a lexical analyzer and a parser. Starting from a sequence of characters, the lexical analyzer builds a sequence of language tokens (such as reserved words, literals, and identifiers) from which the parser builds a syntax tree. The lexical analyzer and the parser handle the regular and context-free parts of the programming language's grammar.",
"title": "Implementation"
},
{
"paragraph_id": 42,
"text": "Finite Markov-chain processes are also known as subshifts of finite type.",
"title": "Further reading"
}
]
| A finite-state machine (FSM) or finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. Finite-state machines are of two types—deterministic finite-state machines and non-deterministic finite-state machines. For any non-deterministic finite-state machine, an equivalent deterministic one can be constructed. The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are: vending machines, which dispense products when the proper combination of coins is deposited; elevators, whose sequence of stops is determined by the floors requested by riders; traffic lights, which change sequence when cars are waiting; combination locks, which require the input of a sequence of numbers in the proper order. The finite-state machine has less computational power than some other models of computation such as the Turing machine. The computational power distinction means there are computational tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. FSMs are studied in the more general field of automata theory. | 2001-08-01T21:35:22Z | 2023-11-29T14:21:54Z | [
"Template:Redirect",
"Template:Diagonal split header",
"Template:Rp",
"Template:ISBN",
"Template:Authority control",
"Template:Short description",
"Template:Use dmy dates",
"Template:Automata theory",
"Template:Explain",
"Template:Main",
"Template:Cite web",
"Template:Webarchive",
"Template:Digital systems",
"Template:Hatnote",
"Template:Technical inline",
"Template:Div col",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal",
"Template:Dead link",
"Template:Curlie",
"Template:Citation needed",
"Template:Div col end",
"Template:Cite conference",
"Template:Cite report",
"Template:Formal languages and grammars"
]
| https://en.wikipedia.org/wiki/Finite-state_machine |
10,933 | Functional programming | In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program.
In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner.
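For illustration, the following Python sketch treats functions as values that can be bound to names, passed as arguments, returned from other functions and composed; the helper names are invented for the example:

    def compose(f, g):
        """Return a new function that applies g first and then f."""
        return lambda x: f(g(x))

    def increment(x):
        return x + 1

    def double(x):
        return x * 2

    # Functions bound to names, passed as arguments and returned as results.
    increment_then_double = compose(double, increment)
    print(increment_then_double(3))                     # -> 8
    print(list(map(increment_then_double, [1, 2, 3])))  # -> [4, 6, 8]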
Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming which treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification.
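The contrast can be illustrated with a small Python sketch (the function and variable names are invented): the pure version depends only on its arguments, while the impure version mutates external state and its argument.

    def pure_append(items, value):
        """Pure: returns a new list and never modifies its arguments,
        so the same inputs always produce the same result."""
        return items + [value]

    calls = 0

    def impure_append(items, value):
        """Impure: mutates a global counter and the caller's list."""
        global calls
        calls += 1            # side effect on external state
        items.append(value)   # side effect on the argument
        return items

    xs = [1, 2]
    print(pure_append(xs, 3), xs)    # -> [1, 2, 3] [1, 2]      (xs unchanged)
    print(impure_append(xs, 3), xs)  # -> [1, 2, 3] [1, 2, 3]   (xs mutated)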
Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8).
The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s.
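To give a flavour of computation built purely from function application, the sketch below renders the standard Church-numeral encoding of the lambda calculus in Python; the encoding itself is textbook material, while the Python spelling is only illustrative:

    # A Church numeral encodes n as "apply a function f to x exactly n times".
    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(church):
        """Interpret a Church numeral by counting applications of +1."""
        return church(lambda k: k + 1)(0)

    ONE = SUCC(ZERO)
    TWO = SUCC(ONE)
    print(to_int(TWO))            # -> 2
    print(to_int(ADD(TWO)(TWO)))  # -> 4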
Church later developed a weaker system, the simply-typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming.
The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.
Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features.
Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q.
In the mid-1960s, Peter Landin invented the SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language.
John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.
The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene Recursion Equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML.
In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming.
In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages.
The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. Because Miranda was proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementations have been released since 1990.
More recently, functional programming has found use in niches such as parametric CAD via the OpenSCAD language built on the CGAL framework, although OpenSCAD's restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept.
Functional programming continues to be used in commercial settings.
A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.
Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d/dx, which returns the derivative of a function f.
Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values).
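For illustration, the following minimal Haskell sketch (the names twice and main are chosen for this example) defines a higher-order function: it takes a function as an argument and returns a new function as its result.

    -- twice takes a function f and returns a new function that applies f two times.
    twice :: (a -> a) -> (a -> a)
    twice f = f . f

    main :: IO ()
    main = print (twice (+ 3) 10)   -- prints 16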
Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.
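A minimal Haskell sketch of this idea (illustrative only; the names add and successor are hypothetical): addition is curried, so partially applying it to 1 yields the successor function described above.

    add :: Integer -> Integer -> Integer
    add x y = x + y

    -- Partial application: supplying only the first argument returns a new function.
    successor :: Integer -> Integer
    successor = add 1

    main :: IO ()
    main = print (successor 41)   -- prints 42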
Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code:
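Because a pure function's result depends only on its arguments, calls to it can safely be memoised, reordered, or removed as common subexpressions. The following Haskell sketch (illustrative, with hypothetical names) contrasts a pure function with an action that depends on mutable state.

    import Data.IORef

    -- Pure: the result depends only on the argument, so square 5 is always 25.
    square :: Int -> Int
    square x = x * x

    -- Impure: the result also depends on (and changes) mutable state.
    nextCounter :: IORef Int -> IO Int
    nextCounter ref = do
      n <- readIORef ref
      writeIORef ref (n + 1)
      return n

    main :: IO ()
    main = do
      print (square 5, square 5)   -- always (25,25)
      ref <- newIORef 0
      a <- nextCounter ref
      b <- nextCounter ref
      print (a, b)                 -- (0,1): the two calls are not interchangeable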
While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added constexpr keyword with similar semantics.
Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space linearly proportional to the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches.
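As a sketch of the difference (in Haskell, with hypothetical names), sumTo below is naively recursive, while sumTo' keeps the recursive call in tail position so that it can be compiled into a constant-space loop.

    {-# LANGUAGE BangPatterns #-}

    -- Naive recursion: each call must wait for the next, so the stack grows with n.
    sumTo :: Integer -> Integer
    sumTo 0 = 0
    sumTo n = n + sumTo (n - 1)

    -- Tail recursion: the recursive call is the last step; the bang keeps the
    -- accumulator evaluated so no chain of deferred additions builds up.
    sumTo' :: Integer -> Integer
    sumTo' = go 0
      where
        go !acc 0 = acc
        go !acc n = go (acc + n) (n - 1)

    main :: IO ()
    main = print (sumTo' 1000000)   -- 500000500000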
The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop and doing so would be safe-for-space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will claim space back, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop.
Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages.
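For example, in Haskell a fold collapses a list and an unfold builds one, replacing explicit recursion (a minimal sketch with hypothetical names):

    import Data.List (unfoldr)

    -- Catamorphism: foldr replaces an explicit recursive loop over the list.
    total :: [Int] -> Int
    total = foldr (+) 0

    -- Anamorphism: unfoldr builds the list [n, n-1 .. 1] from a seed value.
    countdown :: Int -> [Int]
    countdown = unfoldr step
      where
        step 0 = Nothing
        step k = Just (k, k - 1)

    main :: IO ()
    main = print (total (countdown 10))   -- 55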
Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming.
Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression:
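(The expression itself is not preserved in this text; a representative version, rendered here in Haskell, is the following.)

    length [2 + 1, 3 * 2, 1 `div` 0, 5 - 4]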
fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself.
The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell.
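Lazy evaluation also allows definitions that would never terminate under strict evaluation, such as the following Haskell sketch, which takes the first five even numbers from an infinite list (the names are illustrative):

    naturals :: [Integer]
    naturals = [0 ..]   -- an infinite list; only the demanded prefix is ever computed

    main :: IO ()
    main = print (take 5 (filter even naturals))   -- [0,2,4,6,8]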
Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.
Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use a typed lambda calculus, which rejects all invalid programs at compilation time at the risk of false positive errors (rejecting some valid programs). This contrasts with the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all valid programs at compilation time at the risk of false negative errors (accepting some invalid programs), rejecting invalid programs only at runtime, when enough information is available to avoid rejecting valid programs. The use of algebraic datatypes makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases.
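A short Haskell sketch of an algebraic data type (the names are illustrative): the compiler checks at compilation time that every use of a Shape is well typed, and the type of area could also be inferred if its annotation were omitted.

    data Shape
      = Circle Double             -- radius
      | Rectangle Double Double   -- width and height

    area :: Shape -> Double
    area (Circle r)      = pi * r * r
    area (Rectangle w h) = w * h

    main :: IO ()
    main = mapM_ (print . area) [Circle 1.0, Rectangle 2.0 3.0]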
Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the C programming language that is written in Coq and formally verified.
A limited form of dependent types called generalized algebraic data types (GADT's) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADT's are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#.
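As a sketch of what GADTs allow (Haskell with the GADTs extension of the Glasgow Haskell Compiler; the names are illustrative), each constructor below fixes the type index, so ill-formed expressions such as adding booleans are rejected at compilation time.

    {-# LANGUAGE GADTs #-}

    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    -- The return type refines with each constructor, so no run-time tagging is needed.
    eval :: Expr a -> a
    eval (IntLit n)  = n
    eval (BoolLit b) = b
    eval (Add x y)   = eval x + eval y
    eval (If c t e)  = if eval c then eval t else eval e

    main :: IO ()
    main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))   -- 3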
Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chance of side effects, because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent.
Consider the C assignment statement x = x * 10; this changes the value assigned to the variable x. Let us say that the initial value of x was 1; then two consecutive evaluations of the variable x yield 10 and 100 respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent.
Now consider another function, such as int plusone(int x) {return x + 1;}, which is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent.
Purely functional data structures are often represented differently from their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit a purely functional implementation but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating: calling the insert method creates new nodes only along the updated path, while the rest of the structure is shared with the previous version.
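The effect of persistence can be sketched in Haskell with the standard Data.Map container (assuming only the containers library that ships with GHC): inserting into a map returns a new map and leaves the old one unchanged, with the two versions sharing most of their structure.

    import qualified Data.Map.Strict as Map

    main :: IO ()
    main = do
      let m0 = Map.fromList [(1, "one"), (2, "two")]
          m1 = Map.insert 3 "three" m0     -- returns a new map; m0 is untouched
      print (Map.toList m0)   -- [(1,"one"),(2,"two")]
      print (Map.toList m1)   -- [(1,"one"),(2,"two"),(3,"three")]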
Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side-effects and provides referential transparency.
Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item.
The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable "result".
Traditional Imperative Loop:
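(The code block itself is not preserved in this text; a reconstruction consistent with the description, using an assumed input array of the numbers 1 through 10, might look like the following.)

    const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
    let result = 0;
    for (let i = 0; i < numList.length; i++) {
      if (numList[i] % 2 === 0) {        // keep only the even numbers
        result += numList[i] * 10;       // multiply by 10 and accumulate
      }
    }
    // result is now 300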
Functional Programming with higher-order functions:
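(Again, the original block is not preserved; a reconstruction over the same assumed input might be the following.)

    const result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
      .filter(n => n % 2 === 0)          // keep only the even numbers
      .map(n => n * 10)                  // multiply each by 10
      .reduce((acc, n) => acc + n, 0);   // sum them; result is 300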
There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way.
The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).
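A small Haskell sketch of the monadic style (using the simple Maybe monad for computations that may fail, rather than state or I/O; the names are illustrative): do notation sequences the steps, and a failure at any step propagates without explicit checks.

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    calc :: Int -> Int -> Int -> Maybe Int
    calc a b c = do
      x <- safeDiv a b
      safeDiv x c

    main :: IO ()
    main = do
      print (calc 100 5 2)   -- Just 10
      print (calc 100 0 2)   -- Nothing: the first failure short-circuits the rest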
Functional languages also simulate states by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged.
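For instance, the bank-account task mentioned above can be sketched in Haskell by threading the balance through each function explicitly (the names are hypothetical):

    type Balance = Integer

    deposit :: Integer -> Balance -> (String, Balance)
    deposit amount balance = ("deposited " ++ show amount, balance + amount)

    withdraw :: Integer -> Balance -> (String, Balance)
    withdraw amount balance = ("withdrew " ++ show amount, balance - amount)

    main :: IO ()
    main = do
      let b0 = 100
          (msg1, b1) = deposit 50 b0
          (msg2, b2) = withdraw 30 b1
      mapM_ putStrLn [msg1, msg2]
      print (b0, b2)   -- (100,120): the old balance is unchanged, the new one reflects both steps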
Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.
Alternative methods such as Hoare logic and uniqueness have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.
Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.
Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion.
Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation making extensive use of dereferenced code and data perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles).
It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions.
JavaScript, Lua, Python and Go had first class functions from their inception. Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin.
In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style.
In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements to closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes.
In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#.
Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold.
Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript.
Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations. For example, the function mother(X) = Y (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program:
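(The program itself is not preserved in this text; a small stand-in, with hypothetical names and the argument order mother(Child, Mother) used above, might be the following.)

    mother(charles, elizabeth).
    mother(william, diana).
    mother(harry, diana).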
The program can be queried, like a functional program, to generate mothers from children:
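(Against the stand-in program above, such a query and its answer might look like this.)

    ?- mother(harry, M).
    M = diana.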
But it can also be queried backwards, to generate children:
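(For example, asking which children have a given mother.)

    ?- mother(Child, diana).
    Child = william ;
    Child = harry.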
It can even be used to generate all instances of the mother relation:
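(Leaving both arguments as variables enumerates every fact in the stand-in program.)

    ?- mother(X, Y).
    X = charles, Y = elizabeth ;
    X = william, Y = diana ;
    X = harry,   Y = diana.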
Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form:
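(The definition is not preserved in this text; based on the description, it would be along these lines.)

    maternal_grandmother(X) = mother(mother(X))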
The same definition in relational notation needs to be written in the unnested form:
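(Likewise a reconstruction, using the mother(Child, Mother) relation sketched above.)

    maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).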
Here the symbol :- means if, and the comma (,) means and.
However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming:
Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy.
Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far these remain primarily academic in nature.
Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming.
Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming.
Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure.
Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory.
Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods.
Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics. | [
{
"paragraph_id": 0,
"text": "In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming which treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8).",
"title": ""
},
{
"paragraph_id": 4,
"text": "The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Church later developed a weaker system, the simply-typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In the mid-1960s, Peter Landin invented SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "John Backus presented FP in his 1977 Turing Award lecture \"Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs\". He defines functional programs as being built up in a hierarchical way by means of \"combining forms\" that allow an \"algebra of programs\"; in modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene Recursion Equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The lazy functional language, Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing as of 1990.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "More recently it has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Functional programming continues to be used in commercial settings.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using \"mostly imperative\" languages may have utilized some of these concepts.",
"title": "Concepts"
},
{
"paragraph_id": 18,
"text": "Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d / d x {\\displaystyle d/dx} , which returns the derivative of a function f {\\displaystyle f} .",
"title": "Concepts"
},
{
"paragraph_id": 19,
"text": "Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: \"higher-order\" describes a mathematical concept of functions that operate on other functions, while \"first-class\" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values).",
"title": "Concepts"
},
{
"paragraph_id": 20,
"text": "Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.",
"title": "Concepts"
},
{
"paragraph_id": 21,
"text": "Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code:",
"title": "Concepts"
},
{
"paragraph_id": 22,
"text": "While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added constexpr keyword with similar semantics.",
"title": "Concepts"
},
{
"paragraph_id": 23,
"text": "Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space in a linear amount to the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches.",
"title": "Concepts"
},
{
"paragraph_id": 24,
"text": "The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop and doing so would be safe-for-space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will claim space back, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop.",
"title": "Concepts"
},
{
"paragraph_id": 25,
"text": "Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or \"folds\" and \"unfolds\") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages.",
"title": "Concepts"
},
{
"paragraph_id": 26,
"text": "Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming.",
"title": "Concepts"
},
{
"paragraph_id": 27,
"text": "Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression:",
"title": "Concepts"
},
{
"paragraph_id": 28,
"text": "fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself.",
"title": "Concepts"
},
{
"paragraph_id": 29,
"text": "The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell.",
"title": "Concepts"
},
{
"paragraph_id": 30,
"text": "Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.",
"title": "Concepts"
},
{
"paragraph_id": 31,
"text": "Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, rejecting all invalid programs at compilation time and risking false positive errors, as opposed to the untyped lambda calculus, that accepts all valid programs at compilation time and risks false negative errors, used in Lisp and its variants (such as Scheme), as they reject all invalid programs at runtime when the information is enough to not reject valid programs. The use of algebraic datatypes makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases.",
"title": "Concepts"
},
{
"paragraph_id": 32,
"text": "Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the C programming language that is written in Coq and formally verified.",
"title": "Concepts"
},
{
"paragraph_id": 33,
"text": "A limited form of dependent types called generalized algebraic data types (GADT's) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADT's are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#.",
"title": "Concepts"
},
{
"paragraph_id": 34,
"text": "Functional programs do not have assignment statements, that is, the value of a variable in a functional program never changes once defined. This eliminates any chances of side effects because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent.",
"title": "Concepts"
},
{
"paragraph_id": 35,
"text": "Consider C assignment statement x=x * 10, this changes the value assigned to the variable x. Let us say that the initial value of x was 1, then two consecutive evaluations of the variable x yields 10 and 100 respectively. Clearly, replacing x=x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent.",
"title": "Concepts"
},
{
"paragraph_id": 36,
"text": "Now, consider another function such as int plusone(int x) {return x+1;} is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent.",
"title": "Concepts"
},
{
"paragraph_id": 37,
"text": "Purely functional data structures are often represented in a different way to their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data-structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method will result in some but not all nodes being created.",
"title": "Concepts"
},
{
"paragraph_id": 38,
"text": "Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side-effects and provides referential transparency.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 39,
"text": "Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order \"map\" function that takes a function and a list, generating and returning a new list by applying the function to each list item.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 40,
"text": "The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable \"result\".",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 41,
"text": "Traditional Imperative Loop:",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 42,
"text": "Functional Programming with higher-order functions:",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 43,
"text": "There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 44,
"text": "The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 45,
"text": "Functional languages also simulate states by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 46,
"text": "Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 47,
"text": "Alternative methods such as Hoare logic and uniqueness have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 48,
"text": "Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 49,
"text": "Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 50,
"text": "Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation making extensive use of dereferenced code and data perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles) .",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 51,
"text": "It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 52,
"text": "JavaScript, Lua, Python and Go had first class functions from their inception. Python had support for \"lambda\", \"map\", \"reduce\", and \"filter\" in 1994, as well as closures in Python 2.2, though Python 3 relegated \"reduce\" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 53,
"text": "In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 54,
"text": "In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements to closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 55,
"text": "In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 56,
"text": "Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 57,
"text": "Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript.",
"title": "Comparison to imperative programming"
},
{
"paragraph_id": 58,
"text": "Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations. For example, the function, mother(X) = Y, (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program:",
"title": "Comparison to logic programming"
},
{
"paragraph_id": 59,
"text": "The program can be queried, like a functional program, to generate mothers from children:",
"title": "Comparison to logic programming"
},
{
"paragraph_id": 60,
"text": "But it can also be queried backwards, to generate children:",
"title": "Comparison to logic programming"
},
{
"paragraph_id": 61,
"text": "It can even be used to generate all instances of the mother relation:",
"title": "Comparison to logic programming"
},
{
"paragraph_id": 62,
"text": "Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form:",
"title": "Comparison to logic programming"
},
{
"paragraph_id": 63,
"text": "The same definition in relational notation needs to be written in the unnested form:",
"title": "Comparison to logic programming"
},
{
"paragraph_id": 64,
"text": "Here :- means if and , means and.",
"title": "Comparison to logic programming"
},
{
"paragraph_id": 65,
"text": "However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming:",
"title": "Comparison to logic programming"
},
{
"paragraph_id": 66,
"text": "Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy.",
"title": "Comparison to logic programming"
},
{
"paragraph_id": 67,
"text": "Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far remain primarily academic in nature.",
"title": "Applications"
},
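A toy reading of the paragraph above, which treats a spreadsheet as a pure, zeroth-order, strictly evaluated program: each cell is an argument-free formula over other cells, and no formula manipulates other formulas. The cell names are invented:

```python
def A1(): return 2
def A2(): return 3
def B1(): return A1() + A2()  # like the spreadsheet formula =A1+A2
def B2(): return B1() * 10    # =B1*10

print(B2())  # 50
```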
{
"paragraph_id": 68,
"text": "Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming.",
"title": "Applications"
},
{
"paragraph_id": 69,
"text": "Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming.",
"title": "Applications"
},
{
"paragraph_id": 70,
"text": "Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure.",
"title": "Applications"
},
{
"paragraph_id": 71,
"text": "Functional \"platforms\" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory.",
"title": "Applications"
},
{
"paragraph_id": 72,
"text": "Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods.",
"title": "Applications"
},
{
"paragraph_id": 73,
"text": "Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics.",
"title": "Applications"
}
]
| In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program. In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names, passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner. Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming which treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects. Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification. Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java. | 2001-10-14T13:58:27Z | 2023-12-10T22:44:57Z | [
"Template:Citation needed",
"Template:Harvnb",
"Template:Portal",
"Template:Cite web",
"Template:Citation",
"Template:Short description",
"Template:For",
"Template:ISBN",
"Template:Spoken Wikipedia",
"Template:Types of programming languages",
"Template:Cbignore",
"Template:Cite magazine",
"Template:Webarchive",
"Template:Triangulation",
"Template:Authority control",
"Template:Cite journal",
"Template:Cite book",
"Template:Cite thesis",
"Template:Main",
"Template:Notelist",
"Template:Cite conference",
"Template:Programming paradigms",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Functional_programming |
10,936 | February 29 | February 29 is a leap day (or "leap year day"), an intercalary date added periodically to leap years in the Julian and Gregorian calendars. It is the 60th day of a leap year in both calendars, and 306 days remain until the end of the leap year. It is also the last day of February in leap years with the exception of 1712 in Sweden. It is also the last day of meteorological winter in the Northern Hemisphere and the last day of meteorological summer in the Southern Hemisphere in leap years.
In the Gregorian calendar (the standard civil calendar used in most of the world), February 29 is added in each year that is an integer multiple of four (except for years evenly divisible by 100, but not by 400). The Julian calendar — since 1923 a liturgical calendar — has a February 29 every fourth year without exception. (Consequently, February 29 in the Julian calendar falls 13 days later than February 29 in the Gregorian, until the year 2100.) | [
{
"paragraph_id": 0,
"text": "February 29 is a leap day (or \"leap year day\"), an intercalary date added periodically to leap years in the Julian and Gregorian calendars. It is the 60th day of a leap year in both calendars, and 306 days remain until the end of the leap year. It is also the last day of February in leap years with the exception of 1712 in Sweden. It is also the last day of meteorological winter in the Northern Hemisphere and the last day of meteorological summer in the Southern Hemisphere in leap years.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In the Gregorian calendar (the standard civil calendar used in most of the world), February 29 is added in each year that is an integer multiple of four (except for years evenly divisible by 100, but not by 400). The Julian calendar — since 1923 a liturgical calendar — has a February 29 every fourth year without exception. (Consequently, February 29 in the Julian calendar falls 13 days later than February 29 in the Gregorian, until the year 2100.)",
"title": ""
}
]
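A small sketch of the leap-day rules stated above, using nothing beyond integer arithmetic (Python is used only for illustration):

```python
def is_gregorian_leap_year(year: int) -> bool:
    # Divisible by 4, except century years, unless those are also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_julian_leap_year(year: int) -> bool:
    # The Julian calendar has a February 29 every fourth year without exception.
    return year % 4 == 0

print(is_gregorian_leap_year(2000), is_gregorian_leap_year(1900))  # True False
print(is_julian_leap_year(1900))                                   # True
```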
| February 29 is a leap day, an intercalary date added periodically to leap years in the Julian and Gregorian calendars. It is the 60th day of a leap year in both calendars, and 306 days remain until the end of the leap year. It is also the last day of February in leap years with the exception of 1712 in Sweden. It is also the last day of meteorological winter in the Northern Hemisphere and the last day of meteorological summer in the Southern Hemisphere in leap years. In the Gregorian calendar, February 29 is added in each year that is an integer multiple of four. The Julian calendar — since 1923 a liturgical calendar — has a February 29 every fourth year without exception. | 2001-11-08T15:54:37Z | 2023-12-22T11:44:35Z | [
"Template:Other uses",
"Template:DNB",
"Template:LCAuth",
"Template:Calendar",
"Template:CanParlbio",
"Template:NYT On this day",
"Template:M",
"Template:Cite book",
"Template:Schaff-Herzog",
"Template:Pp-pc1",
"Template:Reflist",
"Template:Cite ODNB",
"Template:Cite AmCyc",
"Template:Australian Dictionary of Biography",
"Template:CongBio",
"Template:Cite report",
"Template:Citation",
"Template:Use mdy dates",
"Template:This date in recent years",
"Template:Cite magazine",
"Template:Dictionary of Australian Biography",
"Template:Cite ANB",
"Template:Short description",
"Template:Cite journal",
"Template:Cite news",
"Template:Cite CE1913",
"Template:Cite Sports-Reference",
"Template:Cite AuDB",
"Template:Redirect",
"Template:Pp-move-indef",
"Template:Cite web",
"Template:Cite DCB",
"Template:Cbignore",
"Template:Cite tweet",
"Template:EB1911",
"Template:Commons",
"Template:Months"
]
| https://en.wikipedia.org/wiki/February_29 |
10,937 | Francis Scott Key | Francis Scott Key (August 1, 1779 – January 11, 1843) was an American lawyer, author, and amateur poet from Frederick, Maryland, best known as the author of the text of the U.S. national anthem, "The Star-Spangled Banner". Key observed the British bombardment of Fort McHenry in 1814 during the War of 1812. He was inspired upon seeing the American flag still flying over the fort at dawn and wrote the poem "Defence of Fort M'Henry"; it was published within a week with the suggested tune of the popular song "To Anacreon in Heaven". The song with Key's lyrics became known as "The Star-Spangled Banner" and slowly gained in popularity as an unofficial anthem, finally achieving official status more than a century later under President Herbert Hoover as the national anthem.
Key was a lawyer in Maryland and Washington, D.C. for four decades and worked on important cases, including the Burr conspiracy trial, and he argued numerous times before the Supreme Court. He was nominated for District Attorney for the District of Columbia by President Andrew Jackson, where he served from 1833 to 1841. Key was a devout Episcopalian.
Key owned slaves from 1800, during which time abolitionists ridiculed his words, claiming that America was more like the "Land of the Free and Home of the Oppressed". As District Attorney, he suppressed abolitionists, and in 1836 lost a case against Reuben Crandall where he accused the defendant's abolitionist publications of instigating slaves to rebel. He was also a leader of the American Colonization Society which sent freed slaves to Africa. He freed some of his slaves in the 1830s, paying one ex-slave as his farm foreman to supervise his other slaves. He publicly criticized slavery and gave free legal representation to some slaves seeking freedom, but he also represented owners of runaway slaves. At the time of his death he owned eight slaves.
Key was born into an affluent family. Key's father John Ross Key was a lawyer, a commissioned officer in the Continental Army, and a judge of English descent. His mother Ann Phoebe Dagworthy Charlton was born (February 6, 1756 – 1830), to Arthur Charlton, a tavern keeper, and his wife, Eleanor Harrison of Frederick in the colony of Maryland.
Key grew up on the family plantation Terra Rubra in Frederick County, Maryland, which is now Carroll County. He graduated from St. John's College, Annapolis, Maryland, in 1796 and read law under his uncle Philip Barton Key who was loyal to the British Crown during the War of Independence. He married Mary Tayloe Lloyd on January 1, 1802, daughter of Edward Lloyd IV of Wye House and Elizabeth Tayloe, daughter of John Tayloe II of Mount Airy and sister of John Tayloe III of The Octagon House. The couple raised their 11 children in their Georgetown residence, the Key House.
During the War of 1812, following the Burning of Washington in August 1814, on September 7, 1814, Key and American Agent for Prisoners of War, Colonel John Stuart Skinner dined aboard HMS Tonnant as the guests of Vice Admiral Alexander Cochrane, Rear Admiral George Cockburn, and Major General Robert Ross. Skinner and Key were there to plead for the release of Dr. William Beanes, an elderly resident of Upper Marlboro, Maryland, and a friend of Key, who had been captured in his home on August 28, 1814. Beanes was accused of aiding the arrest of some British soldiers (stragglers withdrawing after the Washington campaign) who were pillaging homes. Skinner, Key, and the released Beanes were allowed to return to their own truce ship, under guard, but not allowed to leave the fleet because they had become familiar with the strength and position of the British units and their intention to launch an attack upon Baltimore. Key was unable to do anything but watch the 25-hour bombardment of the American forces at Fort McHenry during the Battle of Baltimore from dawn of September 13 through the morning of the 14th, 1814.
At dawn, Key was able to see a large American flag waving over the fort, and he started writing a poem about his experience, on the back of a letter he had kept in his pocket. On September 16, Key, Skinner and Beanes were released from the fleet. When they arrived in Baltimore that evening, Key completed the poem at the Indian Queen Hotel, where he was staying. His finished manuscript was untitled and unsigned. When printed as a broadside the next day, it was given the title "Defence of Fort M'Henry" with the notation: "Tune – Anacreon in Heaven". This was a popular tune that Key had already used as a setting for his 1805 song "When the Warrior Returns", celebrating American heroes of the First Barbary War. It was soon published in newspapers first in Baltimore and then across the nation, and given the new title The Star-Spangled Banner. It was somewhat difficult to sing, yet it became increasingly popular, competing with "Hail, Columbia" (1796) as the de facto national anthem by the time of the Mexican–American War and the American Civil War. The song was finally adopted as the American national anthem more than a century after its first publication by Act of Congress in 1931 signed by President Herbert Hoover.
Key was a leading attorney in Frederick, Maryland, and Washington, D.C., for many years, with an extensive real estate and trial practice. He and his family settled in Georgetown in 1805 or 1806, near the new national capital. He assisted his uncle Philip Barton Key in the sensational conspiracy trial of Aaron Burr and in the expulsion of Senator John Smith of Ohio. He made the first of his many arguments before the United States Supreme Court in 1807. In 1808, he assisted President Thomas Jefferson's attorney general in United States v. Peters.
In 1829, Key assisted in the prosecution of Tobias Watkins, former U.S. Treasury auditor under President John Quincy Adams, for misappropriating public funds. He also handled the Petticoat affair concerning Secretary of War John Eaton, and he served as the attorney for Sam Houston in 1832 during his trial for assaulting Representative William Stanbery of Ohio. After years as an adviser to President Jackson, Key was nominated by the President to District Attorney for the District of Columbia in 1833. He served from 1833 to 1841 while also handling his own private legal cases. In 1835, he prosecuted Richard Lawrence for his attempt to assassinate President Jackson at the top steps of the Capitol, the first attempt to kill an American president.
Key purchased his first slave in 1800 or 1801 and owned six slaves in 1820. He freed seven of his slaves in the 1830s, and owned eight slaves when he died. One of his freed slaves continued to work for him for wages as his farm's foreman, supervising several slaves. Key also represented several slaves seeking their freedom, as well as several slave-owners seeking return of their runaway slaves. Key was one of the executors of John Randolph of Roanoke's will, which freed his 400 slaves, and Key fought to enforce the will for the next decade and to provide the freed slaves with land to support themselves.
Key is known to have publicly criticized slavery's cruelties, and a newspaper editorial stated that "he often volunteered to defend the downtrodden sons and daughters of Africa." The editor said that Key "convinced me that slavery was wrong—radically wrong".
A quote increasingly credited to Key stating that free black people are "a distinct and inferior race of people, which all experience proves to be the greatest evil that afflicts a community" is erroneous. The quote is taken from an 1838 letter that Key wrote to Reverend Benjamin Tappan of Maine who had sent Key a questionnaire about the attitudes of Southern religious institutions about slavery. Rather than representing a statement by Key identifying his personal thoughts, the words quoted are offered by Key to describe the attitudes of others who assert that formerly enslaved black people could not remain in the U.S. as paid laborers. This was the official policy of the American Colonization Society. Key was an ACS leader and fundraiser for the organization, but he himself did not send the men and women he freed to Africa upon their emancipation. The original confusion around this quote arises from ambiguities in the 1937 biography of Key by Edward S. Delaplaine.
Key was a founding member and active leader of the American Colonization Society (ACS), whose primary goal was to send free black people to Africa. Though many free black people were born in the United States by this time, historians argue that upper-class American society, of which Key was a part, could never "envision a multiracial society". The ACS was not supported by most abolitionists or free black people of the time, but the organization's work would eventually lead to the creation of Liberia in 1847.
In the early 1830s American thinking on slavery changed quite abruptly. Considerable opposition to the American Colonization Society's project emerged. Led by newspaper editor and publisher Wm. Lloyd Garrison, a growing portion of the population noted that only a very small number of free black people were actually moved, and they faced brutal conditions in West Africa, with very high mortality. Free black people made it clear that few of them wanted to move, and if they did, it would be to Canada, Mexico, or Central America, not Africa. The leaders of the American Colonization Society, including Key, were predominantly slave owners. The Society was intended to preserve slavery, rather than eliminate it. In the words of philanthropist Gerrit Smith, it was "quite as much an Anti-Abolition, as Colonization Society". "This Colonization Society had, by an invisible process, half conscious, half unconscious, been transformed into a serviceable organ and member of the Slave Power."
The alternative to the colonization of Africa, project of the American Colonization Society, was the total and immediate abolition of slavery in the United States. This Key was firmly against, with or without slave owner compensation, and he used his position as District Attorney to attack abolitionists. In 1833, he secured a grand jury indictment against Benjamin Lundy, editor of the anti-slavery publication Genius of Universal Emancipation, and his printer William Greer, for libel after Lundy published an article that declared, "There is neither mercy nor justice for colored people in this district [of Columbia]". Lundy's article, Key said in the indictment, "was intended to injure, oppress, aggrieve, and vilify the good name, fame, credit & reputation of the Magistrates and constables" of Washington. Lundy left town rather than face trial; Greer was acquitted.
In a larger unsuccessful prosecution, in August 1836 Key obtained an indictment against Reuben Crandall, brother of controversial Connecticut teacher Prudence Crandall, who had recently moved to Washington, D.C. It accused Crandall of "seditious libel" after two marshals (who operated as slave catchers in their off hours) found Crandall had a trunk full of anti-slavery publications in his Georgetown residence/office, five days after the Snow riot, caused by rumors that a mentally ill slave had attempted to kill an elderly white woman. In an April 1837 trial that attracted nationwide attention and that congressmen attended, Key charged that Crandall's publications instigated slaves to rebel. Crandall's attorneys acknowledged he opposed slavery, but denied any intent or actions to encourage rebellion. Evidence was introduced that the anti-slavery publications were packing materials used by his landlady in shipping his possessions to him. He had not "published" anything; he had given one copy to one man who had asked for it.
Key, in his final address to the jury said:
Are you willing, gentlemen, to abandon your country, to permit it to be taken from you, and occupied by the abolitionist, according to whose taste it is to associate and amalgamate with the negro? Or, gentlemen, on the other hand, are there laws in this community to defend you from the immediate abolitionist, who would open upon you the floodgates of such extensive wickedness and mischief?
The jury acquitted Crandall of all charges. This public and humiliating defeat, as well as family tragedies in 1835, diminished Key's political ambition. He resigned as District Attorney in 1840. He remained a staunch proponent of African colonization and a strong critic of the abolition movement until his death.
Crandall died shortly after his acquittal of pneumonia contracted in the Washington jail.
Key was a devout and prominent Episcopalian. In his youth, he almost became an Episcopal priest rather than a lawyer. Throughout his life he sprinkled biblical references in his correspondence. He was active in All Saints Parish in Frederick, Maryland, near his family's home. He also helped found or financially support several parishes in the new national capital, including St. John's Episcopal Church in Georgetown, Trinity Episcopal Church in present-day Judiciary Square, and Christ Church in Alexandria (at the time, in the District of Columbia).
From 1818 until his death in 1843, Key was associated with the American Bible Society. He successfully opposed an abolitionist resolution presented to that group around 1838.
Key also helped found two Episcopal seminaries, one in Baltimore and the other across the Potomac River in Alexandria (the Virginia Theological Seminary). Key also published a prose work called The Power of Literature, and Its Connection with Religion, in 1834.
On January 11, 1843, Key died at the home of his daughter Elizabeth Howard in Baltimore from pleurisy at age 63. He was initially interred in Old Saint Paul's Cemetery in the vault of John Eager Howard but in 1866, his body was moved to his family plot in Frederick at Mount Olivet Cemetery.
The Key Monument Association erected a memorial in 1898 and the remains of both Francis Scott Key and his wife, Mary Tayloe Lloyd, were placed in a crypt in the base of the monument.
Despite several efforts to preserve it, the Francis Scott Key residence was ultimately dismantled in 1947. The residence had been located at 3516–18 M Street in Georgetown.
Though Key had written poetry from time to time, often with heavily religious themes, these works were not collected and published until 14 years after his death. Two of his religious poems used as Christian hymns include "Before the Lord We Bow" and "Lord, with Glowing Heart I'd Praise Thee".
In 1806, Key's sister, Anne Phoebe Charlton Key, married Roger B. Taney, who would later become Chief Justice of the United States. In 1846 one daughter, Alice, married U.S. Senator George H. Pendleton and another, Ellen Lloyd, married Simon F. Blunt. In 1859, Key's son Philip Barton Key II, who also served as United States Attorney for the District of Columbia, was shot and killed by Daniel Sickles—a U.S. Representative from New York who would serve as a general in the American Civil War—after he discovered that Philip Barton Key was having an affair with his wife. Sickles was acquitted in the first use of the temporary insanity defense. In 1861, Key's grandson Francis Key Howard was imprisoned in Fort McHenry with the Mayor of Baltimore George William Brown and other locals deemed to be Confederate sympathizers.
Key was a distant cousin and the namesake of F. Scott Fitzgerald, whose full name was Francis Scott Key Fitzgerald. His direct descendants include geneticist Thomas Hunt Morgan, guitarist Dana Key, and American fashion designer and socialite Pauline de Rothschild. | [
{
"paragraph_id": 0,
"text": "Francis Scott Key (August 1, 1779 – January 11, 1843) was an American lawyer, author, and amateur poet from Frederick, Maryland, best known as the author of the text of the U.S. national anthem, \"The Star-Spangled Banner\". Key observed the British bombardment of Fort McHenry in 1814 during the War of 1812. He was inspired upon seeing the American flag still flying over the fort at dawn and wrote the poem \"Defence of Fort M'Henry\"; it was published within a week with the suggested tune of the popular song \"To Anacreon in Heaven\". The song with Key's lyrics became known as \"The Star-Spangled Banner\" and slowly gained in popularity as an unofficial anthem, finally achieving official status more than a century later under President Herbert Hoover as the national anthem.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Key was a lawyer in Maryland and Washington, D.C. for four decades and worked on important cases, including the Burr conspiracy trial, and he argued numerous times before the Supreme Court. He was nominated for District Attorney for the District of Columbia by President Andrew Jackson, where he served from 1833 to 1841. Key was a devout Episcopalian.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Key owned slaves from 1800, during which time abolitionists ridiculed his words, claiming that America was more like the \"Land of the Free and Home of the Oppressed\". As District Attorney, he suppressed abolitionists, and in 1836 lost a case against Reuben Crandall where he accused the defendant's abolitionist publications of instigating slaves to rebel. He was also a leader of the American Colonization Society which sent freed slaves to Africa. He freed some of his slaves in the 1830s, paying one ex-slave as his farm foreman to supervise his other slaves. He publicly criticized slavery and gave free legal representation to some slaves seeking freedom, but he also represented owners of runaway slaves. At the time of his death he owned eight slaves.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Key was born into an affluent family. Key's father John Ross Key was a lawyer, a commissioned officer in the Continental Army, and a judge of English descent. His mother Ann Phoebe Dagworthy Charlton was born (February 6, 1756 – 1830), to Arthur Charlton, a tavern keeper, and his wife, Eleanor Harrison of Frederick in the colony of Maryland.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "Key grew up on the family plantation Terra Rubra in Frederick County, Maryland, which is now Carroll County. He graduated from St. John's College, Annapolis, Maryland, in 1796 and read law under his uncle Philip Barton Key who was loyal to the British Crown during the War of Independence. He married Mary Tayloe Lloyd on January 1, 1802, daughter of Edward Lloyd IV of Wye House and Elizabeth Tayloe, daughter of John Tayloe II of Mount Airy and sister of John Tayloe III of The Octagon House. The couple raised their 11 children in their Georgetown residence, the Key House.",
"title": "Early life"
},
{
"paragraph_id": 5,
"text": "During the War of 1812, following the Burning of Washington in August 1814, on September 7, 1814, Key and American Agent for Prisoners of War, Colonel John Stuart Skinner dined aboard HMS Tonnant as the guests of Vice Admiral Alexander Cochrane, Rear Admiral George Cockburn, and Major General Robert Ross. Skinner and Key were there to plead for the release of Dr. William Beanes, an elderly resident of Upper Marlboro, Maryland, and a friend of Key, who had been captured in his home on August 28, 1814. Beanes was accused of aiding the arrest of some British soldiers (stragglers withdrawing after the Washington campaign) who were pillaging homes. Skinner, Key, and the released Beanes were allowed to return to their own truce ship, under guard, but not allowed to leave the fleet because they had become familiar with the strength and position of the British units and their intention to launch an attack upon Baltimore. Key was unable to do anything but watch the 25-hour bombardment of the American forces at Fort McHenry during the Battle of Baltimore from dawn of September 13 through the morning of the 14th, 1814.",
"title": "\"The Star-Spangled Banner\""
},
{
"paragraph_id": 6,
"text": "At dawn, Key was able to see a large American flag waving over the fort, and he started writing a poem about his experience, on the back of a letter he had kept in his pocket. On September 16, Key, Skinner and Beanes were released from the fleet. When they arrived in Baltimore that evening, Key completed the poem at the Indian Queen Hotel, where he was staying, His finished manuscript was untitled and unsigned. When printed as a broadside the next day, it was given the title \"Defence of Fort M'Henry” with the notation: \"Tune – Anacreon in Heaven\" This was a popular tune that Key had already used as a setting for his 1805 song \"When the Warrior Returns\", celebrating American heroes of the First Barbary War. It was soon published in newspapers first in Baltimore and then across the nation, and given the new title The Star-Spangled Banner. It was somewhat difficult to sing, yet it became increasingly popular, competing with \"Hail, Columbia\" (1796) as the de facto national anthem by the time of the Mexican–American War and the American Civil War. The song was finally adopted as the American national anthem more than a century after its first publication by Act of Congress in 1931 signed by President Herbert Hoover.",
"title": "\"The Star-Spangled Banner\""
},
{
"paragraph_id": 7,
"text": "Key was a leading attorney in Frederick, Maryland, and Washington, D.C., for many years, with an extensive real estate and trial practice. He and his family settled in Georgetown in 1805 or 1806, near the new national capital. He assisted his uncle Philip Barton Key in the sensational conspiracy trial of Aaron Burr and in the expulsion of Senator John Smith of Ohio. He made the first of his many arguments before the United States Supreme Court in 1807. In 1808, he assisted President Thomas Jefferson's attorney general in United States v. Peters.",
"title": "Legal career"
},
{
"paragraph_id": 8,
"text": "In 1829, Key assisted in the prosecution of Tobias Watkins, former U.S. Treasury auditor under President John Quincy Adams, for misappropriating public funds. He also handled the Petticoat affair concerning Secretary of War John Eaton, and he served as the attorney for Sam Houston in 1832 during his trial for assaulting Representative William Stanbery of Ohio. After years as an adviser to President Jackson, Key was nominated by the President to District Attorney for the District of Columbia in 1833. He served from 1833 to 1841 while also handling his own private legal cases. In 1835, he prosecuted Richard Lawrence for his attempt to assassinate President Jackson at the top steps of the Capitol, the first attempt to kill an American president.",
"title": "Legal career"
},
{
"paragraph_id": 9,
"text": "Key purchased his first slave in 1800 or 1801 and owned six slaves in 1820. He freed seven of his slaves in the 1830s, and owned eight slaves when he died. One of his freed slaves continued to work for him for wages as his farm's foreman, supervising several slaves. Key also represented several slaves seeking their freedom, as well as several slave-owners seeking return of their runaway slaves. Key was one of the executors of John Randolph of Roanoke's will, which freed his 400 slaves, and Key fought to enforce the will for the next decade and to provide the freed slaves with land to support themselves.",
"title": "Key and slavery"
},
{
"paragraph_id": 10,
"text": "Key is known to have publicly criticized slavery's cruelties, and a newspaper editorial stated that \"he often volunteered to defend the downtrodden sons and daughters of Africa.\" The editor said that Key \"convinced me that slavery was wrong—radically wrong\".",
"title": "Key and slavery"
},
{
"paragraph_id": 11,
"text": "A quote increasingly credited to Key stating that free black people are \"a distinct and inferior race of people, which all experience proves to be the greatest evil that afflicts a community\" is erroneous. The quote is taken from an 1838 letter that Key wrote to Reverend Benjamin Tappan of Maine who had sent Key a questionnaire about the attitudes of Southern religious institutions about slavery. Rather than representing a statement by Key identifying his personal thoughts, the words quoted are offered by Key to describe the attitudes of others who assert that formerly enslaved black people could not remain in the U.S. as paid laborers. This was the official policy of the American Colonization Society. Key was an ACS leader and fundraiser for the organization, but he himself did not send the men and women he freed to Africa upon their emancipation. The original confusion around this quote arises from ambiguities in the 1937 biography of Key by Edward S. Delaplaine.",
"title": "Key and slavery"
},
{
"paragraph_id": 12,
"text": "Key was a founding member and active leader of the American Colonization Society (ACS), whose primary goal was to send free black people to Africa. Though many free black people were born in the United States by this time, historians argue that upper-class American society, of which Key was a part, could never \"envision a multiracial society\". The ACS was not supported by most abolitionists or free black people of the time, but the organization's work would eventually lead to the creation of Liberia in 1847.",
"title": "Key and slavery"
},
{
"paragraph_id": 13,
"text": "In the early 1830s American thinking on slavery changed quite abruptly. Considerable opposition to the American Colonization Society's project emerged. Led by newspaper editor and publisher Wm. Lloyd Garrison, a growing portion of the population noted that only a very small number of free black people were actually moved, and they faced brutal conditions in West Africa, with very high mortality. Free black people made it clear that few of them wanted to move, and if they did, it would be to Canada, Mexico, or Central America, not Africa. The leaders of the American Colonization Society, including Key, were predominantly slave owners. The Society was intended to preserve slavery, rather than eliminate it. In the words of philanthropist Gerrit Smith, it was \"quite as much an Anti-Abolition, as Colonization Society\". \"This Colonization Society had, by an invisible process, half conscious, half unconscious, been transformed into a serviceable organ and member of the Slave Power.\"",
"title": "Key and slavery"
},
{
"paragraph_id": 14,
"text": "The alternative to the colonization of Africa, project of the American Colonization Society, was the total and immediate abolition of slavery in the United States. This Key was firmly against, with or without slave owner compensation, and he used his position as District Attorney to attack abolitionists. In 1833, he secured a grand jury indictment against Benjamin Lundy, editor of the anti-slavery publication Genius of Universal Emancipation, and his printer William Greer, for libel after Lundy published an article that declared, \"There is neither mercy nor justice for colored people in this district [of Columbia]\". Lundy's article, Key said in the indictment, \"was intended to injure, oppress, aggrieve, and vilify the good name, fame, credit & reputation of the Magistrates and constables\" of Washington. Lundy left town rather than face trial; Greer was acquitted.",
"title": "Key and slavery"
},
{
"paragraph_id": 15,
"text": "In a larger unsuccessful prosecution, in August 1836 Key obtained an indictment against Reuben Crandall, brother of controversial Connecticut teacher Prudence Crandall, who had recently moved to Washington, D.C. It accused Crandall of \"seditious libel\" after two marshals (who operated as slave catchers in their off hours) found Crandall had a trunk full of anti-slavery publications in his Georgetown residence/office, five days after the Snow riot, caused by rumors that a mentally ill slave had attempted to kill an elderly white woman. In an April 1837 trial that attracted nationwide attention and that congressmen attended, Key charged that Crandall's publications instigated slaves to rebel. Crandall's attorneys acknowledged he opposed slavery, but denied any intent or actions to encourage rebellion. Evidence was introduced that the anti-slavery publications were packing materials used by his landlady in shipping his possessions to him. He had not \"published\" anything; he had given one copy to one man who had asked for it.",
"title": "Key and slavery"
},
{
"paragraph_id": 16,
"text": "Key, in his final address to the jury said:",
"title": "Key and slavery"
},
{
"paragraph_id": 17,
"text": "Are you willing, gentlemen, to abandon your country, to permit it to be taken from you, and occupied by the abolitionist, according to whose taste it is to associate and amalgamate with the negro? Or, gentlemen, on the other hand, are there laws in this community to defend you from the immediate abolitionist, who would open upon you the floodgates of such extensive wickedness and mischief?",
"title": "Key and slavery"
},
{
"paragraph_id": 18,
"text": "The jury acquitted Crandall of all charges. This public and humiliating defeat, as well as family tragedies in 1835, diminished Key's political ambition. He resigned as District Attorney in 1840. He remained a staunch proponent of African colonization and a strong critic of the abolition movement until his death.",
"title": "Key and slavery"
},
{
"paragraph_id": 19,
"text": "Crandall died shortly after his acquittal of pneumonia contracted in the Washington jail.",
"title": "Key and slavery"
},
{
"paragraph_id": 20,
"text": "Key was a devout and prominent Episcopalian. In his youth, he almost became an Episcopal priest rather than a lawyer. Throughout his life he sprinkled biblical references in his correspondence. He was active in All Saints Parish in Frederick, Maryland, near his family's home. He also helped found or financially support several parishes in the new national capital, including St. John's Episcopal Church in Georgetown, Trinity Episcopal Church in present-day Judiciary Square, and Christ Church in Alexandria (at the time, in the District of Columbia).",
"title": "Religion"
},
{
"paragraph_id": 21,
"text": "From 1818 until his death in 1843, Key was associated with the American Bible Society. He successfully opposed an abolitionist resolution presented to that group around 1838.",
"title": "Religion"
},
{
"paragraph_id": 22,
"text": "Key also helped found two Episcopal seminaries, one in Baltimore and the other across the Potomac River in Alexandria (the Virginia Theological Seminary). Key also published a prose work called The Power of Literature, and Its Connection with Religion, in 1834.",
"title": "Religion"
},
{
"paragraph_id": 23,
"text": "On January 11, 1843, Key died at the home of his daughter Elizabeth Howard in Baltimore from pleurisy at age 63. He was initially interred in Old Saint Paul's Cemetery in the vault of John Eager Howard but in 1866, his body was moved to his family plot in Frederick at Mount Olivet Cemetery.",
"title": "Death and legacy"
},
{
"paragraph_id": 24,
"text": "The Key Monument Association erected a memorial in 1898 and the remains of both Francis Scott Key and his wife, Mary Tayloe Lloyd, were placed in a crypt in the base of the monument.",
"title": "Death and legacy"
},
{
"paragraph_id": 25,
"text": "Despite several efforts to preserve it, the Francis Scott Key residence was ultimately dismantled in 1947. The residence had been located at 3516–18 M Street in Georgetown.",
"title": "Death and legacy"
},
{
"paragraph_id": 26,
"text": "Though Key had written poetry from time to time, often with heavily religious themes, these works were not collected and published until 14 years after his death. Two of his religious poems used as Christian hymns include \"Before the Lord We Bow\" and \"Lord, with Glowing Heart I'd Praise Thee\".",
"title": "Death and legacy"
},
{
"paragraph_id": 27,
"text": "In 1806, Key's sister, Anne Phoebe Charlton Key, married Roger B. Taney, who would later become Chief Justice of the United States. In 1846 one daughter, Alice, married U.S. Senator George H. Pendleton and another, Ellen Lloyd, married Simon F. Blunt. In 1859, Key's son Philip Barton Key II, who also served as United States Attorney for the District of Columbia, was shot and killed by Daniel Sickles—a U.S. Representative from New York who would serve as a general in the American Civil War—after he discovered that Philip Barton Key was having an affair with his wife. Sickles was acquitted in the first use of the temporary insanity defense. In 1861, Key's grandson Francis Key Howard was imprisoned in Fort McHenry with the Mayor of Baltimore George William Brown and other locals deemed to be Confederate sympathizers.",
"title": "Death and legacy"
},
{
"paragraph_id": 28,
"text": "Key was a distant cousin and the namesake of F. Scott Fitzgerald, whose full name was Francis Scott Key Fitzgerald. His direct descendants include geneticist Thomas Hunt Morgan, guitarist Dana Key, and American fashion designer and socialite Pauline de Rothschild.",
"title": "Death and legacy"
}
]
| Francis Scott Key was an American lawyer, author, and amateur poet from Frederick, Maryland, best known as the author of the text of the U.S. national anthem, "The Star-Spangled Banner". Key observed the British bombardment of Fort McHenry in 1814 during the War of 1812. He was inspired upon seeing the American flag still flying over the fort at dawn and wrote the poem "Defence of Fort M'Henry"; it was published within a week with the suggested tune of the popular song "To Anacreon in Heaven". The song with Key's lyrics became known as "The Star-Spangled Banner" and slowly gained in popularity as an unofficial anthem, finally achieving official status more than a century later under President Herbert Hoover as the national anthem. Key was a lawyer in Maryland and Washington, D.C. for four decades and worked on important cases, including the Burr conspiracy trial, and he argued numerous times before the Supreme Court. He was nominated for District Attorney for the District of Columbia by President Andrew Jackson, where he served from 1833 to 1841. Key was a devout Episcopalian. Key owned slaves from 1800, during which time abolitionists ridiculed his words, claiming that America was more like the "Land of the Free and Home of the Oppressed". As District Attorney, he suppressed abolitionists, and in 1836 lost a case against Reuben Crandall where he accused the defendant's abolitionist publications of instigating slaves to rebel. He was also a leader of the American Colonization Society which sent freed slaves to Africa. He freed some of his slaves in the 1830s, paying one ex-slave as his farm foreman to supervise his other slaves. He publicly criticized slavery and gave free legal representation to some slaves seeking freedom, but he also represented owners of runaway slaves. At the time of his death he owned eight slaves. | 2001-08-02T02:24:47Z | 2023-12-26T10:42:08Z | [
"Template:Use mdy dates",
"Template:USS",
"Template:Librivox author",
"Template:Shof",
"Template:HMS",
"Template:Gutenberg author",
"Template:Portal",
"Template:Authority control",
"Template:Main",
"Template:Reflist",
"Template:War of 1812",
"Template:Short description",
"Template:Pp-pc1",
"Template:Nbsp",
"Template:Ndash",
"Template:Self-published source",
"Template:SS",
"Template:US$",
"Template:Internet Archive author",
"Template:Citation needed",
"Template:Nsmdns",
"Template:Cite book",
"Template:Wikimedia",
"Template:Infobox officeholder",
"Template:Stack",
"Template:Cite web",
"Template:Snds",
"Template:Blockquote",
"Template:Convert",
"Template:Cite news",
"Template:Cite journal",
"Template:Cite encyclopedia",
"Template:Page needed"
]
| https://en.wikipedia.org/wiki/Francis_Scott_Key |
10,938 | FSU | FSU may refer to: | [
{
"paragraph_id": 0,
"text": "FSU may refer to:",
"title": ""
}
]
| FSU may refer to: Florida State University, a large public research university in Tallahassee, Florida
Ferris State University, Michigan
Frostburg State University, Maryland
Fayetteville State University, North Carolina
Fairmont State University, West Virginia
Fitchburg State University, Massachusetts
Framingham State University, Massachusetts
Friends Stand United, street gang
Finance Sector Union, Australian trade union
Financial Services Union, Irish trade union
Fédération Syndicale Unitaire, French trade union
Former Soviet Union, collective term for the fifteen countries that formed the Soviet Union until 1991
Fuse-Switch-Unit, opposite of SFU, relating to the order an electrical fuse is inserted in a circuit | 2019-09-27T06:37:33Z | [
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/FSU |
|
10,939 | Formal language | In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules called a formal grammar.
The alphabet of a formal language consists of symbols, letters, or tokens that concatenate into strings called words. Words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar, which consists of its formation rules.
In computer science, formal languages are used among others as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way.
The field of formal language theory studies primarily the purely syntactical aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages.
In the 17th century, Gottfried Leibniz imagined and described the characteristica universalis, a universal and formal language which utilised pictographs. Later, Carl Friedrich Gauss investigated the problem of Gauss codes.
Gottlob Frege attempted to realize Leibniz's ideas, through a notational system first outlined in Begriffsschrift (1879) and more fully developed in his 2-volume Grundgesetze der Arithmetik (1893/1903). This described a "formal language of pure language."
In the first half of the 20th century, several developments were made with relevance to formal languages. Axel Thue published four papers relating to words and language between 1906 and 1914. The last of these introduced what Emil Post later termed 'Thue Systems', and gave an early example of an undecidable problem. Post would later use this paper as the basis for a 1947 proof "that the word problem for semigroups was recursively insoluble", and later devised the canonical system for the creation of formal languages.
In 1907, Leonardo Torres Quevedo introduced a formal language for the description of mechanical drawings (mechanical devices), in Vienna. He published "Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas" ("On a system of notations and symbols intended to facilitate the description of machines"). Heinz Zemanek rated it as an equivalent to a programming language for the numerical control of machine tools.
Noam Chomsky devised an abstract representation of formal and natural languages, known as the Chomsky hierarchy. In 1959 John Backus developed the Backus-Naur form to describe the syntax of a high level programming language, following his work in the creation of FORTRAN. Peter Naur was the secretary/editor for the ALGOL60 Report in which he used Backus–Naur form to describe the Formal part of ALGOL60.
An alphabet, in the context of formal languages, can be any set; its elements are called letters. An alphabet may contain an infinite number of elements; however, most definitions in formal language theory specify alphabets with a finite number of elements, and many results apply only to them. It often makes sense to use an alphabet in the usual sense of the word, or more generally any finite character encoding such as ASCII or Unicode.
A word over an alphabet can be any finite sequence (i.e., string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ* (using the Kleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, the empty word, which is often denoted by e, ε, λ or even Λ. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words. The result of concatenating a word with the empty word is the original word.
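A brief illustration of words over an alphabet, the empty word, and concatenation. Σ* is infinite, so the enumeration below is cut off at a length bound; the helper name is invented:

```python
from itertools import product

sigma = ("a", "b")

def words_up_to(max_len):
    # Enumerate Σ* up to a length bound; n = 0 yields the empty word.
    for n in range(max_len + 1):
        for letters in product(sigma, repeat=n):
            yield "".join(letters)

print(list(words_up_to(2)))  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']

u, v = "ab", "ba"
assert len(u + v) == len(u) + len(v)  # concatenation adds lengths
assert u + "" == u                    # concatenating with the empty word gives the original word
```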
In some applications, especially in logic, the alphabet is also known as the vocabulary and words are known as formulas or sentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor.
A formal language L over an alphabet Σ is a subset of Σ*, that is, a set of words over that alphabet. Sometimes the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of 'well-formed expressions'.
In computer science and mathematics, which do not usually deal with natural languages, the adjective "formal" is often omitted as redundant.
While formal language theory usually concerns itself with formal languages that are described by some syntactical rules, the actual definition of the concept "formal language" is only as above: a (possibly infinite) set of finite-length strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the intuitive concept of a "language", one described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being equipped with a formal grammar that describes it.
The following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}:
Under these rules, the string "23+4=555" is in L, but the string "=234=+" is not. This formal language expresses natural numbers, well-formed additions, and well-formed addition equalities, but it expresses only what they look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that "0" means the number zero, "+" means addition, "23+4=555" is false, etc.
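One way to make the membership claims above concrete is to encode a reading of the described rules as a regular expression — for instance, numerals without a superfluous leading zero, sums of such numerals, and at most one equality between two such sums. The exact rules listed in the source are not reproduced here, so treat this as an illustrative sketch rather than a canonical definition:

```python
import re

NUMBER = r"(?:0|[1-9][0-9]*)"            # a numeral with no leading zero (or "0")
SUM = rf"{NUMBER}(?:\+{NUMBER})*"        # one or more numerals joined by "+"
WORD = re.compile(rf"{SUM}(?:={SUM})?")  # optionally, a single "=" between two sums

def in_L(s: str) -> bool:
    return WORD.fullmatch(s) is not None

print(in_L("23+4=555"))  # True: well-formed, even though arithmetically false
print(in_L("=234=+"))    # False
```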
For finite languages, one can explicitly enumerate all well-formed words. For example, we can describe a language L as just L = {a, b, ab, cba}. The degenerate case of this construction is the empty language, which contains no words at all (L = ∅).
However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are an infinite number of finite-length words that can potentially be expressed: "a", "abb", "ababba", "aaababbbbaab", .... Therefore, formal languages are typically infinite, and describing an infinite formal language is not as simple as writing L = {a, b, ab, cba}. Here are some examples of formal languages:
Formal languages are used as tools in multiple disciplines. However, formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned with the study of various types of formalisms to describe languages. For instance, a language can be given as
Typical questions asked about such formalisms include:
Surprisingly often, the answer to these decision problems is "it cannot be done at all", or "it is extremely expensive" (with a characterization of how expensive). Therefore, formal language theory is a major application area of computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are widely used in practical applications.
Certain operations on languages are common. This includes the standard set operations, such as union, intersection, and complement. Another class of operation is the element-wise application of string operations.
Examples: suppose L₁ and L₂ are languages over some common alphabet Σ.
Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed under a particular operation when the operation, applied to languages in the class, always produces a language in the same class again. For instance, the context-free languages are known to be closed under union, concatenation, and intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract families of languages studies the most common closure properties of language families in their own right.
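A small illustration of the set-theoretic and element-wise operations just described, applied to two invented finite languages:

```python
L1 = {"a", "ab"}
L2 = {"b", ""}

print(L1 | L2)                          # union
print(L1 & L2)                          # intersection (empty here)
print({u + v for u in L1 for v in L2})  # element-wise concatenation L1·L2 -> {'a', 'ab', 'abb'}
```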
A compiler usually has two distinct components. A lexical analyzer, sometimes generated by a tool like lex, identifies the tokens of the programming language grammar, e.g. identifiers or keywords, numeric and string literals, punctuation and operator symbols, which are themselves specified by a simpler formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, sometimes generated by a parser generator like yacc, attempts to decide if the source program is syntactically valid, that is if it is well formed with respect to the programming language grammar for which the compiler was built.
Of course, compilers do more than just parse the source code – they usually translate it into some executable format. Because of this, a parser usually outputs more than a yes/no answer, typically an abstract syntax tree. This is used by subsequent stages of the compiler to eventually generate an executable containing machine code that runs directly on the hardware, or some intermediate code that requires a virtual machine to execute.
In mathematical logic, a formal theory is a set of sentences expressed in a formal language.
A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules, which may be interpreted as valid rules of inference, or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems FS and FS′ may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not another for instance).
A formal proof or derivation is a finite sequence of well-formed formulas (which may be interpreted as sentences, or propositions) each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be interpreted as true propositions.
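A minimal sketch of this idea, under invented assumptions (formulas are atoms or implication tuples, the axiom set is chosen only for the example, and modus ponens is the sole rule of inference), checks whether a sequence of formulas is a valid derivation:

```python
# Formulas are either atoms (strings) or implications ("->", antecedent,
# consequent). The axioms below are chosen purely for this example.
P, Q = "P", "Q"
AXIOMS = {P, ("->", P, Q)}

def is_valid_proof(lines: list) -> bool:
    # Each line must be an axiom or follow by modus ponens from earlier lines.
    derived = []
    for formula in lines:
        ok = formula in AXIOMS or any(
            earlier == ("->", other, formula)
            for earlier in derived for other in derived
        )
        if not ok:
            return False
        derived.append(formula)
    return True

proof = [P, ("->", P, Q), Q]   # P, P->Q, therefore Q (modus ponens)
print(is_valid_proof(proof))   # True: Q is a theorem of this toy system
print(is_valid_proof([Q]))     # False: Q is neither an axiom nor derivable here
```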
Formal languages are entirely syntactic in nature, but may be given semantics that give meaning to the elements of the language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language, and an interpretation assigns a meaning to each of the formulas—usually, a truth value.
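As a minimal sketch of such an interpretation — the tuple encoding of formulas and the connective names are assumptions made only for this example — the function below computes the truth value of a propositional formula compositionally from a truth assignment to its atoms; an assignment under which the formula comes out true is a model, in the sense described in the next paragraph.

```python
def evaluate(formula, interpretation: dict) -> bool:
    # Atomic formulas get their value directly from the interpretation;
    # compound formulas are evaluated by fixed compositional rules.
    if isinstance(formula, str):
        return interpretation[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], interpretation)
    if op == "and":
        return evaluate(args[0], interpretation) and evaluate(args[1], interpretation)
    if op == "or":
        return evaluate(args[0], interpretation) or evaluate(args[1], interpretation)
    raise ValueError(f"unknown connective: {op}")

formula = ("or", ("not", "rain"), ("and", "rain", "umbrella"))
print(evaluate(formula, {"rain": True, "umbrella": True}))   # True: a model of the formula
print(evaluate(formula, {"rain": True, "umbrella": False}))  # False: not a model
```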
The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as objects within mathematical structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived from the interpretation of its terms; a model for a formula is an interpretation of terms such that the formula becomes true. | [
{
"paragraph_id": 0,
"text": "In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules called a formal grammar.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The alphabet of a formal language consists of symbols, letters, or tokens that concatenate into strings called words. Words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar, which consists of its formation rules.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In computer science, formal languages are used among others as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The field of formal language theory studies primarily the purely syntactical aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In the 17th century, Gottfried Leibniz imagined and described the characteristica universalis, a universal and formal language which utilised pictographs. Later, Carl Friedrich Gauss investigated the problem of Gauss codes.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Gottlob Frege attempted to realize Leibniz's ideas, through a notational system first outlined in Begriffsschrift (1879) and more fully developed in his 2-volume Grundgesetze der Arithmetik (1893/1903). This described a \"formal language of pure language.\"",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In the first half of the 20th century, several developments were made with relevance to formal languages. Axel Thue published four papers relating to words and language between 1906 and 1914. The last of these introduced what Emil Post later termed 'Thue Systems', and gave an early example of an undecidable problem. Post would later use this paper as the basis for a 1947 proof \"that the word problem for semigroups was recursively insoluble\", and later devised the canonical system for the creation of formal languages.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 1907, Leonardo Torres Quevedo introduced a formal language for the description of mechanical drawings (mechanical devices), in Vienna. He published \"Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas\" (\"On a system of notations and symbols intended to facilitate the description of machines\"). Heinz Zemanek rated it as an equivalent to a programming language for the numerical control of machine tools.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Noam Chomsky devised an abstract representation of formal and natural languages, known as the Chomsky hierarchy. In 1959 John Backus developed the Backus-Naur form to describe the syntax of a high level programming language, following his work in the creation of FORTRAN. Peter Naur was the secretary/editor for the ALGOL60 Report in which he used Backus–Naur form to describe the Formal part of ALGOL60.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "An alphabet, in the context of formal languages, can be any set; its elements are called letters. An alphabet may contain an infinite number of elements; however, most definitions in formal language theory specify alphabets with a finite number of elements, and many results apply only to them. It often makes sense to use an alphabet in the usual sense of the word, or more generally any finite character encoding such as ASCII or Unicode.",
"title": "Words over an alphabet"
},
{
"paragraph_id": 10,
"text": "A word over an alphabet can be any finite sequence (i.e., string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ (using the Kleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, the empty word, which is often denoted by e, ε, λ or even Λ. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words. The result of concatenating a word with the empty word is the original word.",
"title": "Words over an alphabet"
},
{
"paragraph_id": 11,
"text": "In some applications, especially in logic, the alphabet is also known as the vocabulary and words are known as formulas or sentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor.",
"title": "Words over an alphabet"
},
{
"paragraph_id": 12,
"text": "A formal language L over an alphabet Σ is a subset of Σ, that is, a set of words over that alphabet. Sometimes the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of 'well-formed expressions'.",
"title": "Definition"
},
{
"paragraph_id": 13,
"text": "In computer science and mathematics, which do not usually deal with natural languages, the adjective \"formal\" is often omitted as redundant.",
"title": "Definition"
},
{
"paragraph_id": 14,
"text": "While formal language theory usually concerns itself with formal languages that are described by some syntactical rules, the actual definition of the concept \"formal language\" is only as above: a (possibly infinite) set of finite-length strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the intuitive concept of a \"language\", one described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being equipped with a formal grammar that describes it.",
"title": "Definition"
},
{
"paragraph_id": 15,
"text": "The following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}:",
"title": "Examples"
},
{
"paragraph_id": 16,
"text": "Under these rules, the string \"23+4=555\" is in L, but the string \"=234=+\" is not. This formal language expresses natural numbers, well-formed additions, and well-formed addition equalities, but it expresses only what they look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that \"0\" means the number zero, \"+\" means addition, \"23+4=555\" is false, etc.",
"title": "Examples"
},
{
"paragraph_id": 17,
"text": "For finite languages, one can explicitly enumerate all well-formed words. For example, we can describe a language L as just L = {a, b, ab, cba}. The degenerate case of this construction is the empty language, which contains no words at all (L = ∅).",
"title": "Examples"
},
{
"paragraph_id": 18,
"text": "However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are an infinite number of finite-length words that can potentially be expressed: \"a\", \"abb\", \"ababba\", \"aaababbbbaab\", .... Therefore, formal languages are typically infinite, and describing an infinite formal language is not as simple as writing L = {a, b, ab, cba}. Here are some examples of formal languages:",
"title": "Examples"
},
{
"paragraph_id": 19,
"text": "Formal languages are used as tools in multiple disciplines. However, formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned with the study of various types of formalisms to describe languages. For instance, a language can be given as",
"title": "Language-specification formalisms"
},
{
"paragraph_id": 20,
"text": "Typical questions asked about such formalisms include:",
"title": "Language-specification formalisms"
},
{
"paragraph_id": 21,
"text": "Surprisingly often, the answer to these decision problems is \"it cannot be done at all\", or \"it is extremely expensive\" (with a characterization of how expensive). Therefore, formal language theory is a major application area of computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are widely used in practical applications.",
"title": "Language-specification formalisms"
},
{
"paragraph_id": 22,
"text": "Certain operations on languages are common. This includes the standard set operations, such as union, intersection, and complement. Another class of operation is the element-wise application of string operations.",
"title": "Operations on languages"
},
{
"paragraph_id": 23,
"text": "Examples: suppose L 1 {\\displaystyle L_{1}} and L 2 {\\displaystyle L_{2}} are languages over some common alphabet Σ {\\displaystyle \\Sigma } .",
"title": "Operations on languages"
},
{
"paragraph_id": 24,
"text": "Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed under a particular operation when the operation, applied to languages in the class, always produces a language in the same class again. For instance, the context-free languages are known to be closed under union, concatenation, and intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract families of languages studies the most common closure properties of language families in their own right.",
"title": "Operations on languages"
},
{
"paragraph_id": 25,
"text": "A compiler usually has two distinct components. A lexical analyzer, sometimes generated by a tool like lex, identifies the tokens of the programming language grammar, e.g. identifiers or keywords, numeric and string literals, punctuation and operator symbols, which are themselves specified by a simpler formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, sometimes generated by a parser generator like yacc, attempts to decide if the source program is syntactically valid, that is if it is well formed with respect to the programming language grammar for which the compiler was built.",
"title": "Applications"
},
{
"paragraph_id": 26,
"text": "Of course, compilers do more than just parse the source code – they usually translate it into some executable format. Because of this, a parser usually outputs more than a yes/no answer, typically an abstract syntax tree. This is used by subsequent stages of the compiler to eventually generate an executable containing machine code that runs directly on the hardware, or some intermediate code that requires a virtual machine to execute.",
"title": "Applications"
},
{
"paragraph_id": 27,
"text": "In mathematical logic, a formal theory is a set of sentences expressed in a formal language.",
"title": "Applications"
},
{
"paragraph_id": 28,
"text": "A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules, which may be interpreted as valid rules of inference, or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems F S {\\displaystyle {\\mathcal {FS}}} and F S ′ {\\displaystyle {\\mathcal {FS'}}} may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not another for instance).",
"title": "Applications"
},
{
"paragraph_id": 29,
"text": "A formal proof or derivation is a finite sequence of well-formed formulas (which may be interpreted as sentences, or propositions) each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be interpreted as true propositions.",
"title": "Applications"
},
{
"paragraph_id": 30,
"text": "Formal languages are entirely syntactic in nature, but may be given semantics that give meaning to the elements of the language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language, and an interpretation assigns a meaning to each of the formulas—usually, a truth value.",
"title": "Applications"
},
{
"paragraph_id": 31,
"text": "The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as objects within mathematical structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived from the interpretation of its terms; a model for a formula is an interpretation of terms such that the formula becomes true.",
"title": "Applications"
}
]
| In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules called a formal grammar. The alphabet of a formal language consists of symbols, letters, or tokens that concatenate into strings called words. Words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar, which consists of its formation rules. In computer science, formal languages are used among others as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way. The field of formal language theory studies primarily the purely syntactical aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages. | 2001-08-27T00:41:01Z | 2023-11-20T15:51:13Z | [
"Template:About",
"Template:No",
"Template:Cite journal",
"Template:Harvtxt",
"Template:Refbegin",
"Template:ISBN",
"Template:Mathematical logic",
"Template:Expand section",
"Template:Main",
"Template:Reflist",
"Template:Cite book",
"Template:Cite web",
"Template:Webarchive",
"Template:Formal languages and grammars",
"Template:Authority control",
"Template:Short description",
"Template:Use dmy dates",
"Template:Yes",
"Template:NoteFoot",
"Template:Refend",
"Template:Springer",
"Template:NoteTag",
"Template:Mvar"
]
| https://en.wikipedia.org/wiki/Formal_language |
10,940 | Free to Choose | Free to Choose: A Personal Statement is a 1980 book by economists Milton and Rose D. Friedman, accompanied by a ten-part series broadcast on public television, that advocates free market principles. It was primarily a response to an earlier landmark book and television series The Age of Uncertainty, by the noted economist John Kenneth Galbraith.
Free to Choose: A Personal Statement maintains that the free market works best for all members of a society, provides examples of how the free market engenders prosperity, and argues that it can solve problems where other approaches have failed. Published in January 1980, the 297-page book contains 10 chapters. The book topped the United States best-seller list for five weeks.
PBS broadcast the programs, beginning in January 1980. The series was filmed at the invitation of Robert Chitester, the owner of WQLN-TV, and was based on a 15-part series of taped public lectures and question-and-answer sessions. The general format was that of Milton Friedman visiting and narrating a number of success and failure stories in history, which he attributes to free-market capitalism or the lack thereof (e.g., Hong Kong is commended for its free markets, while India is excoriated for relying on centralized planning, especially for its protection of its traditional textile industry). Following the primary show, Friedman would engage in discussion moderated by Robert McKenzie with a number of selected debaters drawn from trade unions, academia and the business community, such as Donald Rumsfeld (then of G.D. Searle & Company) and Frances Fox Piven of City University of New York. The interlocutors would offer objections to or support for the proposals put forward by Friedman, who would in turn respond. After the final episode, Friedman sat down for an interview with Lawrence Spivak.
Guest debaters included:
The series was rebroadcast in 1990 with Linda Chavez moderating the episodes. Arnold Schwarzenegger, George Shultz, Ronald Reagan, David D. Friedman, and Steve Allen, each give personal introductions for one episode. This time, after the documentary segment, Milton Friedman sits down with a single discussion participant to debate the points raised in the episode.
The Friedmans advocate laissez-faire economic policies, often criticizing interventionist government policies and their cost in personal freedoms and economic efficiency in the United States and abroad. They argue that international free trade has been restricted through tariffs and protectionism while domestic free trade and freedom have been limited through high taxation and regulation. They cite the 19th-century United Kingdom, the United States before the Great Depression, and modern Hong Kong as ideal examples of a minimalist economic policy. They contrast the economic growth of Japan after the Meiji Restoration and the economic stagnation of India after its independence from the British Empire, and argue that India has performed worse despite its superior economic potential due to its centralized planning. They argue that even countries with command economies, including the Soviet Union and Yugoslavia, have been forced to adopt limited market mechanisms in order to operate. The authors argue against government taxation on gas and tobacco and government regulation of the public school systems. The Friedmans argue that the Federal Reserve exacerbated the Great Depression by neglecting to prevent the decline of the money supply in the years leading up to it. They further argue that the American public falsely perceived the Depression to be a result of a failure of capitalism rather than the government, and that the Depression allowed the Federal Reserve Board to centralize its control of the monetary system despite its responsibility for it.
On the subject of welfare, the Friedmans argue that the United States has maintained a higher degree of freedom and productivity by avoiding the nationalizations and extensive welfare systems of Western European countries such as the United Kingdom and Sweden. However, they also argue that welfare practices since the New Deal under "the HEW empire" have been harmful. They argue that public assistance programs have become larger than originally envisioned and are creating "wards of the state" as opposed to "self-reliant individuals." They also argue that the Social Security System is fundamentally flawed, that urban renewal and public housing programs have contributed to racial inequality and diminished quality of low-income housing, and that Medicare and Medicaid are responsible for rising healthcare prices in the United States. They suggest completely replacing the welfare state with a negative income tax as a less harmful alternative.
The Friedmans also argue that declining academic performance in the United States is the result of increasing government control of the American education system tracing back to the 1840s, but suggest a voucher system as a politically feasible solution. They blame the 1970s recession and lower quality of consumer goods on extensive business regulations since the 1960s, and advocate abolishing the Food and Drug Administration, the Interstate Commerce Commission, the Consumer Product Safety Commission, Amtrak, and Conrail. They argue that the energy crisis would be resolved by abolishing the Department of Energy and price floors on crude oil. They recommend replacing the Environmental Protection Agency and environmental regulation with an effluent charge. They criticize labor unions for raising prices and lowering demand by enforcing high wage levels, and for contributing to unemployment by limiting jobs. They argue that inflation is caused by excessive government spending, the Federal Reserve's attempts to control interest rates, and full employment policy. They call for tighter control of Fed money supply despite the fact that it will result in a temporary period of high unemployment and low growth due to the interruption of the wage-price spiral. In the final chapter, they take note of recent current events that seem to suggest a return to free-market principles in academic thought and public opinion, and argue in favor of an "economic Bill of Rights" to cement the changes. | [
{
"paragraph_id": 0,
"text": "Free to Choose: A Personal Statement is a 1980 book by economists Milton and Rose D. Friedman, accompanied by a ten-part series broadcast on public television, that advocates free market principles. It was primarily a response to an earlier landmark book and television series The Age of Uncertainty, by the noted economist John Kenneth Galbraith.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Free to Choose: A Personal Statement maintains that the free market works best for all members of a society, provides examples of how the free market engenders prosperity, and maintains that it can solve problems where other approaches have failed. Published in January 1980, the 297 page book contains 10 chapters. The book was on top of the United States best sellers list for 5 weeks.",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "PBS broadcast the programs, beginning in January 1980. It was filmed at the invitation of Robert Chitester, the owner of WQLN-TV. It was based on a 15-part series of taped public lectures and question-and-answer sessions. The general format was that of Milton Friedman visiting and narrating a number of success and failure stories in history, which he attributes to free-market capitalism or the lack thereof (e.g., Hong Kong is commended for its free markets, while India is excoriated for relying on centralized planning especially for its protection of its traditional textile industry). Following the primary show, Friedman would engage in discussion moderated by Robert McKenzie with a number of selected debaters drawn from trade unions, academy and the business community, such as Donald Rumsfeld (then of G.D. Searle & Company) and Frances Fox Piven of City University of New York. The interlocutors would offer objections to or support for the proposals put forward by Friedman, who would in turn respond. After the final episode, Friedman sat down for an interview with Lawrence Spivak.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "Guest debaters included:",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "The series was rebroadcast in 1990 with Linda Chavez moderating the episodes. Arnold Schwarzenegger, George Shultz, Ronald Reagan, David D. Friedman, and Steve Allen, each give personal introductions for one episode. This time, after the documentary segment, Milton Friedman sits down with a single discussion participant to debate the points raised in the episode.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "The Friedmans advocate laissez-faire economic policies, often criticizing interventionist government policies and their cost in personal freedoms and economic efficiency in the United States and abroad. They argue that international free trade has been restricted through tariffs and protectionism while domestic free trade and freedom have been limited through high taxation and regulation. They cite the 19th-century United Kingdom, the United States before the Great Depression, and modern Hong Kong as ideal examples of a minimalist economic policy. They contrast the economic growth of Japan after the Meiji Restoration and the economic stagnation of India after its independence from the British Empire, and argue that India has performed worse despite its superior economic potential due to its centralized planning. They argue that even countries with command economies, including the Soviet Union and Yugoslavia, have been forced to adopt limited market mechanisms in order to operate. The authors argue against government taxation on gas and tobacco and government regulation of the public school systems. The Friedmans argue that the Federal Reserve exacerbated the Great Depression by neglecting to prevent the decline of the money supply in the years leading up to it. They further argue that the American public falsely perceived the Depression to be a result of a failure of capitalism rather than the government, and that the Depression allowed the Federal Reserve Board to centralize its control of the monetary system despite its responsibility for it.",
"title": "Positions advocated"
},
{
"paragraph_id": 6,
"text": "On the subject of welfare, the Friedmans argue that the United States has maintained a higher degree of freedom and productivity by avoiding the nationalizations and extensive welfare systems of Western European countries such as the United Kingdom and Sweden. However, they also argue that welfare practices since the New Deal under \"the HEW empire\" have been harmful. They argue that public assistance programs have become larger than originally envisioned and are creating \"wards of the state\" as opposed to \"self-reliant individuals.\" They also argue that the Social Security System is fundamentally flawed, that urban renewal and public housing programs have contributed to racial inequality and diminished quality of low-income housing, and that Medicare and Medicaid are responsible for rising healthcare prices in the United States. They suggest completely replacing the welfare state with a negative income tax as a less harmful alternative.",
"title": "Positions advocated"
},
{
"paragraph_id": 7,
"text": "The Friedmans also argue that declining academic performance in the United States is the result of increasing government control of the American education system tracing back to the 1840s, but suggest a voucher system as a politically feasible solution. They blame the 1970s recession and lower quality of consumer goods on extensive business regulations since the 1960s, and advocate abolishing the Food and Drug Administration, the Interstate Commerce Commission, the Consumer Product Safety Commission, Amtrak, and Conrail. They argue that the energy crisis would be resolved by abolishing the Department of Energy and price floors on crude oil. They recommend replacing the Environmental Protection Agency and environmental regulation with an effluent charge. They criticize labor unions for raising prices and lowering demand by enforcing high wage levels, and for contributing to unemployment by limiting jobs. They argue that inflation is caused by excessive government spending, the Federal Reserve's attempts to control interest rates, and full employment policy. They call for tighter control of Fed money supply despite the fact that it will result in a temporary period of high unemployment and low growth due to the interruption of the wage-price spiral. In the final chapter, they take note of recent current events that seem to suggest a return to free-market principles in academic thought and public opinion, and argue in favor of an \"economic Bill of Rights\" to cement the changes.",
"title": "Positions advocated"
}
]
| Free to Choose: A Personal Statement is a 1980 book by economists Milton and Rose D. Friedman, accompanied by a ten-part series broadcast on public television, that advocates free market principles. It was primarily a response to an earlier landmark book and television series The Age of Uncertainty, by the noted economist John Kenneth Galbraith. | 2022-10-31T20:56:30Z | [
"Template:Div col end",
"Template:Milton Friedman",
"Template:Short description",
"Template:Use mdy dates",
"Template:Other uses of",
"Template:Infobox book",
"Template:Div col"
]
| https://en.wikipedia.org/wiki/Free_to_Choose |
|
10,945 | Albert Park Circuit | The Albert Park Circuit is a motorsport street circuit around Albert Park Lake in the suburb of Albert Park in Melbourne. It is used annually as a circuit for the Formula One Australian Grand Prix, the supporting Supercars Championship Melbourne SuperSprint and other associated support races. The circuit has an FIA Grade 1 license.
Although the entire track consists of roads that are normally open to the public, each sector includes medium- to high-speed characteristics more commonly associated with dedicated racetracks, facilitated by grass and gravel run-off safety zones that are reconstructed annually. However, the circuit also has the enclosed nature of a street circuit due to concrete barriers built annually along the Lakeside Drive curve in particular, where run-off is not available due to the proximity of the lake shore.
The circuit uses everyday sections of road that circle Albert Park Lake, a small man-altered lake (originally a large lagoon formed as part of the ancient Yarra River course) just south of the Central Business District of Melbourne. The road sections that are used were rebuilt before the inaugural event in 1996 to ensure consistency and smoothness. As a result, compared to other circuits held on public roads, the Albert Park track has quite a smooth surface. Before 2007 there were only a few other places on the Formula 1 calendar with a body of water close to the track; many of the newer tracks, such as Valencia, Singapore and Abu Dhabi, are close to a body of water.
The course is considered to be quite fast and relatively easy to drive, drivers having commented that the consistent placement of corners allows them to easily learn the circuit and achieve competitive times. However, the flat terrain around the lake, coupled with a track design that features few true straights, means that the track is not conducive to overtaking or easy spectating unless in possession of a grandstand seat.
Each year, most of the trackside fencing, pedestrian overpasses, grandstands, and other motorsport infrastructure are erected approximately two months before the Grand Prix weekend and removed within 6 weeks after the event. The land around the circuit (including a large aquatic centre, a golf course, a Lakeside Stadium, some restaurants, and rowing boathouses) has restricted access during that entire period. Dissent is still prevalent among nearby residents and users of those other facilities, and some still maintain a silent protest against the event. Nevertheless, the event is reasonably popular in Melbourne and Australia (with a large European population and a general interest in motorsport). Middle Park, the home of South Melbourne FC was demolished in 1994 due to expansion at Albert Park.
The Grand Prix regularly draws crowds of over 270,000 spectators, with the 2022 event drawing a record crowd of 419,114, including 128,294 on the main raceday. There has never been a night race at Albert Park, although the 2009 and 2010 events both started at 5:00 p.m. local time. The current contract for the Grand Prix at the circuit concludes in 2035.
Following the postponement of the Australian Grand Prix in 2021 due to the COVID-19 pandemic, the track underwent layout changes, the most notable of which was the modification of the turn 9–10 complex from a heavy right-left corner to a fast-sweeping right-left corner into turns 11 and 12. Further modifications included the widening of the pit lane by 2 m (2.2 yd) and the reprofiling of turn 13. Some corners were also widened, such as turn 1, turn 3, turn 6, turn 7, and turn 15; these modifications are expected to reduce qualifying lap times by as much as five seconds.
During the nine months of the year when the track is not required for Grand Prix preparation or the race weekend, most of the track can be driven by ordinary street-registered vehicles either clockwise or anti-clockwise.
Only the sections between turns 3, 4, and 5, then 5 and 6, differ significantly from the race track configuration. Turn 4 is replaced by a car park access road running directly from turns 3 to 5. Between turns 5 and 6, the road is blocked. It is possible to drive from turn 5 on to Albert Road and back on to the track at turn 7 though three sets of lights control the flow of this option. The only set of lights on the actual track is halfway between turns 12 and 13, where drivers using Queens Road are catered for. The chicanes at turns 11 and 12 are considerably more open than that used in the Grand Prix, using the escape roads. Turn 9 is also a car park and traffic is directed down another escape road.
The speed limit is generally 40 km/h (25 mph), while some short sections have a speed limit of 50 km/h (31 mph), which is still slower than an F1 car under pit lane speed restrictions. The back of the track, turns 7 to 13 inclusive, is known as Lakeside Drive. Double lines separate the two-way traffic along most of Lakeside Drive with short road islands approximately every 50 m (55 yd) which means overtaking is illegal here. Black Swans live and breed in Albert Park, and frequently cross the road causing traffic delays, sometimes with up to five cygnets (young swans).
Approximately 80% of the track edge is lined with short parkland-style chain-linked fencing leaving normal drivers less room for error than F1 drivers have during race weekend. There is however substantial shoulder room between the outside of each lane and the fencing, which is used as parking along Aughtie Drive during the other nine months.
Prior to World War II, attempts were made to use Albert Park for motor racing. The first was in 1934 but failed due to opposition, and a second attempt for a motorcycle race in 1937 similarly failed. Finally in 1953 the Light Car Club of Australia were able to secure use of the circuit for that year's Australian Grand Prix.
Albert Park is the only venue to host the Australian Grand Prix in both World Championship and non-World Championship formats with an earlier configuration of the current circuit used for the race on two occasions during the 1950s. During this time racing was conducted in an anti-clockwise direction as opposed to the current circuit which runs clockwise.
Known as the Albert Park Circuit, the original 3.125 mi (5.029 km) course hosted a total of six race meetings:
The November 1958 meeting was the last on the original incarnation of the circuit, as it closed shortly after.
As of April 2023, the fastest official race lap records at the Albert Park Circuit are listed as: | [
{
"paragraph_id": 0,
"text": "The Albert Park Circuit is a motorsport street circuit around Albert Park Lake in the suburb of Albert Park in Melbourne. It is used annually as a circuit for the Formula One Australian Grand Prix, the supporting Supercars Championship Melbourne SuperSprint and other associated support races. The circuit has an FIA Grade 1 license.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Although the entire track consists of normally public roads, each sector includes medium to high-speed characteristics more commonly associated with dedicated racetracks facilitated by grass and gravel run-off safety zones that are reconstructed annually. However, the circuit also has characteristics of a street circuit's enclosed nature due to concrete barriers annually built along the Lakeside Drive curve, in particular, where run-off is not available due to the proximity of the lake shore.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The circuit uses everyday sections of road that circle Albert Park Lake, a small man-altered lake (originally a large lagoon formed as part of the ancient Yarra River course) just south of the Central Business District of Melbourne. The road sections that are used were rebuilt before the inaugural event in 1996 to ensure consistency and smoothness. As a result, compared to other circuits that are held on public roads, the Albert Park track has quite a smooth surface. Before 2007 there existed only a few other places on the Formula 1 calendar with a body of water close to the track. Many of the new tracks, such as Valencia, Singapore and Abu Dhabi are close to a body of water.",
"title": "Design"
},
{
"paragraph_id": 3,
"text": "The course is considered to be quite fast and relatively easy to drive, drivers having commented that the consistent placement of corners allows them to easily learn the circuit and achieve competitive times. However, the flat terrain around the lake, coupled with a track design that features few true straights, means that the track is not conducive to overtaking or easy spectating unless in possession of a grandstand seat.",
"title": "Design"
},
{
"paragraph_id": 4,
"text": "Each year, most of the trackside fencing, pedestrian overpasses, grandstands, and other motorsport infrastructure are erected approximately two months before the Grand Prix weekend and removed within 6 weeks after the event. The land around the circuit (including a large aquatic centre, a golf course, a Lakeside Stadium, some restaurants, and rowing boathouses) has restricted access during that entire period. Dissent is still prevalent among nearby residents and users of those other facilities, and some still maintain a silent protest against the event. Nevertheless, the event is reasonably popular in Melbourne and Australia (with a large European population and a general interest in motorsport). Middle Park, the home of South Melbourne FC was demolished in 1994 due to expansion at Albert Park.",
"title": "Design"
},
{
"paragraph_id": 5,
"text": "The Grand Prix regularly draws crowds of over 270,000 spectators, with the 2022 drawing a record crowd of 419,114, including 128,294 on the main raceday. There has never been a night race at Albert Park, although the 2009 and 2010 events both started at 5:00 p.m. local time. The current contract for the Grand Prix at the circuit concludes in 2035.",
"title": "Design"
},
{
"paragraph_id": 6,
"text": "Following the postponement of the Australian Grand Prix in 2021, due to the COVID-19 pandemic, the track underwent layout changes, the most notable part was the modification of the turn 9–10 complex from a heavy right-left corner to a fast-sweeping right-left corner into turns 11 and 12. Further modifications included the widening of the pit lane by 2 m (2.2 yd) and the reprofiling of turn 13. Also, some corners were widened such as turn 1, turn 3, turn 6, turn 7, and turn 15; and it is expected that these modifications will reduce qualifying lap times by as much as five seconds.",
"title": "Design"
},
{
"paragraph_id": 7,
"text": "During the nine months of the year when the track is not required for Grand Prix preparation or the race weekend, most of the track can be driven by ordinary street-registered vehicles either clockwise or anti-clockwise.",
"title": "Everyday access"
},
{
"paragraph_id": 8,
"text": "Only the sections between turns 3, 4, and 5, then 5 and 6, differ significantly from the race track configuration. Turn 4 is replaced by a car park access road running directly from turns 3 to 5. Between turns 5 and 6, the road is blocked. It is possible to drive from turn 5 on to Albert Road and back on to the track at turn 7 though three sets of lights control the flow of this option. The only set of lights on the actual track is halfway between turns 12 and 13, where drivers using Queens Road are catered for. The chicanes at turns 11 and 12 are considerably more open than that used in the Grand Prix, using the escape roads. Turn 9 is also a car park and traffic is directed down another escape road.",
"title": "Everyday access"
},
{
"paragraph_id": 9,
"text": "The speed limit is generally 40 km/h (25 mph), while some short sections have a speed limit of 50 km/h (31 mph), which is still slower than an F1 car under pit lane speed restrictions. The back of the track, turns 7 to 13 inclusive, is known as Lakeside Drive. Double lines separate the two-way traffic along most of Lakeside Drive with short road islands approximately every 50 m (55 yd) which means overtaking is illegal here. Black Swans live and breed in Albert Park, and frequently cross the road causing traffic delays, sometimes with up to five cygnets (young swans).",
"title": "Everyday access"
},
{
"paragraph_id": 10,
"text": "Approximately 80% of the track edge is lined with short parkland-style chain-linked fencing leaving normal drivers less room for error than F1 drivers have during race weekend. There is however substantial shoulder room between the outside of each lane and the fencing, which is used as parking along Aughtie Drive during the other nine months.",
"title": "Everyday access"
},
{
"paragraph_id": 11,
"text": "Prior to World War II, attempts were made to use Albert Park for motor racing. The first was in 1934 but failed due to opposition, and a second attempt for a motorcycle race in 1937 similarly failed. Finally in 1953 the Light Car Club of Australia were able to secure use of the circuit for that year's Australian Grand Prix.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Albert Park is the only venue to host the Australian Grand Prix in both World Championship and non-World Championship formats with an earlier configuration of the current circuit used for the race on two occasions during the 1950s. During this time racing was conducted in an anti-clockwise direction as opposed to the current circuit which runs clockwise.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Known as the Albert Park Circuit, the original 3.125 mi (5.029 km) course hosted a total of six race meetings:",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The November 1958 meeting was the last on the original incarnation of the circuit, as it closed shortly after.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "As of April 2023, the fastest official race lap records at the Albert Park Circuit are listed as:",
"title": "Race lap records"
}
]
| The Albert Park Circuit is a motorsport street circuit around Albert Park Lake in the suburb of Albert Park in Melbourne. It is used annually as a circuit for the Formula One Australian Grand Prix, the supporting Supercars Championship Melbourne SuperSprint and other associated support races. The circuit has an FIA Grade 1 license. Although the entire track consists of normally public roads, each sector includes medium to high-speed characteristics more commonly associated with dedicated racetracks facilitated by grass and gravel run-off safety zones that are reconstructed annually. However, the circuit also has characteristics of a street circuit's enclosed nature due to concrete barriers annually built along the Lakeside Drive curve, in particular, where run-off is not available due to the proximity of the lake shore. | 2001-08-27T03:34:43Z | 2023-11-26T10:21:06Z | [
"Template:Formula 2 circuits",
"Template:Australian GT circuits",
"Template:Melbourne landmarks",
"Template:Use Australian English",
"Template:Flagicon",
"Template:Cite web",
"Template:Cite magazine",
"Template:Cite book",
"Template:Commons category",
"Template:Porsche Supercup circuits",
"Template:V8 Supercar tracks",
"Template:Short description",
"Template:Use dmy dates",
"Template:Motorsport venue",
"Template:Cvt",
"Template:Convert",
"Template:Reflist",
"Template:Formula One circuits",
"Template:FIA Formula 3 circuits",
"Template:S5000 circuits"
]
| https://en.wikipedia.org/wiki/Albert_Park_Circuit |
10,946 | Monaco Grand Prix | The Monaco Grand Prix (French: Grand Prix de Monaco) is a Formula One motor racing event held annually on the Circuit de Monaco, in late May or early June. Run since 1929, it is widely considered to be one of the most important and prestigious automobile races in the world, and is one of the races—along with the Indianapolis 500 and the 24 Hours of Le Mans—that form the Triple Crown of Motorsport. The circuit has been called "an exceptional location of glamour and prestige". The Formula One event is usually held on the last weekend of May and is known as one of the largest weekends in motor racing, as the Formula One race occurs on the same Sunday as the Indianapolis 500 (IndyCar Series) and the Coca-Cola 600 (NASCAR Cup Series).
The race is held on a narrow course laid out in the streets of Monaco, with many elevation changes and tight corners as well as the tunnel, making it one of the most demanding circuits in Formula One. In spite of the relatively low average speeds, the Monaco circuit is a dangerous place to race due to how narrow the track is, and the race often involves the intervention of a safety car. It is the only Grand Prix that does not adhere to the FIA's mandated 305-kilometre (190-mile) minimum race distance for F1 races.
The first Monaco Grand Prix took place on 14 April 1929, and the race eventually became part of the pre-Second World War European Championship and was included in the first World Championship of Drivers in 1950. It was twice designated the European Grand Prix, in 1955 and 1963, when this title was an honorary designation given each year to one Grand Prix race in Europe. Graham Hill was known as "Mr. Monaco" due to his five Monaco wins in the 1960s. Ayrton Senna won the race more times than any other driver, with six victories, winning five races consecutively between 1989 and 1993.
Like many European races, the Monaco Grand Prix predates the current World Championship. The principality's first Grand Prix was organised in 1929 by Antony Noghès, under the auspices of Prince Louis II, through the Automobile Club de Monaco (ACM), of which he was president. The ACM organised the Rallye Automobile Monte Carlo, and in 1928 applied to the Association Internationale des Automobiles Clubs Reconnus (AIACR), the international governing body of motorsport, to be upgraded from a regional French club to full national status. Their application was refused due to the lack of a major motorsport event held wholly within Monaco's boundaries. The rally could not be considered, as it mostly used the roads of other European countries.
To attain full national status, Noghès proposed the creation of an automobile Grand Prix in the streets of Monte Carlo. He obtained the official sanction of Prince Louis II and the support of Monégasque Grand Prix driver Louis Chiron. Chiron thought Monaco's topography was well-suited to setting up a race track.
The first race, held on 14 April 1929, was won by William Grover-Williams (using the pseudonym "Williams"), driving a works Bugatti Type 35B. It was an invitation-only event, but not all of those who were invited decided to attend. The leading Maserati and Alfa Romeo drivers decided not to compete, but Bugatti was well represented. Mercedes sent their leading driver, Rudolf Caracciola. Starting fifteenth, Caracciola drove a fighting race, taking his SSK into the lead before wasting 4½ minutes on refuelling and a tyre change to finish second. Another driver who competed using a pseudonym was "Georges Philippe", the Baron Philippe de Rothschild. Chiron was unable to compete, having a prior commitment to compete in the Indianapolis 500.
Caracciola's SSK was refused permission to race the following year, but Chiron did compete (in the works Bugatti Type 35C), when he was beaten by privateer René Dreyfus and his Bugatti Type 35B, and finished second. Chiron took victory in the 1931 race driving a Bugatti. As of 2023, he remains the only native of Monaco to have won the event.
The race quickly grew in importance after its inception. Because of the high number of races which were being termed 'Grands Prix', the AIACR formally recognised the most important race of each of its affiliated national automobile clubs as International Grands Prix, or Grandes Épreuves, and in 1933 Monaco was ranked as such alongside the French, Belgian, Italian, and Spanish Grands Prix. That year's race was the first Grand Prix in which grid positions were decided, as they are now, by practice time rather than the established method of balloting. The race saw Achille Varzi and Tazio Nuvolari exchange the lead many times before the race settled in Varzi's favour on the final lap when Nuvolari's car caught fire.
The race became a round of the new European Championship in 1936, when stormy weather and a broken oil line led to a series of crashes, eliminating the Mercedes-Benzes of Chiron, Fagioli, and von Brauchitsch, as well as Bernd Rosemeyer's Typ C for newcomer Auto Union; Rudolf Caracciola, proving the truth of his nickname, Regenmeister (Rainmaster), went on to win. In 1937, von Brauchitsch duelled Caracciola before coming out on top. It was the last prewar Grand Prix at Monaco: in 1938, the lack of profits for organisers and demands for nearly £500 (approximately £34,000 adjusted for inflation to 2021) in appearance money per top entrant led the AIACR to cancel the event, while the looming war overtook it in 1939, and the Second World War ended organised racing in Europe until 1945.
Racing in Europe started again on 9 September 1945 at the Bois de Boulogne Park in the city of Paris, four months and one day after the end of the war in Europe. However, the Monaco Grand Prix was not run between 1945 and 1947 due to financial reasons. In 1946, a new premier racing category, Grand Prix, was defined by the Fédération Internationale de l'Automobile (FIA), the successor of the AIACR, based on the pre-war voiturette class. A Monaco Grand Prix was run to this formula in 1948, won by the future world champion Nino Farina in a Maserati 4CLT.
The 1949 event was cancelled due to the death of Prince Louis II; it was included in the new Formula One World Drivers' Championship the following year. The race provided future five-time world champion Juan Manuel Fangio with his first win in a World Championship race, as well as third place for the 51-year-old Louis Chiron, his best result in the World Championship era. However, there was no race in 1951 due to budgetary concerns and a lack of regulations in the sport. 1952 was the first of the two years in which the World Drivers' Championship was run to less powerful Formula Two regulations. The race was run to sports car rules instead, and it did not form part of the World Championship.
No races were held in 1953 or 1954 because the car regulations were not finalized.
The Monaco Grand Prix returned in 1955, again as part of the Formula One World Championship, and this would begin a streak of 64 consecutive years in which the race was held. In the 1955 race, Maurice Trintignant won in Monte Carlo for the first time and Chiron again scored points and at 56 became the oldest driver to compete in a Formula One Grand Prix. It was not until 1957, when Fangio won again, that the Grand Prix saw a double winner. Between 1954 and 1961 Fangio's former Mercedes colleague, Stirling Moss, went one better, as did Trintignant, who won the race again in 1958 driving a Cooper. The 1961 race saw Moss fend off three works Ferrari 156s in a year-old privateer Rob Walker Racing Team Lotus 18 to take his third Monaco victory.
Britain's Graham Hill won the race five times in the 1960s and became known as "King of Monaco" and "Mr. Monaco". He first won in 1963, and then won the next two years. In the 1965 race, he took pole position and led from the start, but went up an escape road on lap 25 to avoid hitting a slow backmarker. Re-joining in fifth place, Hill set several new lap records on the way to winning. The race was also notable for Jim Clark's absence (he was participating in the Indianapolis 500), and for Paul Hawkins's Lotus ending up in the harbour. Hill's teammate, Briton Jackie Stewart, won in 1966 and New Zealander Denny Hulme won in 1967, but Hill won the next two years, the 1969 event being his final Formula One championship victory, by which time he was a double Formula One world champion.
By the start of the 1970s, efforts by Jackie Stewart saw several Formula One events cancelled because of safety concerns. For the 1969 event, Armco barriers were placed at specific points for the first time in the circuit's history. Before that, the circuit's conditions were (aside from the removal of people's production cars parked on the side of the road) virtually identical to everyday road use. If a driver went off, he had a chance to crash into whatever was next to the track (buildings, trees, lamp posts, glass windows, and even a train station), and in Alberto Ascari's and Paul Hawkins's cases, the harbour water, because the concrete road the course used had no Armco to protect the drivers from going off the track and into the Mediterranean. The circuit gained more Armco in specific points for the next two races, and by 1972, the circuit was almost completely Armco-lined. For the first time in its history, the Monaco circuit was altered in 1972, as the pits were moved next to the waterfront straight between the chicane and Tabac, and the chicane was moved further forward right before Tabac, becoming the junction point between the pits and the course. The course was changed again for the 1973 race. The Rainier III Nautical Stadium was constructed where the straight that went behind the pits was, and the circuit introduced a double chicane that went around the new swimming pool (this chicane complex is known today as "Swimming Pool"). This created space for a whole new pit facility, and in 1976 the course was altered yet again; the Sainte Devote corner was made slower and a chicane was placed right before the pit straight.
By the early 1970s, as Brabham team owner Bernie Ecclestone started to marshal the collective bargaining power of the Formula One Constructors Association (FOCA), Monaco was prestigious enough to become an early bone of contention. Historically, the number of cars permitted in a race was decided by the race organiser, in this case the ACM, which had always set a low number of around 16. In 1972, Ecclestone started to negotiate deals which relied on FOCA guaranteeing at least 18 entrants for every race. A stand-off over this issue left the 1972 race in jeopardy until the ACM gave in and agreed that 26 cars could participate – the same number permitted at most other circuits. Two years later, in 1974, the ACM got the numbers back down to 18.
Because of its tight confines, slow average speeds, and punishing nature, Monaco has often thrown up unexpected results. In the 1982 race, René Arnoux led the first 15 laps before retiring. Alain Prost then led until four laps from the end, when he spun off on the wet track, hit the barriers and lost a wheel, giving Riccardo Patrese the lead. Patrese himself spun with only a lap and a half to go, letting Didier Pironi through to the front, followed by Andrea de Cesaris. On the last lap, Pironi ran out of fuel in the tunnel, but De Cesaris also ran out of fuel before he could overtake. In the meantime, Patrese had bump-started his car and went through to score his first Grand Prix win.
In 1983, the ACM became entangled in the disagreements between Fédération Internationale du Sport Automobile (FISA) and FOCA. The ACM, with the agreement of Bernie Ecclestone, negotiated an individual television rights deal with ABC in the United States. This broke an agreement enforced by FISA for a single central negotiation of television rights. Jean-Marie Balestre, president of FISA, announced that the Monaco Grand Prix would not form part of the Formula One world championship in 1985. The ACM fought their case in the French courts. They won the case and the race was eventually reinstated.
For the decade from 1984 to 1993, the race was won by only two drivers, arguably the two best drivers in Formula One at the time – Frenchman Alain Prost and Brazilian Ayrton Senna. Prost, already a winner of the support race for Formula Three cars in 1979, took his first Monaco win at the 1984 race. The race started 45 minutes late after heavy rain. Prost led briefly before Nigel Mansell overtook him on lap 11. Mansell crashed out five laps later, letting Prost back into the lead. On lap 27, Prost led from Ayrton Senna's Toleman and Stefan Bellof's Tyrrell. Senna was catching Prost, and Bellof was catching both of them in the only naturally aspirated car in the race. However, on lap 31, the race was controversially stopped due to conditions deemed to be undriveable. Later, FISA fined the clerk of the course, Jacky Ickx, $6,000 and suspended his licence for not consulting the stewards before stopping the race. The drivers received only half of the points that would usually be awarded, as the race had been stopped before two-thirds of the intended race distance had been completed.
Prost won again in 1985 after polesitter Senna retired with a blown Renault engine in his Lotus, having over-revved it at the start. Michele Alboreto in the Ferrari retook the lead twice, but he went off the track at Sainte Devote, where Brazilian Nelson Piquet and Italian Riccardo Patrese had crashed heavily only a few laps previously, leaving oil and debris on the track. Prost passed Alboreto, who re-passed the Frenchman but then punctured a tyre on bodywork debris from the Piquet/Patrese accident, dropping him to fourth. Alboreto recovered past his countrymen Andrea de Cesaris and Elio de Angelis, but finished second behind Prost. Prost dominated the 1986 race from pole position, by which time the Nouvelle Chicane had been changed on the grounds of safety.
Senna holds the record for the most victories in Monaco, with six, including five consecutive wins between 1989 and 1993, as well as eight podium finishes in ten starts. His 1987 win was the first time a car with an active suspension had won a Grand Prix. He won this race after Briton Nigel Mansell in a Williams-Honda went out with a broken exhaust. His win was very popular with the people of Monaco, and when he was arrested on the Monday following the race for riding a motorcycle without wearing a helmet, he was released by the officers after they realised who he was. Senna dominated 1988 and was able to get ahead of his teammate Prost while the Frenchman was held up for most of the race by Austrian Gerhard Berger in a Ferrari. By the time Prost got past Berger, he pushed as hard as he could and set a lap some 6 seconds faster than Senna's; Senna then set 2 fastest laps, and while pushing as hard as possible, he touched the barrier at the Portier corner and crashed into the Armco separating the road from the Mediterranean. Senna was so upset that he went back to his Monaco flat and was not heard from until the evening. Prost went on to win for the fourth time.
Senna dominated 1989 while Prost was stuck behind backmarker René Arnoux and others; the Brazilian also dominated 1990 and 1991. At the 1992 event Nigel Mansell, who had won all five races held to that point in the season, took pole and dominated the race in his Williams FW14B-Renault. However, with seven laps remaining, Mansell suffered a loose wheel nut and was forced into the pits, emerging behind Senna's McLaren-Honda, who was on worn tyres. Mansell, on fresh tyres, set a lap record almost two seconds quicker than Senna's and closed from 5.2 to 1.9 seconds in only two laps. The pair duelled around Monaco for the final four laps but Mansell could find no way past, finishing just two-tenths of a second behind the Brazilian. It was Senna's fifth win at Monaco, equalling Graham Hill's record. Senna had a poor start to the 1993 event, crashing in practice and qualifying 3rd behind pole-sitter Prost and the rising German star Michael Schumacher. Both of them beat Senna to the first corner, but Prost had to serve a time penalty for jumping the start and Schumacher retired after suspension problems, so Senna took his sixth win to break Graham Hill's record for most wins at the Monaco Grand Prix. Runner-up Damon Hill commented, "If my father was around now, he would be the first to congratulate Ayrton."
The 1994 race was an emotional and tragic affair. It came two weeks after the race at Imola in which Austrian Roland Ratzenberger and Ayrton Senna both died in crashes on successive days. During the Monaco event, Austrian Karl Wendlinger had an accident in his Sauber in the tunnel; he went into a coma and was to miss the rest of the season. The German Michael Schumacher won the 1994 Monaco event. Schumacher also won the 1995 event. The 1996 race saw Michael Schumacher take pole position before crashing out on the first lap after being overtaken by Damon Hill. Hill led the first 40 laps before his engine expired in the tunnel. Jean Alesi took the lead but suffered suspension failure 20 laps later. Olivier Panis, who started in 14th place, moved into the lead and stayed there until the end of the race, being pushed all the way by David Coulthard. It was Panis's only win, and the last for his Ligier team. Only three cars crossed the finish line, but seven were classified.
Seven-time world champion Schumacher would eventually win the race five times, matching Graham Hill's record. In his appearance at the 2006 event, he attracted criticism when, while provisionally holding pole position and with the qualifying session drawing to a close, he stopped his car at the Rascasse hairpin, blocking the track and obliging competitors to slow down. Although Schumacher claimed it was the unintentional result of a genuine car failure, the FIA disagreed and he was sent to the back of the grid.
In July 2010, Bernie Ecclestone announced that a 10-year deal had been reached with the race organisers, keeping the race on the calendar until at least 2020.
Due to the COVID-19 pandemic, the FIA announced the postponement of the 2020 Monaco Grand Prix, along with the two other races scheduled for May 2020, to help prevent the spread of the virus. However, later the same day the Automobile Club de Monaco confirmed that the Grand Prix was instead cancelled, making 2020 the first time the Grand Prix had not been run since 1954. It returned in 2021, on 23 May, when Max Verstappen won his first Monaco Grand Prix. The 2022 event saw the Monégasque driver Charles Leclerc of Scuderia Ferrari achieve his first Monaco Grand Prix pole position at the Circuit de Monaco (he had taken pole the previous year but could not start due to driveshaft failure). However, a critical strategic error dropped Leclerc to fourth, and Verstappen's teammate Sergio Pérez won the race. The race was delayed by heavy rain; two formation laps were completed before the start procedure was suspended and further delayed by an hour from its intended 15:00 local time start. Together with a red flag caused by a heavy crash for Mick Schumacher, the delay cut the number of laps completed from the intended 78 to 64.
In September 2022 the Grand Prix signed a new race contract to remain on the F1 calendar until the 2025 season.
The Circuit de Monaco consists of the city streets of Monte Carlo and La Condamine, which includes the famous harbour. It is unique in having been held on the same circuit every time it has been run over such a long period – only the Italian Grand Prix, which has been held at Autodromo Nazionale Monza during every Formula One regulated year except 1980, has a similarly lengthy and close relationship with a single circuit.
The race circuit has many elevation changes, tight corners, and a narrow course that makes it one of the most demanding tracks in Formula One racing. As of 2022, two drivers have crashed and ended up in the harbour, the most famous being Alberto Ascari in 1955. Despite the fact that the course has had minor changes several times during its history, it is still considered the ultimate test of driving skills in Formula One, and if it were not already an existing Grand Prix, it would not be permitted to be added to the schedule for safety reasons. Even in 1929, La Vie Automobile magazine offered the opinion that "Any respectable traffic system would have covered the track with «Danger» sign posts left, right and centre".
Triple Formula One champion Nelson Piquet was fond of saying that racing at Monaco was "like trying to cycle round your living room", but added that "a win here was worth two anywhere else".
Notably, the course includes a tunnel. The contrast of daylight and gloom when entering/exiting the tunnel presents "challenges not faced elsewhere", as the drivers have to "adjust their vision as they emerge from the tunnel at the fastest point of the track and brake for the chicane in the daylight".
The fastest-ever qualifying lap was set by Lewis Hamilton in qualifying (Q3) for the 2019 Monaco Grand Prix, at a time of 1:10.166.
During the Grand Prix weekend, spectators crowd around the Monaco Circuit. A number of temporary grandstands are built around the circuit, mostly around the harbour area. Rich and famous spectators often arrive on their boats and yachts through the harbour. Balconies around Monaco become viewing areas for the race as well, and many hotels and residents cash in on their bird's-eye views of the race.
The Monaco Grand Prix is organised each year by the Automobile Club de Monaco which also runs the Monte Carlo Rally and previously ran the Junior Monaco Kart Cup.
The Monaco Grand Prix differs in several ways from other Grands Prix. The practice session for the race was traditionally held on the Thursday preceding the race instead of Friday, which allowed the streets to be opened to the public again on Friday. From the 2022 event onwards the first two Formula One practice sessions have been held on Friday, bringing the Formula One running schedule in line with other Grands Prix, although support races are still run on Thursday. Until the late 1990s the race started at 3:30 p.m. local time – an hour and a half later than other European Formula One races. In recent years the start time has fallen in line with the other Formula One races for the convenience of television viewers. The event was also traditionally held in the week of Ascension Day. For many years, the number of cars admitted to each Grand Prix was at the discretion of the race organisers – Monaco had the smallest grids, ostensibly because of its narrow and twisting track. Only 18 cars were permitted to start the 1975 Monaco Grand Prix, compared to 23 to 26 cars at all other rounds that year.
Erecting the circuit takes six weeks, and removing it after the race takes three weeks. Until 2017, there was no proper podium at the race. Instead, a section of the track was closed after the race to act as parc fermé, a place where the cars are held for official inspection. The first three drivers in the race left their cars there and walked directly to the royal box, where the 'podium' ceremony was held, a custom particular to the race. The trophies were handed out before the national anthems of the winning driver and team were played, as opposed to other Grands Prix, where the anthems are played first.
The Monaco Grand Prix is widely considered to be one of the most important and prestigious automobile races in the world alongside the Indianapolis 500 and the 24 Hours of Le Mans. These three races are considered to form a Triple Crown of the three most famous motor races in the world. As of 2023, Graham Hill is the only driver to have won the Triple Crown, by winning all three races. The practice session for Monaco overlaps with that for the Indianapolis 500, and the races themselves sometimes clash. As the two races take place on opposite sides of the Atlantic Ocean and form part of different championships, it is difficult for one driver to compete effectively in both during his career. Juan Pablo Montoya and Fernando Alonso are the only active drivers to have won two of the three events.
In awarding its first gold medal for motorsport to Prince Rainier III, the Fédération Internationale de l'Automobile (FIA) characterised the Monaco Grand Prix as contributing "an exceptional location of glamour and prestige" to motorsport. The Grand Prix has been run under the patronage of three generations of Monaco's royal family: Louis II, Rainier III and Albert II, all of whom have taken a close interest in the race. A large part of the principality's income comes from tourists attracted by the warm climate and the famous casino, but it is also a tax haven and is home to many millionaires, including several Formula One drivers.
Monaco has produced four native Formula One drivers - Louis Chiron, André Testut, Olivier Beretta, and Charles Leclerc - but its tax status has made it home to many drivers over the years, including Gilles Villeneuve and Ayrton Senna. Of the 2006 Formula One contenders, several have property in the principality, including Jenson Button and David Coulthard, who was part owner of a hotel there. Because of the small size of the town and the location of the circuit, drivers whose races end early can usually get back to their apartments in minutes. Ayrton Senna famously retired to his apartment after crashing out of the lead of the 1988 race. In the 2006 race, after retiring due to a mechanical failure while in second place, Kimi Räikkönen retired to his yacht, which was parked in the harbour.
The Grand Prix attracts big-name celebrities each year who come to experience the glamour and prestige of the event. Big parties are held in the nightclubs on the Grand Prix weekend, and the Port Hercule fills up with party-goers joining in the celebrations.
Criticism from drivers and commentators
In the 21st century, several commentators and F1 drivers have called the Grand Prix the most boring race of all circuits, both to drive and to watch as a spectator. Criticism has been directed towards how few overtake attempts are performed, as well as how frequently the driver who sets the pole position wins. Fernando Alonso has said that the race is "the most boring race ever," and Lewis Hamilton stated that the 2022 Grand Prix "wasn't really racing."
Drivers in bold are competing in the Formula One championship in the current season.
Teams in bold are competing in the Formula One championship in the current season. A pink background indicates an event which was not part of the Formula One World Championship. A yellow background indicates an event which was part of the pre-war European Championship.
Manufacturers in bold are competing in the Formula One championship in the current season. A pink background indicates an event which was not part of the Formula One World Championship. A yellow background indicates an event which was part of the pre-war European Championship.
* Between 1998 and 2005 built by Ilmor, funded by Mercedes
** Built by Cosworth, funded by Ford
*** Built by Porsche
A pink background indicates an event which was not part of the Formula One World Championship. A yellow background indicates an event which was part of the pre-war European Championship. | [
{
"paragraph_id": 0,
"text": "The Monaco Grand Prix (French: Grand Prix de Monaco) is a Formula One motor racing event held annually on the Circuit de Monaco, in late May or early June. Run since 1929, it is widely considered to be one of the most important and prestigious automobile races in the world, and is one of the races—along with the Indianapolis 500 and the 24 Hours of Le Mans—that form the Triple Crown of Motorsport. The circuit has been called \"an exceptional location of glamour and prestige\". The Formula One event is usually held on the last weekend of May and is known as one of the largest weekends in motor racing, as the Formula One race occurs on the same Sunday as the Indianapolis 500 (IndyCar Series) and the Coca-Cola 600 (NASCAR Cup Series).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The race is held on a narrow course laid out in the streets of Monaco, with many elevation changes and tight corners as well as the tunnel, making it one of the most demanding circuits in Formula One. In spite of the relatively low average speeds, the Monaco circuit is a dangerous place to race due to how narrow the track is, and the race often involves the intervention of a safety car. It is the only Grand Prix that does not adhere to the FIA's mandated 305-kilometre (190-mile) minimum race distance for F1 races.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first Monaco Grand Prix took place on 14 April 1929, and the race eventually became part of the pre-Second World War European Championship and was included in the first World Championship of Drivers in 1950. It was twice designated the European Grand Prix, in 1955 and 1963, when this title was an honorary designation given each year to one Grand Prix race in Europe. Graham Hill was known as \"Mr. Monaco\" due to his five Monaco wins in the 1960s. Ayrton Senna won the race more times than any other driver, with six victories, winning five races consecutively between 1989 and 1993.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Like many European races, the Monaco Grand Prix predates the current World Championship. The principality's first Grand Prix was organised in 1929 by Antony Noghès, under the auspices of Prince Louis II, through the Automobile Club de Monaco (ACM), of which he was president. The ACM organised the Rallye Automobile Monte Carlo, and in 1928 applied to the Association Internationale des Automobiles Clubs Reconnus (AIACR), the international governing body of motorsport, to be upgraded from a regional French club to full national status. Their application was refused due to the lack of a major motorsport event held wholly within Monaco's boundaries. The rally could not be considered, as it mostly used the roads of other European countries.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "To attain full national status, Noghès proposed the creation of an automobile Grand Prix in the streets of Monte Carlo. He obtained the official sanction of Prince Louis II and the support of Monégasque Grand Prix driver Louis Chiron. Chiron thought Monaco's topography was well-suited to setting up a race track.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The first race, held on 14 April 1929, was won by William Grover-Williams (using the pseudonym \"Williams\"), driving a works Bugatti Type 35B. It was an invitation-only event, but not all of those who were invited decided to attend. The leading Maserati and Alfa Romeo drivers decided not to compete, but Bugatti was well represented. Mercedes sent their leading driver, Rudolf Caracciola. Starting fifteenth, Caracciola drove a fighting race, taking his SSK into the lead before wasting 4+1⁄2 minutes on refuelling and a tyre change to finish second. Another driver who competed using a pseudonym was \"Georges Philippe\", the Baron Philippe de Rothschild. Chiron was unable to compete, having a prior commitment to compete in the Indianapolis 500.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Caracciola's SSK was refused permission to race the following year, but Chiron did compete (in the works Bugatti Type 35C), when he was beaten by privateer René Dreyfus and his Bugatti Type 35B, and finished second. Chiron took victory in the 1931 race driving a Bugatti. As of 2023, he remains the only native of Monaco to have won the event.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The race quickly grew in importance after its inception. Because of the high number of races which were being termed 'Grands Prix', the AIACR formally recognised the most important race of each of its affiliated national automobile clubs as International Grands Prix, or Grandes Épreuves, and in 1933 Monaco was ranked as such alongside the French, Belgian, Italian, and Spanish Grands Prix. That year's race was the first Grand Prix in which grid positions were decided, as they are now, by practice time rather than the established method of balloting. The race saw Achille Varzi and Tazio Nuvolari exchange the lead many times before the race settled in Varzi's favour on the final lap when Nuvolari's car caught fire.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The race became a round of the new European Championship in 1936, when stormy weather and a broken oil line led to a series of crashes, eliminating the Mercedes-Benzes of Chiron, Fagioli, and von Brauchitsch, as well as Bernd Rosemeyer's Typ C for newcomer Auto Union; Rudolf Caracciola, proving the truth of his nickname, Regenmeister (Rainmaster), went on to win. In 1937, von Brauchitsch duelled Caracciola before coming out on top. It was the last prewar Grand Prix at Monaco, for in 1938, the lack of profits for organisers, and demand for nearly £500 (approximately £34000 adjusted to 2021 inflation) in appearance money per top entrant led AIACR to cancel the event, while looming war overtook it in 1939, and the Second World War ended organised racing in Europe until 1945.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Racing in Europe started again on 9 September 1945 at the Bois de Boulogne Park in the city of Paris, four months and one day after the end of the war in Europe. However, the Monaco Grand Prix was not run between 1945 and 1947 due to financial reasons. In 1946, a new premier racing category, Grand Prix, was defined by the Fédération Internationale de l'Automobile (FIA), the successor of the AIACR, based on the pre-war voiturette class. A Monaco Grand Prix was run to this formula in 1948, won by the future world champion Nino Farina in a Maserati 4CLT.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The 1949 event was cancelled due to the death of Prince Louis II; it was included in the new Formula One World Drivers' Championship the following year. The race provided future five-time world champion Juan Manuel Fangio with his first win in a World Championship race, as well as third place for the 51-year-old Louis Chiron, his best result in the World Championship era. However, there was no race in 1951 due to budgetary concerns and a lack of regulations in the sport. 1952 was the first of the two years in which the World Drivers' Championship was run to less powerful Formula Two regulations. The race was run to sports car rules instead, and it did not form part of the World Championship.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "No races were held in 1953 or 1954 due to the fact that the car regulations were not finalized.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The Monaco Grand Prix returned in 1955, again as part of the Formula One World Championship, and this would begin a streak of 64 consecutive years in which the race was held. In the 1955 race, Maurice Trintignant won in Monte Carlo for the first time and Chiron again scored points and at 56 became the oldest driver to compete in a Formula One Grand Prix. It was not until 1957, when Fangio won again, that the Grand Prix saw a double winner. Between 1954 and 1961 Fangio's former Mercedes colleague, Stirling Moss, went one better, as did Trintignant, who won the race again in 1958 driving a Cooper. The 1961 race saw Moss fend off three works Ferrari 156s in a year-old privateer Rob Walker Racing Team Lotus 18 to take his third Monaco victory.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Britain's Graham Hill won the race five times in the 1960s and became known as \"King of Monaco\" and \"Mr. Monaco\". He first won in 1963, and then won the next two years. In the 1965 race, he took pole position and led from the start, but went up an escape road on lap 25 to avoid hitting a slow backmarker. Re-joining in fifth place, Hill set several new lap records on the way to winning. The race was also notable for Jim Clark's absence (he was participating in the Indianapolis 500), and for Paul Hawkins's Lotus ending up in the harbour. Hill's teammate, Briton Jackie Stewart, won in 1966 and New Zealander Denny Hulme won in 1967, but Hill won the next two years, the 1969 event being his final Formula One championship victory, by which time he was a double Formula One world champion.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "By the start of the 1970s, efforts by Jackie Stewart saw several Formula One events cancelled because of safety concerns. For the 1969 event, Armco barriers were placed at specific points for the first time in the circuit's history. Before that, the circuit's conditions were (aside from the removal of people's production cars parked on the side of the road) virtually identical to everyday road use. If a driver went off, he had a chance to crash into whatever was next to the track (buildings, trees, lamp posts, glass windows, and even a train station), and in Alberto Ascari's and Paul Hawkins's cases, the harbour water, because the concrete road the course used had no Armco to protect the drivers from going off the track and into the Mediterranean. The circuit gained more Armco in specific points for the next two races, and by 1972, the circuit was almost completely Armco-lined. For the first time in its history, the Monaco circuit was altered in 1972, as the pits were moved next to the waterfront straight between the chicane and Tabac, and the chicane was moved further forward right before Tabac, becoming the junction point between the pits and the course. The course was changed again for the 1973 race. The Rainier III Nautical Stadium was constructed where the straight that went behind the pits was, and the circuit introduced a double chicane that went around the new swimming pool (this chicane complex is known today as \"Swimming Pool\"). This created space for a whole new pit facility, and in 1976 the course was altered yet again; the Sainte Devote corner was made slower and a chicane was placed right before the pit straight.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "By the early 1970s, as Brabham team owner Bernie Ecclestone started to marshal the collective bargaining power of the Formula One Constructors Association (FOCA), Monaco was prestigious enough to become an early bone of contention. Historically, the number of cars permitted in a race was decided by the race organiser, in this case the ACM, which had always set a low number of around 16. In 1972, Ecclestone started to negotiate deals which relied on FOCA guaranteeing at least 18 entrants for every race. A stand-off over this issue left the 1972 race in jeopardy until the ACM gave in and agreed that 26 cars could participate – the same number permitted at most other circuits. Two years later, in 1974, the ACM got the numbers back down to 18.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Because of its tight confines, slow average speeds, and punishing nature, Monaco has often thrown up unexpected results. In the 1982 race, René Arnoux led the first 15 laps before retiring. Alain Prost then led until four laps from the end, when he spun off on the wet track, hit the barriers and lost a wheel, giving Riccardo Patrese the lead. Patrese himself spun with only a lap and a half to go, letting Didier Pironi through to the front, followed by Andrea de Cesaris. On the last lap, Pironi ran out of fuel in the tunnel, but De Cesaris also ran out of fuel before he could overtake. In the meantime, Patrese had bump-started his car and went through to score his first Grand Prix win.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 1983, the ACM became entangled in the disagreements between Fédération Internationale du Sport Automobile (FISA) and FOCA. The ACM, with the agreement of Bernie Ecclestone, negotiated an individual television rights deal with ABC in the United States. This broke an agreement enforced by FISA for a single central negotiation of television rights. Jean-Marie Balestre, president of FISA, announced that the Monaco Grand Prix would not form part of the Formula One world championship in 1985. The ACM fought their case in the French courts. They won the case and the race was eventually reinstated.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "For the decade from 1984 to 1993, the race was won by only two drivers, arguably the two best drivers in Formula One at the time – Frenchman Alain Prost and Brazilian Ayrton Senna. Prost, already a winner of the support race for Formula Three cars in 1979, took his first Monaco win at the 1984 race. The race started 45 minutes late after heavy rain. Prost led briefly before Nigel Mansell overtook him on lap 11. Mansell crashed out five laps later, letting Prost back into the lead. On lap 27, Prost led from Ayrton Senna's Toleman and Stefan Bellof's Tyrrell. Senna was catching Prost, and Bellof was catching both of them in the only naturally aspirated car in the race. However, on lap 31, the race was controversially stopped due to conditions deemed to be undriveable. Later, FISA fined the clerk of the course, Jacky Ickx, $6,000 and suspended his licence for not consulting the stewards before stopping the race. The drivers received only half of the points that would usually be awarded, as the race had been stopped before two-thirds of the intended race distance had been completed.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Prost won 1985 after polesitter Senna retired with a blown Renault engine in his Lotus after over-revving it at the start, and Michele Alboreto in the Ferrari retook the lead twice, but he went off the track at Sainte-Devote, where Brazilian Nelson Piquet and Italian Riccardo Patrese had a huge accident only a few laps previously and oil and debris littered the track. Prost passed Alboreto, who retook the Frenchman, and then he punctured a tyre after running over bodywork debris from the Piquet/Patrese accident, which dropped him to 4th. He was able to pass his Roman countrymen Andrea De Cesaris and Elio de Angelis, but finished 2nd behind Prost. The French Prost dominated 1986 after starting from pole position, a race where the Nouvelle Chicane had been changed on the grounds of safety.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Senna holds the record for the most victories in Monaco, with six, including five consecutive wins between 1989 and 1993, as well as eight podium finishes in ten starts. His 1987 win was the first time a car with an active suspension had won a Grand Prix. He won this race after Briton Nigel Mansell in a Williams-Honda went out with a broken exhaust. His win was very popular with the people of Monaco, and when he was arrested on the Monday following the race for riding a motorcycle without wearing a helmet, he was released by the officers after they realised who he was. Senna dominated 1988 and was able to get ahead of his teammate Prost while the Frenchman was held up for most of the race by Austrian Gerhard Berger in a Ferrari. By the time Prost got past Berger, he pushed as hard as he could and set a lap some 6 seconds faster than Senna's; Senna then set 2 fastest laps, and while pushing as hard as possible, he touched the barrier at the Portier corner and crashed into the Armco separating the road from the Mediterranean. Senna was so upset that he went back to his Monaco flat and was not heard from until the evening. Prost went on to win for the fourth time.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Senna dominated 1989 while Prost was stuck behind backmarker René Arnoux and others; the Brazilian also dominated 1990 and 1991. At the 1992 event Nigel Mansell, who had won all five races held to that point in the season, took pole and dominated the race in his Williams FW14B-Renault. However, with seven laps remaining, Mansell suffered a loose wheel nut and was forced into the pits, emerging behind Senna's McLaren-Honda, who was on worn tyres. Mansell, on fresh tyres, set a lap record almost two seconds quicker than Senna's and closed from 5.2 to 1.9 seconds in only two laps. The pair duelled around Monaco for the final four laps but Mansell could find no way past, finishing just two-tenths of a second behind the Brazilian. It was Senna's fifth win at Monaco, equalling Graham Hill's record. Senna had a poor start to the 1993 event, crashing in practice and qualifying 3rd behind pole-sitter Prost and the rising German star Michael Schumacher. Both of them beat Senna to the first corner, but Prost had to serve a time penalty for jumping the start and Schumacher retired after suspension problems, so Senna took his sixth win to break Graham Hill's record for most wins at the Monaco Grand Prix. Runner-up Damon Hill commented, \"If my father was around now, he would be the first to congratulate Ayrton.\"",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The 1994 race was an emotional and tragic affair. It came two weeks after the race at Imola in which Austrian Roland Ratzenberger and Ayrton Senna both died in crashes on successive days. During the Monaco event, Austrian Karl Wendlinger had an accident in his Sauber in the tunnel; he went into a coma and was to miss the rest of the season. The German Michael Schumacher won the 1994 Monaco event. Schumacher also won the 1995 event. The 1996 race saw Michael Schumacher take pole position before crashing out on the first lap after being overtaken by Damon Hill. Hill led the first 40 laps before his engine expired in the tunnel. Jean Alesi took the lead but suffered suspension failure 20 laps later. Olivier Panis, who started in 14th place, moved into the lead and stayed there until the end of the race, being pushed all the way by David Coulthard. It was Panis's only win, and the last for his Ligier team. Only three cars crossed the finish line, but seven were classified.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Seven-time world champion Schumacher would eventually win the race five times, matching Graham Hill's record. In his appearance at the 2006 event, he attracted criticism when, while provisionally holding pole position and with the qualifying session drawing to a close, he stopped his car at the Rascasse hairpin, blocking the track and obliging competitors to slow down. Although Schumacher claimed it was the unintentional result of a genuine car failure, the FIA disagreed and he was sent to the back of the grid.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In July 2010, Bernie Ecclestone announced that a 10-year deal had been reached with the race organisers, keeping the race on the calendar until at least 2020.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Due to the COVID-19 pandemic, the FIA announced the 2020 Monaco Grand Prix's postponement, along with the two other races scheduled for May 2020, to help prevent the spread of the virus. However, later the same day the Automobile Club de Monaco confirmed that the Grand Prix was instead cancelled, making 2020 the first time the Grand Prix was not run since 1954. It returned in 2021, on 23 May, where Max Verstappen won his first Monaco Grand Prix. The 2022 event saw the Monégasque driver, Charles Leclerc of Scuderia Ferrari, achieve his first Monaco Grand Prix pole position at the Circuit de Monaco (he had taken pole the previous year but could not start due to driveshaft failure). However, a critical strategical error meant Leclerc would drop to fourth, with Verstappen's teammate Sergio Pérez winning the race. The race was delayed due to heavy rain; two formation laps were completed before the start procedure was suspended and further delayed an hour from its 15:00 local time intended start. In addition to a red flag due to a big crash from Mick Schumacher, this dropped the laps completed from the intended 78 to 64.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In September 2022 the Grand Prix signed a new race contract to remain on the F1 calendar until the 2025 season.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The Circuit de Monaco consists of the city streets of Monte Carlo and La Condamine, which includes the famous harbour. It is unique in having been held on the same circuit every time it has been run over such a long period – only the Italian Grand Prix, which has been held at Autodromo Nazionale Monza during every Formula One regulated year except 1980, has a similarly lengthy and close relationship with a single circuit.",
"title": "Circuit"
},
{
"paragraph_id": 28,
"text": "The race circuit has many elevation changes, tight corners, and a narrow course that makes it one of the most demanding tracks in Formula One racing. As of 2022, two drivers have crashed and ended up in the harbour, the most famous being Alberto Ascari in 1955. Despite the fact that the course has had minor changes several times during its history, it is still considered the ultimate test of driving skills in Formula One, and if it were not already an existing Grand Prix, it would not be permitted to be added to the schedule for safety reasons. Even in 1929, La Vie Automobile magazine offered the opinion that \"Any respectable traffic system would have covered the track with <<Danger>> sign posts left, right and centre\".",
"title": "Circuit"
},
{
"paragraph_id": 29,
"text": "Triple Formula One champion Nelson Piquet was fond of saying that racing at Monaco was \"like trying to cycle round your living room\", but added that \"a win here was worth two anywhere else\".",
"title": "Circuit"
},
{
"paragraph_id": 30,
"text": "Notably, the course includes a tunnel. The contrast of daylight and gloom when entering/exiting the tunnel presents \"challenges not faced elsewhere\", as the drivers have to \"adjust their vision as they emerge from the tunnel at the fastest point of the track and brake for the chicane in the daylight.\".",
"title": "Circuit"
},
{
"paragraph_id": 31,
"text": "The fastest-ever qualifying lap was set by Lewis Hamilton in qualifying (Q3) for the 2019 Monaco Grand Prix, at a time of 1:10.166.",
"title": "Circuit"
},
{
"paragraph_id": 32,
"text": "During the Grand Prix weekend, spectators crowd around the Monaco Circuit. There are a number of temporary grandstands built around the circuit, mostly around the harbour area. The rich and famous spectators often arrive on their boats and the yachts through the harbour. Balconies around Monaco become viewing areas for the race as well. Many hotels and residents cash in on the bird's eye views of the race.",
"title": "Circuit"
},
{
"paragraph_id": 33,
"text": "The Monaco Grand Prix is organised each year by the Automobile Club de Monaco which also runs the Monte Carlo Rally and previously ran the Junior Monaco Kart Cup.",
"title": "Organization"
},
{
"paragraph_id": 34,
"text": "The Monaco Grand Prix differs in several ways from other Grands Prix. The practice session for the race was traditionally held on the Thursday preceding the race instead of Friday. This allows the streets to be opened to the public again on Friday. From the 2022 event onwards the first two Formula One practice sessions will now be held on Friday, bringing the running schedule for Formula One in line with other Grands Prix. Support races will still be run on Thursday. Until the late 1990s the race started at 3:30 p.m. local time – an hour and a half later than other European Formula One races. In recent years the race has fallen in line with the other Formula One races for the convenience of television viewers. Also, earlier the event was traditionally held on the week of Ascension Day. For many years, the numbers of cars admitted to Grands Prix was at the discretion of the race organisers – Monaco had the smallest grids, ostensibly because of its narrow and twisting track. Only 18 cars were permitted to start the 1975 Monaco Grand Prix, compared to 23 to 26 cars at all other rounds that year.",
"title": "Organization"
},
{
"paragraph_id": 35,
"text": "The erecting of the circuit takes six weeks, and the removal after the race takes three weeks. Until 2017, there was no proper podium at the race. Instead, a section of the track was closed after the race to act as parc fermé, a place where the cars are held for official inspection. The first three drivers in the race left their cars there and walked directly to the royal box where the 'podium' ceremony was held, which was considered a custom for the race. The trophies were handed out before the national anthems for the winning driver and team are played, as opposed to other Grands Prix where the anthems are played first.",
"title": "Organization"
},
{
"paragraph_id": 36,
"text": "The Monaco Grand Prix is widely considered to be one of the most important and prestigious automobile races in the world alongside the Indianapolis 500 and the 24 Hours of Le Mans. These three races are considered to form a Triple Crown of the three most famous motor races in the world. As of 2023, Graham Hill is the only driver to have won the Triple Crown, by winning all three races. The practice session for Monaco overlaps with that for the Indianapolis 500, and the races themselves sometimes clash. As the two races take place on opposite sides of the Atlantic Ocean and form part of different championships, it is difficult for one driver to compete effectively in both during his career. Juan Pablo Montoya and Fernando Alonso are the only active drivers to have won two of the three events.",
"title": "Fame"
},
{
"paragraph_id": 37,
"text": "In awarding its first gold medal for motorsport to Prince Rainier III, the Fédération Internationale de l'Automobile (FIA) characterised the Monaco Grand Prix as contributing \"an exceptional location of glamour and prestige\" to motorsport. The Grand Prix has been run under the patronage of three generations of Monaco's royal family: Louis II, Rainier III and Albert II, all of whom have taken a close interest in the race. A large part of the principality's income comes from tourists attracted by the warm climate and the famous casino, but it is also a tax haven and is home to many millionaires, including several Formula One drivers.",
"title": "Fame"
},
{
"paragraph_id": 38,
"text": "Monaco has produced four native Formula One drivers - Louis Chiron, André Testut, Olivier Beretta, and Charles Leclerc - but its tax status has made it home to many drivers over the years, including Gilles Villeneuve and Ayrton Senna. Of the 2006 Formula One contenders, several have property in the principality, including Jenson Button and David Coulthard, who was part owner of a hotel there. Because of the small size of the town and the location of the circuit, drivers whose races end early can usually get back to their apartments in minutes. Ayrton Senna famously retired to his apartment after crashing out of the lead of the 1988 race. In the 2006 race, after retiring due to a mechanical failure while in second place, Kimi Räikkönen retired to his yacht, which was parked in the harbour.",
"title": "Fame"
},
{
"paragraph_id": 39,
"text": "The Grand Prix attracts big-name celebrities each year who come to experience the glamour and prestige of the event. Big parties are held in the nightclubs on the Grand Prix weekend, and the Port Hercule fills up with party-goers joining in the celebrations.",
"title": "Fame"
},
{
"paragraph_id": 40,
"text": "Criticism from drivers and commentators",
"title": "Fame"
},
{
"paragraph_id": 41,
"text": "In the 21st century, several commentators and F1 drivers have called the Grand Prix the most boring race of all circuits, both to drive and to watch as a spectator. Criticism has been directed towards how few overtake attempts are performed, as well as how frequently the driver who sets the pole position wins. Fernando Alonso has said that the race is \"the most boring race ever,\" and Lewis Hamilton stated that the 2022 Grand Prix \"wasn't really racing.\"",
"title": "Fame"
},
{
"paragraph_id": 42,
"text": "Drivers in bold are competing in the Formula One championship in the current season.",
"title": "Winners"
},
{
"paragraph_id": 43,
"text": "Teams in bold are competing in the Formula One championship in the current season. A pink background indicates an event which was not part of the Formula One World Championship. A yellow background indicates an event which was part of the pre-war European Championship.",
"title": "Winners"
},
{
"paragraph_id": 44,
"text": "Manufacturers in bold are competing in the Formula One championship in the current season. A pink background indicates an event which was not part of the Formula One World Championship. A yellow background indicates an event which was part of the pre-war European Championship.",
"title": "Winners"
},
{
"paragraph_id": 45,
"text": "* Between 1998 and 2005 built by Ilmor, funded by Mercedes",
"title": "Winners"
},
{
"paragraph_id": 46,
"text": "** Built by Cosworth, funded by Ford",
"title": "Winners"
},
{
"paragraph_id": 47,
"text": "*** Built by Porsche",
"title": "Winners"
},
{
"paragraph_id": 48,
"text": "A pink background indicates an event which was not part of the Formula One World Championship. A yellow background indicates an event which was part of the pre-war European Championship.",
"title": "Winners"
}
]
| The Monaco Grand Prix is a Formula One motor racing event held annually on the Circuit de Monaco, in late May or early June. Run since 1929, it is widely considered to be one of the most important and prestigious automobile races in the world, and is one of the races—along with the Indianapolis 500 and the 24 Hours of Le Mans—that form the Triple Crown of Motorsport. The circuit has been called "an exceptional location of glamour and prestige". The Formula One event is usually held on the last weekend of May and is known as one of the largest weekends in motor racing, as the Formula One race occurs on the same Sunday as the Indianapolis 500 and the Coca-Cola 600. The race is held on a narrow course laid out in the streets of Monaco, with many elevation changes and tight corners as well as the tunnel, making it one of the most demanding circuits in Formula One. In spite of the relatively low average speeds, the Monaco circuit is a dangerous place to race due to how narrow the track is, and the race often involves the intervention of a safety car. It is the only Grand Prix that does not adhere to the FIA's mandated 305-kilometre (190-mile) minimum race distance for F1 races. The first Monaco Grand Prix took place on 14 April 1929, and the race eventually became part of the pre-Second World War European Championship and was included in the first World Championship of Drivers in 1950. It was twice designated the European Grand Prix, in 1955 and 1963, when this title was an honorary designation given each year to one Grand Prix race in Europe. Graham Hill was known as "Mr. Monaco" due to his five Monaco wins in the 1960s. Ayrton Senna won the race more times than any other driver, with six victories, winning five races consecutively between 1989 and 1993. | 2001-08-31T08:36:33Z | 2023-12-15T22:35:03Z | [
"Template:Inflation/year",
"Template:Use British English",
"Template:Convert",
"Template:Cite journal",
"Template:Flagicon",
"Template:Inflation/fn",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite web",
"Template:Redirect",
"Template:Use dmy dates",
"Template:Cite news",
"Template:As of",
"Template:Inflation",
"Template:Refend",
"Template:Monaco Grand Prix",
"Template:About",
"Template:Webarchive",
"Template:Cite book",
"Template:Refbegin",
"Template:Formula One races",
"Template:Authority control",
"Template:Lang-fr",
"Template:F1",
"Template:Frac",
"Template:Main",
"Template:Clr",
"Template:Cite magazine",
"Template:ISBN",
"Template:Commons category",
"Template:Short description",
"Template:F1 race"
]
| https://en.wikipedia.org/wiki/Monaco_Grand_Prix |
10,947 | Fission | Fission, a splitting of something into two or more parts, may refer to: | [
{
"paragraph_id": 0,
"text": "Fission, a splitting of something into two or more parts, may refer to:",
"title": ""
}
]
| Fission, a splitting of something into two or more parts, may refer to: Fission (biology), the division of a single entity into two or more parts and the regeneration of those parts into separate entities resembling the original
Nuclear fission, when the nucleus of an atom splits into smaller parts
Fission (band), a Swedish death metal band
Fission (album), by Jens Johansson | 2021-07-12T08:37:57Z | [
"Template:TOC right",
"Template:Disambiguation",
"Template:Wiktionary"
]
| https://en.wikipedia.org/wiki/Fission |
|
10,948 | Fusion | Fusion, or synthesis, is the process of combining two or more distinct entities into a new whole.
Fusion may also refer to: | [
{
"paragraph_id": 0,
"text": "Fusion, or synthesis, is the process of combining two or more distinct entities into a new whole.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Fusion may also refer to:",
"title": ""
}
]
| Fusion, or synthesis, is the process of combining two or more distinct entities into a new whole. Fusion may also refer to: | 2001-09-26T18:50:14Z | 2023-12-12T20:42:51Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:For",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Fusion |
10,949 | Four color theorem | In mathematics, the four color theorem, or the four color map theorem, states that no more than four colors are required to color the regions of any map so that no two adjacent regions have the same color. Adjacent means that two regions share a common boundary curve segment, not merely a corner where three or more regions meet. It was the first major theorem to be proved using a computer. Initially, this proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. The proof has gained wide acceptance since then, although some doubters remain.
The four color theorem was proved in 1976 by Kenneth Appel and Wolfgang Haken after many false proofs and counterexamples (unlike the five color theorem, proved in the 1800s, which states that five colors are enough to color a map). To dispel any remaining doubts about the Appel–Haken proof, a simpler proof using the same ideas and still relying on computers was published in 1997 by Robertson, Sanders, Seymour, and Thomas. In 2005, the theorem was also proved by Georges Gonthier with general-purpose theorem-proving software.
In graph-theoretic terms, the theorem states that for every loopless planar graph G, its chromatic number satisfies χ(G) ≤ 4.
The intuitive statement of the four color theorem – "given any separation of a plane into contiguous regions, the regions can be colored using at most four colors so that no two adjacent regions have the same color" – needs to be interpreted appropriately to be correct.
First, regions are adjacent if they share a boundary segment; two regions that share only isolated boundary points are not considered adjacent. (Otherwise, a map in the shape of a pie chart would make an arbitrarily large number of regions 'adjacent' to each other at a common corner, and require an arbitrarily large number of colors as a result.) Second, bizarre regions, such as those with finite area but infinitely long perimeter, are not allowed; maps with such regions can require more than four colors. (To be safe, we can restrict to regions whose boundaries consist of finitely many straight line segments. A region is allowed to have enclaves, that is, to entirely surround one or more other regions.) Note that the notion of "contiguous region" (technically: connected open subset of the plane) is not the same as that of a "country" on regular maps, since countries need not be contiguous (they may have exclaves; e.g., Angola with the Cabinda Province, Azerbaijan with Nakhchivan, Russia with Kaliningrad, France with its overseas territories, and the United States with Alaska are not contiguous). If we required the entire territory of a country to receive the same color, then four colors would not always be sufficient. For instance, consider a simplified map:
In this map, the two regions labeled A belong to the same country. If we wanted those regions to receive the same color, then five colors would be required, since the two A regions together are adjacent to four other regions, each of which is adjacent to all the others. Forcing two separate regions to have the same color can be modelled by adding a 'handle' joining them outside the plane.
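To see this in graph terms: if the two A regions are merged into a single vertex, that vertex is adjacent to the four other regions, which the map already makes mutually adjacent, so the resulting graph is the complete graph K5; K5 needs five colors (χ(K5) = 5), which is why no assignment of four colors can succeed here.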
Adding such a handle makes the problem equivalent to coloring a map on a torus (a surface of genus 1), which requires up to 7 colors for an arbitrary map. A similar construction also applies if a single color is used for multiple disjoint areas, as for bodies of water on real maps, or if there are more countries with disjoint territories. In such cases more colors might be required as the genus of the resulting surface grows. (See the section Generalizations below.)
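To quantify how the requirement grows with genus: the Heawood bound – stated here only for context and not derived in this article; the matching lower bound for every orientable surface of positive genus is due to Ringel and Youngs (1968) – says that a map on a closed orientable surface of genus g > 0 can always be colored with

p = \left\lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right\rfloor

colors, and that this many can be required. For the torus, g = 1, this evaluates to floor((7 + 7)/2) = 7, matching the seven colors quoted above.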
A simpler statement of the theorem uses graph theory. The set of regions of a map can be represented more abstractly as an undirected graph that has a vertex for each region and an edge for every pair of regions that share a boundary segment. This graph is planar: it can be drawn in the plane without crossings by placing each vertex at an arbitrarily chosen location within the region to which it corresponds, and by drawing the edges as curves without crossings that lead from one region's vertex, across a shared boundary segment, to an adjacent region's vertex. Conversely, any planar graph can be formed from a map in this way. In graph-theoretic terminology, the four-color theorem states that the vertices of every planar graph can be colored with at most four colors so that no two adjacent vertices receive the same color, or for short: every planar graph is four-colorable.
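The graph form also makes the statement easy to experiment with on small examples. The following sketch is a minimal illustration in plain Python and is in no way the Appel–Haken method; the function name, the vertex labels, and the example graph (a wheel of five regions around a central one, which genuinely needs all four colors) are invented for this illustration. It simply searches, by backtracking, for a proper coloring that uses at most four colors.

def four_color(adjacency):
    """Search for a proper coloring of a graph with at most four colors.

    adjacency maps each vertex to a list of its neighbors. Returns a dict
    mapping vertex -> color index in 0..3, or None if no such coloring
    exists (which, for a planar graph, the theorem says cannot happen).
    """
    vertices = list(adjacency)
    coloring = {}

    def backtrack(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(4):
            # A color is usable only if no already-colored neighbor has it.
            if all(coloring.get(u) != c for u in adjacency[v]):
                coloring[v] = c
                if backtrack(i + 1):
                    return True
                del coloring[v]
        return False

    return coloring if backtrack(0) else None

if __name__ == "__main__":
    # A central region touching five others arranged in a ring (the wheel W5):
    # the odd ring alone needs three colors and the center forces a fourth.
    regions = {
        "center": ["r1", "r2", "r3", "r4", "r5"],
        "r1": ["center", "r2", "r5"],
        "r2": ["center", "r1", "r3"],
        "r3": ["center", "r2", "r4"],
        "r4": ["center", "r3", "r5"],
        "r5": ["center", "r4", "r1"],
    }
    print(four_color(regions))  # e.g. {'center': 0, 'r1': 1, 'r2': 2, ...}

Backtracking like this is exponential in the worst case and is only meant to make the statement concrete; the actual proof instead shows that no minimal counterexample can exist.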
As far as is known, the conjecture was first proposed on October 23, 1852, when Francis Guthrie, while trying to color the map of the counties of England, noticed that only four different colors were needed. At the time, Guthrie's brother Frederick was a student of Augustus De Morgan (Francis's former advisor) at University College London. Francis asked Frederick about it, and Frederick took the question to De Morgan (Francis Guthrie graduated later in 1852, and went on to become a professor of mathematics in South Africa). According to De Morgan:
A student of mine [Guthrie] asked me to day to give him a reason for a fact which I did not know was a fact—and do not yet. He says that if a figure be any how divided and the compartments differently colored so that figures with any portion of common boundary line are differently colored—four colors may be wanted but not more—the following is his case in which four colors are wanted. Query cannot a necessity for five or more be invented…
"F.G.", perhaps one of the two Guthries, published the question in The Athenaeum in 1854, and De Morgan posed the question again in the same magazine in 1860. Another early published reference by Arthur Cayley (1879) in turn credits the conjecture to De Morgan.
There were several early failed attempts at proving the theorem. De Morgan believed that it followed from a simple fact about four regions, though he didn't believe that fact could be derived from more elementary facts.
This arises in the following way. We never need four colours in a neighborhood unless there be four counties, each of which has boundary lines in common with each of the other three. Such a thing cannot happen with four areas unless one or more of them be inclosed by the rest; and the colour used for the inclosed county is thus set free to go on with. Now this principle, that four areas cannot each have common boundary with all the other three without inclosure, is not, we fully believe, capable of demonstration upon anything more evident and more elementary; it must stand as a postulate.
One proposed proof was given by Alfred Kempe in 1879, which was widely acclaimed; another was given by Peter Guthrie Tait in 1880. It was not until 1890 that Kempe's proof was shown incorrect by Percy Heawood, and in 1891, Tait's proof was shown incorrect by Julius Petersen—each false proof stood unchallenged for 11 years.
In 1890, in addition to exposing the flaw in Kempe's proof, Heawood proved the five color theorem and generalized the four color conjecture to surfaces of arbitrary genus.
Tait, in 1880, showed that the four color theorem is equivalent to the statement that a certain type of graph (called a snark in modern terminology) must be non-planar.
In 1943, Hugo Hadwiger formulated the Hadwiger conjecture, a far-reaching generalization of the four-color problem that still remains unsolved.
During the 1960s and 1970s, German mathematician Heinrich Heesch developed methods of using computers to search for a proof. Notably he was the first to use discharging for proving the theorem, which turned out to be important in the unavoidability portion of the subsequent Appel–Haken proof. He also expanded on the concept of reducibility and, along with Ken Durre, developed a computer test for it. Unfortunately, at this critical juncture, he was unable to procure the necessary supercomputer time to continue his work.
Others took up his methods, including his computer-assisted approach. While other teams of mathematicians were racing to complete proofs, Kenneth Appel and Wolfgang Haken at the University of Illinois announced, on June 21, 1976, that they had proved the theorem. They were assisted in some algorithmic work by John A. Koch.
If the four-color conjecture were false, there would be at least one map with the smallest possible number of regions that requires five colors. The proof showed that such a minimal counterexample cannot exist, through the use of two technical concepts: an unavoidable set, meaning a set of configurations at least one of which must occur in any map of the relevant kind, and a reducible configuration, meaning an arrangement of regions that cannot occur in a minimal counterexample, because any map containing it could be cut down to a smaller map that would also need five colors.
Using mathematical rules and procedures based on properties of reducible configurations, Appel and Haken found an unavoidable set of reducible configurations, thus proving that a minimal counterexample to the four-color conjecture could not exist. Their proof reduced the infinitude of possible maps to 1,834 reducible configurations (later reduced to 1,482) which had to be checked one by one by computer and took over a thousand hours. This reducibility part of the work was independently double checked with different programs and computers. However, the unavoidability part of the proof was verified in over 400 pages of microfiche, which had to be checked by hand with the assistance of Haken's daughter Dorothea Blostein (Appel & Haken 1989).
Appel and Haken's announcement was widely reported by the news media around the world, and the math department at the University of Illinois used a postmark stating "Four colors suffice." At the same time the unusual nature of the proof—it was the first major theorem to be proved with extensive computer assistance—and the complexity of the human-verifiable portion aroused considerable controversy.
In the early 1980s, rumors spread of a flaw in the Appel–Haken proof. Ulrich Schmidt at RWTH Aachen had examined Appel and Haken's proof for his master's thesis that was published in 1981. He had checked about 40% of the unavoidability portion and found a significant error in the discharging procedure (Appel & Haken 1989). In 1986, Appel and Haken were asked by the editor of Mathematical Intelligencer to write an article addressing the rumors of flaws in their proof. They replied that the rumors were due to a "misinterpretation of [Schmidt's] results" and obliged with a detailed article. Their magnum opus, Every Planar Map is Four-Colorable, a book claiming a complete and detailed proof (with a microfiche supplement of over 400 pages), appeared in 1989; it explained and corrected the error discovered by Schmidt as well as several further errors found by others (Appel & Haken 1989).
Since the proof of the theorem, a new approach has led to both a shorter proof and a more efficient algorithm for 4-coloring maps. In 1996, Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas created a quadratic-time algorithm (requiring only O(n²) time, where n is the number of vertices), improving on a quartic-time algorithm based on Appel and Haken's proof. The new proof, based on the same ideas, is similar to Appel and Haken's but more efficient because it reduces the complexity of the problem and requires checking only 633 reducible configurations. Both the unavoidability and reducibility parts of this new proof must be executed by a computer and are impractical to check by hand. In 2001, the same authors announced an alternative proof, by proving the snark conjecture. This proof remains unpublished, however.
In 2005, Benjamin Werner and Georges Gonthier formalized a proof of the theorem inside the Coq proof assistant. This removed the need to trust the various computer programs used to verify particular cases; it is only necessary to trust the Coq kernel.
The following discussion is a summary based on the introduction to Every Planar Map is Four Colorable (Appel & Haken 1989). Although flawed, Kempe's original purported proof of the four color theorem provided some of the basic tools later used to prove it. The explanation here is reworded in terms of the modern graph theory formulation above.
Kempe's argument goes as follows. First, if planar regions separated by the graph are not triangulated, i.e. do not have exactly three edges in their boundaries, we can add edges without introducing new vertices in order to make every region triangular, including the unbounded outer region. If this triangulated graph is colorable using four colors or fewer, so is the original graph since the same coloring is valid if edges are removed. So it suffices to prove the four color theorem for triangulated graphs to prove it for all planar graphs, and without loss of generality we assume the graph is triangulated.
Suppose v, e, and f are the number of vertices, edges, and regions (faces). Since each region is triangular and each edge is shared by two regions, we have that 2e = 3f. This together with Euler's formula, v − e + f = 2, can be used to show that 6v − 2e = 12. Now, the degree of a vertex is the number of edges abutting it. If vn is the number of vertices of degree n and D is the maximum degree of any vertex,
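$$\sum_{i=1}^{D} (6 - i)\, v_i \;=\; 6v - 2e \;=\; 12$$

(The display completing this sentence did not survive in this copy; the identity above follows from 6v − 2e = 12 together with the handshake lemma, since the degrees sum to 2e and the v_i sum to v.)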
But since 12 > 0 and 6 − i ≤ 0 for all i ≥ 6, this demonstrates that there is at least one vertex of degree 5 or less.
If there is a graph requiring 5 colors, then there is a minimal such graph, where removing any vertex makes it four-colorable. Call this graph G. Then G cannot have a vertex of degree 3 or less, because if d(v) ≤ 3, we can remove v from G, four-color the smaller graph, then add back v and extend the four-coloring to it by choosing a color different from its neighbors.
Kempe also showed correctly that G can have no vertex of degree 4. As before we remove the vertex v and four-color the remaining vertices. If all four neighbors of v are different colors, say red, green, blue, and yellow in clockwise order, we look for an alternating path of vertices colored red and blue joining the red and blue neighbors. Such a path is called a Kempe chain. There may be a Kempe chain joining the red and blue neighbors, and there may be a Kempe chain joining the green and yellow neighbors, but not both, since these two paths would necessarily intersect, and the vertex where they intersect cannot be colored. Suppose it is the red and blue neighbors that are not chained together. Explore all vertices attached to the red neighbor by red-blue alternating paths, and then reverse the colors red and blue on all these vertices. The result is still a valid four-coloring, and v can now be added back and colored red.
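As an illustrative sketch of the Kempe chain operation just described (only the color swap itself, not Kempe's full degree-4 argument), one can compute the connected component induced by two colors and exchange them. The graph, coloring, and the helper name kempe_swap are invented for the example.

```python
# Minimal sketch: a Kempe chain for colors a and b is the connected component,
# within the subgraph induced by vertices colored a or b, that contains a
# chosen start vertex. Swapping a and b inside the chain keeps the coloring
# proper. Graph, colors, and names here are illustrative only.

def kempe_swap(adj, coloring, start, a, b):
    """Swap colors a and b on the Kempe chain containing `start` (in place)."""
    chain, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for u in adj[v]:
            if u not in chain and coloring[u] in (a, b):
                chain.add(u)
                stack.append(u)
    for v in chain:
        coloring[v] = b if coloring[v] == a else a

adj = {"p": {"q"}, "q": {"p", "r"}, "r": {"q", "s"}, "s": {"r"}}
coloring = {"p": "red", "q": "blue", "r": "red", "s": "green"}
kempe_swap(adj, coloring, "p", "red", "blue")
print(coloring)  # {'p': 'blue', 'q': 'red', 'r': 'blue', 's': 'green'}
```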
This leaves only the case where G has a vertex of degree 5; but Kempe's argument was flawed for this case. Heawood noticed Kempe's mistake and also observed that if one was satisfied with proving only five colors are needed, one could run through the above argument (changing only that the minimal counterexample requires 6 colors) and use Kempe chains in the degree 5 situation to prove the five color theorem.
In any case, to deal with this degree 5 vertex case requires a more complicated notion than removing a vertex. Rather the form of the argument is generalized to considering configurations, which are connected subgraphs of G with the degree of each vertex (in G) specified. For example, the case described in degree 4 vertex situation is the configuration consisting of a single vertex labelled as having degree 4 in G. As above, it suffices to demonstrate that if the configuration is removed and the remaining graph four-colored, then the coloring can be modified in such a way that when the configuration is re-added, the four-coloring can be extended to it as well. A configuration for which this is possible is called a reducible configuration. If at least one of a set of configurations must occur somewhere in G, that set is called unavoidable. The argument above began by giving an unavoidable set of five configurations (a single vertex with degree 1, a single vertex with degree 2, ..., a single vertex with degree 5) and then proceeded to show that the first 4 are reducible; to exhibit an unavoidable set of configurations where every configuration in the set is reducible would prove the theorem.
Because G is triangular, and because the degree of each vertex in a configuration and all edges internal to the configuration are known, the number of vertices in G adjacent to a given configuration is fixed, and they are joined in a cycle. These vertices form the ring of the configuration; a configuration with k vertices in its ring is a k-ring configuration, and the configuration together with its ring is called the ringed configuration. As in the simple cases above, one may enumerate all distinct four-colorings of the ring; any coloring that can be extended without modification to a coloring of the configuration is called initially good. For example, the single-vertex configurations above with three or fewer neighbors were initially good. In general, the surrounding graph must be systematically recolored to turn the ring's coloring into a good one, as was done in the case above where there were 4 neighbors; for a general configuration with a larger ring, this requires more complex techniques. Because of the large number of distinct four-colorings of the ring, this is the primary step requiring computer assistance.
Finally, it remains to identify an unavoidable set of configurations amenable to reduction by this procedure. The primary method used to discover such a set is the method of discharging. The intuitive idea underlying discharging is to consider the planar graph as an electrical network. Initially positive and negative "electrical charge" is distributed amongst the vertices so that the total is positive.
Recall the formula above:
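$$\sum_{v}\bigl(6 - \deg(v)\bigr) \;=\; 12$$

(This is the same identity as in the previous section, now written vertex by vertex.)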
Each vertex is assigned an initial charge of 6 − deg(v). Then one "flows" the charge by systematically redistributing the charge from a vertex to its neighboring vertices according to a set of rules, the discharging procedure. Since the total charge is preserved and remains positive (it equals 12), some vertices must still have positive charge after discharging. The rules restrict the possibilities for configurations of positively charged vertices, so enumerating all such possible configurations gives an unavoidable set.
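A minimal numerical check of the initial charges, using the octahedron graph as an example of a planar triangulation (the adjacency dictionary and variable names are chosen purely for illustration):

```python
# Minimal sketch: assign each vertex of a planar triangulation the initial
# charge 6 - deg(v) and check that the charges sum to 12, as guaranteed by
# Euler's formula. The example graph is the octahedron, in which every
# vertex has degree 4 (each vertex is adjacent to all but its antipode).

octahedron = {
    0: {1, 2, 3, 4},
    1: {0, 2, 4, 5},
    2: {0, 1, 3, 5},
    3: {0, 2, 4, 5},
    4: {0, 1, 3, 5},
    5: {1, 2, 3, 4},
}

charges = {v: 6 - len(nbrs) for v, nbrs in octahedron.items()}
print(charges)                 # {0: 2, 1: 2, 2: 2, 3: 2, 4: 2, 5: 2}
print(sum(charges.values()))   # 12
```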
As long as some member of the unavoidable set is not reducible, the discharging procedure is modified to eliminate it (while introducing other configurations). Appel and Haken's final discharging procedure was extremely complex and, together with a description of the resulting unavoidable configuration set, filled a 400-page volume, but the configurations it generated could be checked mechanically to be reducible. Verifying the volume describing the unavoidable configuration set itself was done by peer review over a period of several years.
A technical detail not discussed here but required to complete the proof is immersion reducibility.
The four color theorem has been notorious for attracting a large number of false proofs and disproofs in its long history. At first, The New York Times refused, as a matter of policy, to report on the Appel–Haken proof, fearing that the proof would be shown false like the ones before it. Some alleged proofs, like Kempe's and Tait's mentioned above, stood under public scrutiny for over a decade before they were refuted. But many more, authored by amateurs, were never published at all.
Generally, the simplest, though invalid, counterexamples attempt to create one region which touches all other regions. This forces the remaining regions to be colored with only three colors. Because the four color theorem is true, this is always possible; however, because the person drawing the map is focused on the one large region, they fail to notice that the remaining regions can in fact be colored with three colors.
This trick can be generalized: there are many maps where if the colors of some regions are selected beforehand, it becomes impossible to color the remaining regions without exceeding four colors. A casual verifier of the counterexample may not think to change the colors of these regions, so that the counterexample will appear as though it is valid.
Perhaps one effect underlying this common misconception is the fact that the color restriction is not transitive: a region only has to be colored differently from regions it touches directly, not regions touching regions that it touches. If this were the restriction, planar graphs would require arbitrarily large numbers of colors.
Other false disproofs violate the assumptions of the theorem, such as using a region that consists of multiple disconnected parts, or disallowing regions of the same color from touching at a point.
While every planar map can be colored with four colors, it is NP-complete in complexity to decide whether an arbitrary planar map can be colored with just three colors.
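By way of illustration only, small instances of the three-coloring decision problem can still be settled by exhaustive search; the exponential cost of this approach is consistent with the NP-completeness mentioned above, and the function name three_colorable and the example graph are assumptions made for this sketch.

```python
# Minimal sketch: decide 3-colorability by trying every assignment of three
# colors. Exponential in the number of vertices; illustrative only.

from itertools import product

def three_colorable(adj):
    vertices = list(adj)
    for assignment in product(range(3), repeat=len(vertices)):
        coloring = dict(zip(vertices, assignment))
        if all(coloring[v] != coloring[u] for v in adj for u in adj[v]):
            return True
    return False

# Four mutually adjacent regions (a planar K4) cannot be 3-colored.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(three_colorable(k4))  # False
```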
A cubic map can be colored with only three colors if and only if each interior region has an even number of neighboring regions. In the US states map example, landlocked Missouri (MO) has eight neighbors (an even number): it must be differently colored from all of them, but the neighbors can alternate colors, thus this part of the map needs only three colors. However, landlocked Nevada (NV) has five neighbors (an odd number): one of the neighbors must be differently colored from it and all the others, thus four colors are needed here.
The four color theorem applies not only to finite planar graphs, but also to infinite graphs that can be drawn without crossings in the plane, and even more generally to infinite graphs (possibly with an uncountable number of vertices) for which every finite subgraph is planar. To prove this, one can combine a proof of the theorem for finite planar graphs with the De Bruijn–Erdős theorem stating that, if every finite subgraph of an infinite graph is k-colorable, then the whole graph is also k-colorable (Nash-Williams 1967). This can also be seen as an immediate consequence of Kurt Gödel's compactness theorem for first-order logic, simply by expressing the colorability of an infinite graph with a set of logical formulae.
One can also consider the coloring problem on surfaces other than the plane. The problem on the sphere or cylinder is equivalent to that on the plane. For closed (orientable or non-orientable) surfaces with positive genus, the maximum number p of colors needed depends on the surface's Euler characteristic χ according to the formula
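$$p \;=\; \left\lfloor \frac{7 + \sqrt{49 - 24\chi}}{2} \right\rfloor$$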
where the outermost brackets denote the floor function.
Alternatively, for an orientable surface the formula can be given in terms of the genus of a surface, g:
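$$p \;=\; \left\lfloor \frac{7 + \sqrt{1 + 48g}}{2} \right\rfloor$$

(This is the same bound as above after substituting χ = 2 − 2g.)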
This formula, the Heawood conjecture, was proposed by P. J. Heawood in 1890 and, after contributions by several people, proved by Gerhard Ringel and J. W. T. Youngs in 1968. The only exception to the formula is the Klein bottle, which has Euler characteristic 0 (hence the formula gives p = 7) but requires only 6 colors, as shown by Philip Franklin in 1934.
For example, the torus has Euler characteristic χ = 0 (and genus g = 1) and thus p = 7, so no more than 7 colors are required to color any map on a torus. This upper bound of 7 is sharp: certain toroidal polyhedra such as the Szilassi polyhedron require seven colors.
A Möbius strip requires six colors (Tietze 1910) as do 1-planar graphs (graphs drawn with at most one simple crossing per edge) (Borodin 1984). If both the vertices and the faces of a planar graph are colored, in such a way that no two adjacent vertices, no two adjacent faces, and no incident vertex-face pair have the same color, then again at most six colors are needed (Borodin 1984).
For graphs whose vertices are represented as pairs of points on two distinct surfaces, with edges drawn as non-crossing curves on one of the two surfaces, the chromatic number can be at least 9 and is at most 12, but more precise bounds are not known; this is Gerhard Ringel's Earth–Moon problem.
There is no obvious extension of the coloring result to three-dimensional solid regions. By using a set of n flexible rods, one can arrange that every rod touches every other rod. The set would then require n colors, or n+1 including the empty space that also touches every rod. The number n can be taken to be any integer, as large as desired. Such examples were known to Frederick Guthrie in 1880. Even for axis-parallel cuboids (considered to be adjacent when two cuboids share a two-dimensional boundary area) an unbounded number of colors may be necessary (Reed & Allwright 2008; Magnant & Martin 2011).
Dror Bar-Natan gave a statement concerning Lie algebras and Vassiliev invariants which is equivalent to the four color theorem.
Despite the motivation from coloring political maps of countries, the theorem is not of particular interest to cartographers. According to an article by the math historian Kenneth May, "Maps utilizing only four colors are rare, and those that do usually require only three. Books on cartography and the history of mapmaking do not mention the four-color property". The theorem also does not guarantee the usual cartographic requirement that non-contiguous regions of the same country (such as the exclave Alaska and the rest of the United States) be colored identically. | [
{
"paragraph_id": 0,
"text": "In mathematics, the four color theorem, or the four color map theorem, states that no more than four colors are required to color the regions of any map so that no two adjacent regions have the same color. Adjacent means that two regions share a common boundary curve segment, not merely a corner where three or more regions meet. It was the first major theorem to be proved using a computer. Initially, this proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. The proof has gained wide acceptance since then, although some doubters remain.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The four color theorem was proved in 1976 by Kenneth Appel and Wolfgang Haken after many false proofs and counterexamples (unlike the five color theorem, proved in the 1800s, which states that five colors are enough to color a map). To dispel any remaining doubts about the Appel–Haken proof, a simpler proof using the same ideas and still relying on computers was published in 1997 by Robertson, Sanders, Seymour, and Thomas. In 2005, the theorem was also proved by Georges Gonthier with general-purpose theorem-proving software.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In graph-theoretic terms, the theorem states that for loopless planar graph G {\\displaystyle G} , its chromatic number is χ ( G ) ≤ 4 {\\displaystyle \\chi (G)\\leq 4} .",
"title": "Precise formulation of the theorem"
},
{
"paragraph_id": 3,
"text": "The intuitive statement of the four color theorem – \"given any separation of a plane into contiguous regions, the regions can be colored using at most four colors so that no two adjacent regions have the same color\" – needs to be interpreted appropriately to be correct.",
"title": "Precise formulation of the theorem"
},
{
"paragraph_id": 4,
"text": "First, regions are adjacent if they share a boundary segment; two regions that share only isolated boundary points are not considered adjacent. (Otherwise, a map in a shape of a pie chart would make an arbitrarily large number of regions 'adjacent' to each other at a common corner, and require arbitrarily large number of colors as a result.) Second, bizarre regions, such as those with finite area but infinitely long perimeter, are not allowed; maps with such regions can require more than four colors. (To be safe, we can restrict to regions whose boundaries consist of finitely many straight line segments. It is allowed that a region has enclaves, that is it entirely surrounds one or more other regions.) Note that the notion of \"contiguous region\" (technically: connected open subset of the plane) is not the same as that of a \"country\" on regular maps, since countries need not be contiguous (they may have exclaves, e.g., the Cabinda Province as part of Angola, Nakhchivan as part of Azerbaijan, Kaliningrad as part of Russia, France with its overseas territories, and Alaska as part of the United States are not contiguous). If we required the entire territory of a country to receive the same color, then four colors are not always sufficient. For instance, consider a simplified map:",
"title": "Precise formulation of the theorem"
},
{
"paragraph_id": 5,
"text": "In this map, the two regions labeled A belong to the same country. If we wanted those regions to receive the same color, then five colors would be required, since the two A regions together are adjacent to four other regions, each of which is adjacent to all the others. Forcing two separate regions to have the same color can be modelled by adding a 'handle' joining them outside the plane.",
"title": "Precise formulation of the theorem"
},
{
"paragraph_id": 6,
"text": "Such construction makes the problem equivalent to coloring a map on a torus (a surface of genus 1), which requires up to 7 colors for an arbitrary map. A similar construction also applies if a single color is used for multiple disjoint areas, as for bodies of water on real maps, or there are more countries with disjoint territories. In such cases more colors might be required with a growing genus of a resulting surface. (See the section Generalizations below.)",
"title": "Precise formulation of the theorem"
},
{
"paragraph_id": 7,
"text": "A simpler statement of the theorem uses graph theory. The set of regions of a map can be represented more abstractly as an undirected graph that has a vertex for each region and an edge for every pair of regions that share a boundary segment. This graph is planar: it can be drawn in the plane without crossings by placing each vertex at an arbitrarily chosen location within the region to which it corresponds, and by drawing the edges as curves without crossings that lead from one region's vertex, across a shared boundary segment, to an adjacent region's vertex. Conversely any planar graph can be formed from a map in this way. In graph-theoretic terminology, the four-color theorem states that the vertices of every planar graph can be colored with at most four colors so that no two adjacent vertices receive the same color, or for short:",
"title": "Precise formulation of the theorem"
},
{
"paragraph_id": 8,
"text": "As far as is known, the conjecture was first proposed on October 23, 1852, when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. At the time, Guthrie's brother, Frederick, was a student of Augustus De Morgan (the former advisor of Francis) at University College London. Francis inquired with Frederick regarding it, who then took it to De Morgan (Francis Guthrie graduated later in 1852, and later became a professor of mathematics in South Africa). According to De Morgan:",
"title": "History"
},
{
"paragraph_id": 9,
"text": "A student of mine [Guthrie] asked me to day to give him a reason for a fact which I did not know was a fact—and do not yet. He says that if a figure be any how divided and the compartments differently colored so that figures with any portion of common boundary line are differently colored—four colors may be wanted but not more—the following is his case in which four colors are wanted. Query cannot a necessity for five or more be invented…",
"title": "History"
},
{
"paragraph_id": 10,
"text": "\"F.G.\", perhaps one of the two Guthries, published the question in The Athenaeum in 1854, and De Morgan posed the question again in the same magazine in 1860. Another early published reference by Arthur Cayley (1879) in turn credits the conjecture to De Morgan.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "There were several early failed attempts at proving the theorem. De Morgan believed that it followed from a simple fact about four regions, though he didn't believe that fact could be derived from more elementary facts.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "This arises in the following way. We never need four colours in a neighborhood unless there be four counties, each of which has boundary lines in common with each of the other three. Such a thing cannot happen with four areas unless one or more of them be inclosed by the rest; and the colour used for the inclosed county is thus set free to go on with. Now this principle, that four areas cannot each have common boundary with all the other three without inclosure, is not, we fully believe, capable of demonstration upon anything more evident and more elementary; it must stand as a postulate.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "One proposed proof was given by Alfred Kempe in 1879, which was widely acclaimed; another was given by Peter Guthrie Tait in 1880. It was not until 1890 that Kempe's proof was shown incorrect by Percy Heawood, and in 1891, Tait's proof was shown incorrect by Julius Petersen—each false proof stood unchallenged for 11 years.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 1890, in addition to exposing the flaw in Kempe's proof, Heawood proved the five color theorem and generalized the four color conjecture to surfaces of arbitrary genus.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Tait, in 1880, showed that the four color theorem is equivalent to the statement that a certain type of graph (called a snark in modern terminology) must be non-planar.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 1943, Hugo Hadwiger formulated the Hadwiger conjecture, a far-reaching generalization of the four-color problem that still remains unsolved.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "During the 1960s and 1970s, German mathematician Heinrich Heesch developed methods of using computers to search for a proof. Notably he was the first to use discharging for proving the theorem, which turned out to be important in the unavoidability portion of the subsequent Appel–Haken proof. He also expanded on the concept of reducibility and, along with Ken Durre, developed a computer test for it. Unfortunately, at this critical juncture, he was unable to procure the necessary supercomputer time to continue his work.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Others took up his methods, including his computer-assisted approach. While other teams of mathematicians were racing to complete proofs, Kenneth Appel and Wolfgang Haken at the University of Illinois announced, on June 21, 1976, that they had proved the theorem. They were assisted in some algorithmic work by John A. Koch.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "If the four-color conjecture were false, there would be at least one map with the smallest possible number of regions that requires five colors. The proof showed that such a minimal counterexample cannot exist, through the use of two technical concepts:",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Using mathematical rules and procedures based on properties of reducible configurations, Appel and Haken found an unavoidable set of reducible configurations, thus proving that a minimal counterexample to the four-color conjecture could not exist. Their proof reduced the infinitude of possible maps to 1,834 reducible configurations (later reduced to 1,482) which had to be checked one by one by computer and took over a thousand hours. This reducibility part of the work was independently double checked with different programs and computers. However, the unavoidability part of the proof was verified in over 400 pages of microfiche, which had to be checked by hand with the assistance of Haken's daughter Dorothea Blostein (Appel & Haken 1989).",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Appel and Haken's announcement was widely reported by the news media around the world, and the math department at the University of Illinois used a postmark stating \"Four colors suffice.\" At the same time the unusual nature of the proof—it was the first major theorem to be proved with extensive computer assistance—and the complexity of the human-verifiable portion aroused considerable controversy.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In the early 1980s, rumors spread of a flaw in the Appel–Haken proof. Ulrich Schmidt at RWTH Aachen had examined Appel and Haken's proof for his master's thesis that was published in 1981. He had checked about 40% of the unavoidability portion and found a significant error in the discharging procedure (Appel & Haken 1989). In 1986, Appel and Haken were asked by the editor of Mathematical Intelligencer to write an article addressing the rumors of flaws in their proof. They replied that the rumors were due to a \"misinterpretation of [Schmidt's] results\" and obliged with a detailed article. Their magnum opus, Every Planar Map is Four-Colorable, a book claiming a complete and detailed proof (with a microfiche supplement of over 400 pages), appeared in 1989; it explained and corrected the error discovered by Schmidt as well as several further errors found by others (Appel & Haken 1989).",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Since the proving of the theorem, a new approach has led to both a shorter proof and a more efficient algorithm for 4-coloring maps. In 1996, Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas created a quadratic-time algorithm (requiring only O(n) time, where n is the number of vertices), improving on a quartic-time algorithm based on Appel and Haken's proof. The new proof, based on the same ideas, is similar to Appel and Haken's but more efficient because it reduces the complexity of the problem and requires checking only 633 reducible configurations. Both the unavoidability and reducibility parts of this new proof must be executed by a computer and are impractical to check by hand. In 2001, the same authors announced an alternative proof, by proving the snark conjecture. This proof remains unpublished, however.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In 2005, Benjamin Werner and Georges Gonthier formalized a proof of the theorem inside the Coq proof assistant. This removed the need to trust the various computer programs used to verify particular cases; it is only necessary to trust the Coq kernel.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The following discussion is a summary based on the introduction to Every Planar Map is Four Colorable (Appel & Haken 1989). Although flawed, Kempe's original purported proof of the four color theorem provided some of the basic tools later used to prove it. The explanation here is reworded in terms of the modern graph theory formulation above.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 26,
"text": "Kempe's argument goes as follows. First, if planar regions separated by the graph are not triangulated, i.e. do not have exactly three edges in their boundaries, we can add edges without introducing new vertices in order to make every region triangular, including the unbounded outer region. If this triangulated graph is colorable using four colors or fewer, so is the original graph since the same coloring is valid if edges are removed. So it suffices to prove the four color theorem for triangulated graphs to prove it for all planar graphs, and without loss of generality we assume the graph is triangulated.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 27,
"text": "Suppose v, e, and f are the number of vertices, edges, and regions (faces). Since each region is triangular and each edge is shared by two regions, we have that 2e = 3f. This together with Euler's formula, v − e + f = 2, can be used to show that 6v − 2e = 12. Now, the degree of a vertex is the number of edges abutting it. If vn is the number of vertices of degree n and D is the maximum degree of any vertex,",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 28,
"text": "But since 12 > 0 and 6 − i ≤ 0 for all i ≥ 6, this demonstrates that there is at least one vertex of degree 5 or less.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 29,
"text": "If there is a graph requiring 5 colors, then there is a minimal such graph, where removing any vertex makes it four-colorable. Call this graph G. Then G cannot have a vertex of degree 3 or less, because if d(v) ≤ 3, we can remove v from G, four-color the smaller graph, then add back v and extend the four-coloring to it by choosing a color different from its neighbors.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 30,
"text": "Kempe also showed correctly that G can have no vertex of degree 4. As before we remove the vertex v and four-color the remaining vertices. If all four neighbors of v are different colors, say red, green, blue, and yellow in clockwise order, we look for an alternating path of vertices colored red and blue joining the red and blue neighbors. Such a path is called a Kempe chain. There may be a Kempe chain joining the red and blue neighbors, and there may be a Kempe chain joining the green and yellow neighbors, but not both, since these two paths would necessarily intersect, and the vertex where they intersect cannot be colored. Suppose it is the red and blue neighbors that are not chained together. Explore all vertices attached to the red neighbor by red-blue alternating paths, and then reverse the colors red and blue on all these vertices. The result is still a valid four-coloring, and v can now be added back and colored red.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 31,
"text": "This leaves only the case where G has a vertex of degree 5; but Kempe's argument was flawed for this case. Heawood noticed Kempe's mistake and also observed that if one was satisfied with proving only five colors are needed, one could run through the above argument (changing only that the minimal counterexample requires 6 colors) and use Kempe chains in the degree 5 situation to prove the five color theorem.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 32,
"text": "In any case, to deal with this degree 5 vertex case requires a more complicated notion than removing a vertex. Rather the form of the argument is generalized to considering configurations, which are connected subgraphs of G with the degree of each vertex (in G) specified. For example, the case described in degree 4 vertex situation is the configuration consisting of a single vertex labelled as having degree 4 in G. As above, it suffices to demonstrate that if the configuration is removed and the remaining graph four-colored, then the coloring can be modified in such a way that when the configuration is re-added, the four-coloring can be extended to it as well. A configuration for which this is possible is called a reducible configuration. If at least one of a set of configurations must occur somewhere in G, that set is called unavoidable. The argument above began by giving an unavoidable set of five configurations (a single vertex with degree 1, a single vertex with degree 2, ..., a single vertex with degree 5) and then proceeded to show that the first 4 are reducible; to exhibit an unavoidable set of configurations where every configuration in the set is reducible would prove the theorem.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 33,
"text": "Because G is triangular, the degree of each vertex in a configuration is known, and all edges internal to the configuration are known, the number of vertices in G adjacent to a given configuration is fixed, and they are joined in a cycle. These vertices form the ring of the configuration; a configuration with k vertices in its ring is a k-ring configuration, and the configuration together with its ring is called the ringed configuration. As in the simple cases above, one may enumerate all distinct four-colorings of the ring; any coloring that can be extended without modification to a coloring of the configuration is called initially good. For example, the single-vertex configuration above with 3 or less neighbors were initially good. In general, the surrounding graph must be systematically recolored to turn the ring's coloring into a good one, as was done in the case above where there were 4 neighbors; for a general configuration with a larger ring, this requires more complex techniques. Because of the large number of distinct four-colorings of the ring, this is the primary step requiring computer assistance.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 34,
"text": "Finally, it remains to identify an unavoidable set of configurations amenable to reduction by this procedure. The primary method used to discover such a set is the method of discharging. The intuitive idea underlying discharging is to consider the planar graph as an electrical network. Initially positive and negative \"electrical charge\" is distributed amongst the vertices so that the total is positive.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 35,
"text": "Recall the formula above:",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 36,
"text": "Each vertex is assigned an initial charge of 6-deg(v). Then one \"flows\" the charge by systematically redistributing the charge from a vertex to its neighboring vertices according to a set of rules, the discharging procedure. Since charge is preserved, some vertices still have positive charge. The rules restrict the possibilities for configurations of positively charged vertices, so enumerating all such possible configurations gives an unavoidable set.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 37,
"text": "As long as some member of the unavoidable set is not reducible, the discharging procedure is modified to eliminate it (while introducing other configurations). Appel and Haken's final discharging procedure was extremely complex and, together with a description of the resulting unavoidable configuration set, filled a 400-page volume, but the configurations it generated could be checked mechanically to be reducible. Verifying the volume describing the unavoidable configuration set itself was done by peer review over a period of several years.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 38,
"text": "A technical detail not discussed here but required to complete the proof is immersion reducibility.",
"title": "Summary of proof ideas"
},
{
"paragraph_id": 39,
"text": "The four color theorem has been notorious for attracting a large number of false proofs and disproofs in its long history. At first, The New York Times refused, as a matter of policy, to report on the Appel–Haken proof, fearing that the proof would be shown false like the ones before it. Some alleged proofs, like Kempe's and Tait's mentioned above, stood under public scrutiny for over a decade before they were refuted. But many more, authored by amateurs, were never published at all.",
"title": "False disproofs"
},
{
"paragraph_id": 40,
"text": "Generally, the simplest, though invalid, counterexamples attempt to create one region which touches all other regions. This forces the remaining regions to be colored with only three colors. Because the four color theorem is true, this is always possible; however, because the person drawing the map is focused on the one large region, they fail to notice that the remaining regions can in fact be colored with three colors.",
"title": "False disproofs"
},
{
"paragraph_id": 41,
"text": "This trick can be generalized: there are many maps where if the colors of some regions are selected beforehand, it becomes impossible to color the remaining regions without exceeding four colors. A casual verifier of the counterexample may not think to change the colors of these regions, so that the counterexample will appear as though it is valid.",
"title": "False disproofs"
},
{
"paragraph_id": 42,
"text": "Perhaps one effect underlying this common misconception is the fact that the color restriction is not transitive: a region only has to be colored differently from regions it touches directly, not regions touching regions that it touches. If this were the restriction, planar graphs would require arbitrarily large numbers of colors.",
"title": "False disproofs"
},
{
"paragraph_id": 43,
"text": "Other false disproofs violate the assumptions of the theorem, such as using a region that consists of multiple disconnected parts, or disallowing regions of the same color from touching at a point.",
"title": "False disproofs"
},
{
"paragraph_id": 44,
"text": "While every planar map can be colored with four colors, it is NP-complete in complexity to decide whether an arbitrary planar map can be colored with just three colors.",
"title": "Three-coloring"
},
{
"paragraph_id": 45,
"text": "A cubic map can be colored with only three colors if and only if each interior region has an even number of neighboring regions. In the US states map example, landlocked Missouri (MO) has eight neighbors (an even number): it must be differently colored from all of them, but the neighbors can alternate colors, thus this part of the map needs only three colors. However, landlocked Nevada (NV) has five neighbors (an odd number): one of the neighbors must be differently colored from it and all the others, thus four colors are needed here.",
"title": "Three-coloring"
},
{
"paragraph_id": 46,
"text": "The four color theorem applies not only to finite planar graphs, but also to infinite graphs that can be drawn without crossings in the plane, and even more generally to infinite graphs (possibly with an uncountable number of vertices) for which every finite subgraph is planar. To prove this, one can combine a proof of the theorem for finite planar graphs with the De Bruijn–Erdős theorem stating that, if every finite subgraph of an infinite graph is k-colorable, then the whole graph is also k-colorable Nash-Williams (1967). This can also be seen as an immediate consequence of Kurt Gödel's compactness theorem for first-order logic, simply by expressing the colorability of an infinite graph with a set of logical formulae.",
"title": "Generalizations"
},
{
"paragraph_id": 47,
"text": "One can also consider the coloring problem on surfaces other than the plane. The problem on the sphere or cylinder is equivalent to that on the plane. For closed (orientable or non-orientable) surfaces with positive genus, the maximum number p of colors needed depends on the surface's Euler characteristic χ according to the formula",
"title": "Generalizations"
},
{
"paragraph_id": 48,
"text": "where the outermost brackets denote the floor function.",
"title": "Generalizations"
},
{
"paragraph_id": 49,
"text": "Alternatively, for an orientable surface the formula can be given in terms of the genus of a surface, g:",
"title": "Generalizations"
},
{
"paragraph_id": 50,
"text": "This formula, the Heawood conjecture, was proposed by P. J. Heawood in 1890 and, after contributions by several people, proved by Gerhard Ringel and J. W. T. Youngs in 1968. The only exception to the formula is the Klein bottle, which has Euler characteristic 0 (hence the formula gives p = 7) but requires only 6 colors, as shown by Philip Franklin in 1934.",
"title": "Generalizations"
},
{
"paragraph_id": 51,
"text": "For example, the torus has Euler characteristic χ = 0 (and genus g = 1) and thus p = 7, so no more than 7 colors are required to color any map on a torus. This upper bound of 7 is sharp: certain toroidal polyhedra such as the Szilassi polyhedron require seven colors.",
"title": "Generalizations"
},
{
"paragraph_id": 52,
"text": "A Möbius strip requires six colors (Tietze 1910) as do 1-planar graphs (graphs drawn with at most one simple crossing per edge) (Borodin 1984). If both the vertices and the faces of a planar graph are colored, in such a way that no two adjacent vertices, faces, or vertex-face pair have the same color, then again at most six colors are needed (Borodin 1984).",
"title": "Generalizations"
},
{
"paragraph_id": 53,
"text": "For graphs whose vertices are represented as pairs of points on two distinct surfaces, with edges drawn as non-crossing curves on one of the two surfaces, the chromatic number can be at least 9 and is at most 12, but more precise bounds are not known; this is Gerhard Ringel's Earth–Moon problem.",
"title": "Generalizations"
},
{
"paragraph_id": 54,
"text": "There is no obvious extension of the coloring result to three-dimensional solid regions. By using a set of n flexible rods, one can arrange that every rod touches every other rod. The set would then require n colors, or n+1 including the empty space that also touches every rod. The number n can be taken to be any integer, as large as desired. Such examples were known to Fredrick Guthrie in 1880. Even for axis-parallel cuboids (considered to be adjacent when two cuboids share a two-dimensional boundary area) an unbounded number of colors may be necessary (Reed & Allwright 2008; Magnant & Martin (2011)).",
"title": "Generalizations"
},
{
"paragraph_id": 55,
"text": "Dror Bar-Natan gave a statement concerning Lie algebras and Vassiliev invariants which is equivalent to the four color theorem.",
"title": "Relation to other areas of mathematics"
},
{
"paragraph_id": 56,
"text": "Despite the motivation from coloring political maps of countries, the theorem is not of particular interest to cartographers. According to an article by the math historian Kenneth May, \"Maps utilizing only four colors are rare, and those that do usually require only three. Books on cartography and the history of mapmaking do not mention the four-color property\". The theorem also does not guarantee the usual cartographic requirement that non-contiguous regions of the same country (such as the exclave Alaska and the rest of the United States) be colored identically.",
"title": "Use outside of mathematics"
}
]
| In mathematics, the four color theorem, or the four color map theorem, states that no more than four colors are required to color the regions of any map so that no two adjacent regions have the same color. Adjacent means that two regions share a common boundary curve segment, not merely a corner where three or more regions meet. It was the first major theorem to be proved using a computer. Initially, this proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. The proof has gained wide acceptance since then, although some doubters remain. The four color theorem was proved in 1976 by Kenneth Appel and Wolfgang Haken after many false proofs and counterexamples. To dispel any remaining doubts about the Appel–Haken proof, a simpler proof using the same ideas and still relying on computers was published in 1997 by Robertson, Sanders, Seymour, and Thomas. In 2005, the theorem was also proved by Georges Gonthier with general-purpose theorem-proving software. | 2001-08-05T03:13:51Z | 2023-12-18T16:20:58Z | [
"Template:Multiple image",
"Template:Portal",
"Template:Refend",
"Template:Harvtxt",
"Template:Harvnb",
"Template:Clear",
"Template:Refbegin",
"Template:Short description",
"Template:Pn",
"Template:Harvs",
"Template:Harv",
"Template:Dead link",
"Template:Springer",
"Template:Sfnp",
"Template:No footnotes",
"Template:Citation",
"Template:Commons category"
]
| https://en.wikipedia.org/wiki/Four_color_theorem |
10,951 | Fahrenheit 451 | Fahrenheit 451 is a 1953 dystopian novel by American writer Ray Bradbury. It presents a future American society where books have been outlawed and "firemen" burn any that are found. The novel follows Guy Montag, a fireman who soon becomes disillusioned with his role of censoring literature and destroying knowledge, and who eventually quits his job and commits himself to the preservation of literary and cultural writings.
Fahrenheit 451 was written by Bradbury during the Second Red Scare and the McCarthy era, inspired by the book burnings in Nazi Germany and by ideological repression in the Soviet Union. Bradbury's claimed motivation for writing the novel has changed multiple times. In a 1956 radio interview, Bradbury said that he wrote the book because of his concerns about the threat of burning books in the United States. In later years, he described the book as a commentary on how mass media reduces interest in reading literature. In a 1994 interview, Bradbury cited political correctness as an allegory for the censorship in the book, calling it "the real enemy these days" and labelling it as "thought control and freedom of speech control."
The writing and themes within Fahrenheit 451 were explored by Bradbury in some of his previous short stories. Between 1947 and 1948, Bradbury wrote "Bright Phoenix", a short story about a librarian who confronts a "Chief Censor" who burns books. An encounter Bradbury had in 1949 with the police inspired him to write the short story "The Pedestrian" in 1951. In "The Pedestrian", a man going for a nighttime walk in his neighborhood is harassed and detained by the police. In the society of "The Pedestrian", citizens are expected to watch television as a leisurely activity, a detail that would be included in Fahrenheit 451. Elements of both "Bright Phoenix" and "The Pedestrian" would be combined into The Fireman, a novella published in 1951. Bradbury was urged by Stanley Kauffmann, a publisher at Ballantine Books, to make The Fireman into a full novel. Bradbury finished the manuscript for Fahrenheit 451 in 1953, and the novel was published later that year.
Upon its release, Fahrenheit 451 was a critical success, albeit with notable outliers. The novel's subject matter led to its censorship in apartheid South Africa and various schools in the United States. In 1954, Fahrenheit 451 won the American Academy of Arts and Letters Award in Literature and the Commonwealth Club of California Gold Medal. It later won the Prometheus "Hall of Fame" Award in 1984 and a "Retro" Hugo Award in 2004. Bradbury was honored with a Spoken Word Grammy nomination for his 1976 audiobook version. The novel has also been adapted into films, stage plays, and video games. Film adaptations of the novel include a 1966 film directed by François Truffaut starring Oskar Werner as Guy Montag, an adaptation that was met with mixed critical reception, and a 2018 television film directed by Ramin Bahrani starring Michael B. Jordan as Montag that also received a mixed critical reception. Bradbury himself published a stage play version in 1979 and helped develop a 1984 interactive fiction video game of the same name, as well as a collection of his short stories titled A Pleasure to Burn. Two BBC Radio dramatizations were also produced.
Shortly after the atomic bombings of Hiroshima and Nagasaki at the conclusion of World War II, the United States focused its concern on the Soviet atomic bomb project and the expansion of communism. The House Un-American Activities Committee (HUAC), formed in 1938 to investigate American citizens and organizations suspected of having communist ties, held hearings in 1947 to investigate alleged communist influence in Hollywood movie-making. These hearings resulted in the blacklisting of the so-called "Hollywood Ten", a group of influential screenwriters and directors.
The year HUAC began investigating Hollywood is often considered the beginning of the Cold War, as in March 1947, the Truman Doctrine was announced. By about 1950, the Cold War was in full swing, and the American public's fear of nuclear warfare and communist influence was at a feverish level.
The government's interference in the affairs of artists and creative types infuriated Bradbury; he was bitter and concerned about the workings of his government, and a late 1949 nighttime encounter with an overzealous police officer would inspire Bradbury to write "The Pedestrian", a short story which would go on to become "The Fireman" and then Fahrenheit 451. The rise of Senator Joseph McCarthy's hearings hostile to accused communists, beginning in 1950, deepened Bradbury's contempt for government overreach.
The Golden Age of Radio occurred between the early 1920s and the late 1950s, during Bradbury's early life, while the transition to the Golden Age of Television began right around the time he started to work on the stories that would eventually lead to Fahrenheit 451. Bradbury saw these forms of media as a threat to the reading of books, indeed as a threat to society, as he believed they could act as a distraction from important affairs. This contempt for mass media and technology would express itself through Mildred and her friends and is an important theme in the book.
Bradbury's lifelong passion for books began at an early age. After he graduated from high school, his family could not afford for him to attend college, so Bradbury began spending time at the Los Angeles Public Library where he educated himself. As a frequent visitor to his local libraries in the 1920s and 1930s, he recalls being disappointed because they did not stock popular science fiction novels, like those of H. G. Wells, because, at the time, they were not deemed literary enough. Between this and learning about the destruction of the Library of Alexandria, a great impression was made on Bradbury about the vulnerability of books to censure and destruction. Later, as a teenager, Bradbury was horrified by the Nazi book burnings and later by Joseph Stalin's campaign of political repression, the "Great Purge", in which writers and poets, among many others, were arrested and often executed.
In a distant future, Guy Montag is a fireman employed to burn outlawed books, along with the houses they are hidden in. One fall night while returning from work, he meets his new neighbor Clarisse McClellan, a teenage girl whose free-thinking ideals and liberating spirit cause him to question his life and perceived happiness. Montag returns home to find that his wife Mildred has overdosed on sleeping pills, and he calls for medical attention. Two EMTs later pump her stomach and change her blood. After they leave to rescue another overdose victim, Montag overhears Clarisse and her family talking about their illiterate society. Shortly afterward, Montag's mind is bombarded with Clarisse's subversive thoughts and the memory of Mildred's near-death. Over the next few days, Clarisse meets Montag each night as he walks home. Clarisse's simple pleasures and interests make her an outcast among her peers, and she is forced to go to therapy for her behavior. Montag always looks forward to the meetings, but one day, Clarisse goes missing.
In the following days, while he and other firemen are ransacking the book-filled house of an old woman and drenching it in kerosene, Montag steals a book. The woman refuses to leave her house and her books, choosing instead to light a match and burn herself alive. Jarred by the suicide, Montag returns home and hides the book under his pillow. Later, Montag asks Mildred if she has heard anything about Clarisse. She reveals that Clarisse's family moved away after Clarisse was hit by a speeding car and died four days ago. Dismayed by her failure to mention this earlier, Montag uneasily tries to fall asleep. Outside he suspects the presence of "The Mechanical Hound", an eight-legged robotic dog-like creature that resides in the firehouse and aids the firemen in hunting book hoarders.
Montag awakens ill the next morning. Mildred tries to care for her husband but finds herself more involved in the "parlor wall" entertainment in the living room – large televisions filling the walls. Montag suggests he should take a break from being a fireman, and Mildred panics over the thought of losing the house and her parlor wall "family". Captain Beatty, Montag's fire chief, visits Montag to see how he is doing. Sensing his concerns, Beatty recounts the history of how books had lost their value and how the firemen were adapted for their current role: over decades, people began to embrace new media (like film and television), sports, and an ever-quickening pace of life. Books were abridged or degraded to accommodate shorter attention spans. At the same time, advances in technology resulted in nearly all buildings being made with fireproof materials, and firemen preventing fires were no longer necessary. The government then instead turned the firemen into officers of society's peace of mind: instead of putting out fires, they were charged with starting them, specifically to burn books, which were condemned as sources of confusing and depressing thoughts that complicated people's lives. After an awkward exchange between Mildred and Montag over the book hidden under his pillow, Beatty becomes suspicious and casually adds a passing threat before leaving; he says that if a fireman had a book, he would be asked to burn it within the following twenty-four hours. If he refused, the other firemen would come and burn it for him. The encounter leaves Montag utterly shaken.
Montag later reveals to Mildred that, over the last year, he has accumulated books that are hidden in their ceiling. In a panic, Mildred grabs a book and rushes to throw it in the kitchen incinerator, but Montag subdues her and says they are going to read the books to see if they have value. If they do not, he promises the books will be burned and their lives will return to normal.
Mildred refuses to go along with Montag's plan, questioning why she or anyone else should care about books. Montag goes on a rant about Mildred's suicide attempt, Clarisse's disappearance and death, the woman who burned herself, and the imminent war that goes ignored by the masses. He suggests that perhaps the books of the past have messages that can save society from its own destruction. Still, Mildred remains unconvinced.
Conceding that Mildred is a lost cause, Montag realizes he will need help to understand the books. He remembers an old man named Faber, an English professor before books were banned, whom he once met in a park. Montag visits Faber's home carrying a copy of the Bible, the book he stole from the woman's house. Once there, after multiple attempts to ask, Montag forces the scared and reluctant Faber into helping him by methodically ripping pages from the Bible. Faber concedes and gives Montag a homemade earpiece communicator so that he can offer constant guidance.
At home, Mildred's friends, Mrs. Bowles and Mrs. Phelps, arrive to watch the "parlor walls". Not interested in this entertainment, Montag turns off the walls and tries to engage the women in meaningful conversation, only for them to reveal just how indifferent, ignorant, and callous they truly are. Enraged, Montag shows them a book of poetry. This confuses the women and alarms Faber, who is listening remotely. Mildred tries to dismiss Montag's actions as a tradition firemen act out once a year: they find an old book and read it as a way to make fun of how silly the past is. Montag proceeds to recite the poem "Dover Beach", causing Mrs. Phelps to cry. Soon, the two women leave.
Montag hides his books in the backyard before returning to the firehouse late at night. There, Montag hands Beatty a book to cover for the one he believes Beatty knows he stole the night before, which is tossed into the trash. Beatty reveals that, despite his disillusionment, he was once an enthusiastic reader. A fire alarm sounds and Beatty picks up the address from the dispatcher system. They drive in the fire truck to the unexpected destination: Montag's house.
Beatty orders Montag to destroy his house with a flamethrower, rather than the more powerful "salamander" that is usually used by the fire team, and tells him that his wife and her friends reported him. Montag watches as Mildred walks out of the house, too traumatized about losing her parlor wall 'family' to even acknowledge her husband's existence or the situation going on around her, and catches a taxi. Montag complies, destroying the home piece by piece, but Beatty discovers his earpiece and plans to hunt down Faber. Montag threatens Beatty with the flamethrower and, after Beatty taunts him, Montag burns Beatty alive. As Montag tries to escape the scene, the Mechanical Hound attacks him, managing to inject his leg with an anesthetic. He destroys the Hound with the flamethrower and limps away. While escaping, he concludes that Beatty had wanted to die a long time ago and had purposely goaded Montag as well as provided him with a weapon.
Montag runs towards Faber's house. En route, he crosses a road as a car attempts to run him over, but he manages to evade the vehicle, almost suffering the same fate as Clarisse and losing his knee. Faber urges him to make his way to the countryside and contact a group of exiled book-lovers who live there. Faber will be leaving on a bus heading to St. Louis, Missouri, where he and Montag can rendezvous later. Meanwhile, another Mechanical Hound is released to track down and kill Montag, with news helicopters following it to create a public spectacle. After wiping his scent from around the house in hopes of thwarting the Hound, Montag leaves. He escapes the manhunt by wading into a river and floating downstream, where he meets the book-lovers. They predicted Montag's arrival while watching the TV.
The drifters are all former intellectuals. Each of them has memorized a book so that, should the day arrive when society comes to an end, the survivors can rediscover and embrace the literature of the past. Wanting to contribute to the group, Montag finds that he has partially memorized the Book of Ecclesiastes, and discovers that the group has a special technique for unlocking photographic memory. While they discuss what they have learned, Montag and the group watch helplessly as bombers fly overhead and annihilate the city with nuclear weapons: the war has begun and ended in the same night. While Faber would have left on the early bus, everyone else (possibly including Mildred) is killed. Injured and dirtied, Montag and the group manage to survive the shockwave.
When the war is over, the exiles return to the city to rebuild society.
The characters in Fahrenheit 451 are multi-dimensional, shaped not only by who they are but also by what they have been through and by their external surroundings. In "Distortion of 'Self-Image': Effects of Mental Delirium in Fahrenheit 451 by Ray Bradbury", Jerrin and Bhuvaneswari state that "the Self is the conscious image of one's cognition or mental identity, an element that evaluates external factors such as the environment or other's self."
The title page of the book explains the title as follows: Fahrenheit 451—The temperature at which book paper catches fire and burns.... On inquiring about the temperature at which paper would catch fire, Bradbury had been told that 451 °F (233 °C) was the autoignition temperature of paper. In various studies, scientists have placed the autoignition temperature at a range of temperatures between 424 and 475 °F (218 and 246 °C), depending on the type of paper.
Fahrenheit 451 developed out of a series of ideas Bradbury had visited in previously written stories. For many years, he tended to single out "The Pedestrian" in interviews and lectures as sort of a proto-Fahrenheit 451. In the Preface of his 2006 anthology Match to Flame: The Fictional Paths to Fahrenheit 451 he states that this is an oversimplification. The full genealogy of Fahrenheit 451 given in Match to Flame is involved. The following covers the most salient aspects.
Between 1947 and 1948, Bradbury wrote the short story "Bright Phoenix" (not published until the May 1963 issue of The Magazine of Fantasy & Science Fiction) about a librarian who confronts a book-burning "Chief Censor" named Jonathan Barnes.
In late 1949, Bradbury was stopped and questioned by a police officer while walking late one night. When asked "What are you doing?", Bradbury wisecracked, "Putting one foot in front of another." This incident inspired Bradbury to write the 1951 short story "The Pedestrian".
In "The Pedestrian", Leonard Mead is harassed and detained by the city's single remotely operated police cruiser for taking nighttime walks, something that has become extremely rare in this future setting: everybody else stays inside and watches television ("viewing screens"). Alone and without an alibi, Mead is taken to the "Psychiatric Center for Research on Regressive Tendencies" for his peculiar habit. Fahrenheit 451 would later echo this theme of an authoritarian society distracted by broadcast media.
Bradbury expanded the book-burning premise of "Bright Phoenix" and the totalitarian future of "The Pedestrian" into "The Fireman", a novella published in the February 1951 issue of Galaxy Science Fiction. "The Fireman" was written in the basement of UCLA's Powell Library on a typewriter that he rented for a fee of ten cents per half hour. The first draft was 25,000 words long and was completed in nine days.
Urged by a publisher at Ballantine Books to double the length of his story to make a novel, Bradbury returned to the same typing room and made the story 25,000 words longer, again taking just nine days. The title "Fahrenheit 451" came to him on January 22. The final manuscript was ready in mid-August 1953. The resulting novel, which some considered a fix-up (despite being an expanded rewrite of a single novella), was published by Ballantine in 1953.
Bradbury has supplemented the novel with various front and back matter, including a 1979 coda, a 1982 afterword, a 1993 foreword, and several introductions.
The first U.S. printing was a paperback version from October 1953 by The Ballantine Publishing Group. Shortly after the paperback, a hardback version was released that included a special edition of 200 signed and numbered copies bound in asbestos. These were technically collections because the novel was published with two short stories, "The Playground" and "And the Rock Cried Out", which have been absent from later printings. A few months later, the novel was serialized in the March, April, and May 1954 issues of the nascent Playboy magazine.
Starting in January 1967, Fahrenheit 451 was subject to expurgation by its publisher, Ballantine Books, with the release of the "Bal-Hi Edition" aimed at high school students. Among the changes made by the publisher were the censorship of the words "hell", "damn", and "abortion"; the modification of seventy-five passages; and the changing of two incidents.
In the first incident, a drunk man was changed to a "sick man"; in the second, a description of cleaning fluff out of a human navel became "cleaning ears" instead. For a while, both the censored and uncensored versions were available concurrently, but by 1973 Ballantine was publishing only the censored version. That continued until 1979, when it came to Bradbury's attention:
In 1979, one of Bradbury's friends showed him an expurgated copy of the book. Bradbury demanded that Ballantine Books withdraw that version and replace it with the original, and in 1980 the original version once again became available. In this reinstated work, in the Author's Afterword, Bradbury relates to the reader that it is not uncommon for a publisher to expurgate an author's work, but he asserts that he himself will not tolerate the practice of manuscript "mutilation".
The "Bal-Hi" editions are now referred to by the publisher as the "Revised Bal-Hi" editions.
An audiobook version read by Bradbury himself was released in 1976 and received a Spoken Word Grammy nomination. Another audiobook was released in 2005 narrated by Christopher Hurt. The e-book version was released in December 2011.
In 1954, Galaxy Science Fiction reviewer Groff Conklin placed the novel "among the great works of the imagination written in English in the last decade or more." The Chicago Sunday Tribune's August Derleth described the book as "a savage and shockingly prophetic view of one possible future way of life", calling it "compelling" and praising Bradbury for his "brilliant imagination". Over half a century later, Sam Weller wrote, "upon its publication, Fahrenheit 451 was hailed as a visionary work of social commentary." Today, Fahrenheit 451 is still viewed as an important cautionary tale about conformity and the evils of government censorship.
When the novel was first published, there were those who did not find merit in the tale. Anthony Boucher and J. Francis McComas were less enthusiastic, faulting the book for being "simply padded, occasionally with startlingly ingenious gimmickry, ... often with coruscating cascades of verbal brilliance [but] too often merely with words." Reviewing the book for Astounding Science Fiction, P. Schuyler Miller characterized the title piece as "one of Bradbury's bitter, almost hysterical diatribes," while praising its "emotional drive and compelling, nagging detail." Similarly, The New York Times was unimpressed with the novel and further accused Bradbury of developing a "virulent hatred for many aspects of present-day culture, namely, such monstrosities as radio, TV, most movies, amateur and professional sports, automobiles, and other similar aberrations which he feels debase the bright simplicity of the thinking man's existence."
Fahrenheit 451 was number seven on the New York Public Library's list of "Top Check Outs OF ALL TIME".
In the years since its publication, Fahrenheit 451 has occasionally been banned, censored, or redacted in some schools at the behest of parents or teaching staff either unaware of or indifferent to the inherent irony in such censorship.
Discussions about Fahrenheit 451 often center on its story foremost as a warning against state-based censorship. Indeed, when Bradbury wrote the novel during the McCarthy era, he was concerned about censorship in the United States. During a radio interview in 1956, Bradbury said:
I wrote this book at a time when I was worried about the way things were going in this country four years ago. Too many people were afraid of their shadows; there was a threat of book burning. Many of the books were being taken off the shelves at that time. And of course, things have changed a lot in four years. Things are going back in a very healthy direction. But at the time I wanted to do some sort of story where I could comment on what would happen to a country if we let ourselves go too far in this direction, where then all thinking stops, and the dragon swallows his tail, and we sort of vanish into a limbo and we destroy ourselves by this sort of action.
As time went by, Bradbury tended to dismiss censorship as a chief motivating factor for writing the story. Instead he usually claimed that the real messages of Fahrenheit 451 were about the dangers of an illiterate society infatuated with mass media and the threat of minority and special interest groups to books. In the late 1950s, Bradbury recounted:
In writing the short novel Fahrenheit 451, I thought I was describing a world that might evolve in four or five decades. But only a few weeks ago, in Beverly Hills one night, a husband and wife passed me, walking their dog. I stood staring after them, absolutely stunned. The woman held in one hand a small cigarette-package-sized radio, its antenna quivering. From this sprang tiny copper wires which ended in a dainty cone plugged into her right ear. There she was, oblivious to man and dog, listening to far winds and whispers and soap-opera cries, sleep-walking, helped up and down curbs by a husband who might just as well not have been there. This was not fiction.
This story echoes Mildred's "Seashell ear-thimbles" (i.e., a brand of in-ear headphones) that act as an emotional barrier between her and Montag. In a 2007 interview, Bradbury maintained that people misinterpret his book and that Fahrenheit 451 is really a statement on how mass media like television marginalizes the reading of literature. Regarding minorities, he wrote in his 1979 Coda:
'There is more than one way to burn a book. And the world is full of people running about with lit matches. Every minority, be it Baptist/Unitarian, Irish/Italian/Octogenarian/Zen Buddhist, Zionist/Seventh-day Adventist, Women's Lib/Republican, Mattachine/Four Square Gospel feels it has the will, the right, the duty to douse the kerosene, light the fuse. [...] Fire-Captain Beatty, in my novel Fahrenheit 451, described how the books were burned first by minorities, each ripping a page or a paragraph from this book, then that, until the day came when the books were empty and the minds shut and the libraries closed forever. [...] Only six weeks ago, I discovered that, over the years, some cubby-hole editors at Ballantine Books, fearful of contaminating the young, had, bit by bit, censored some seventy-five separate sections from the novel. Students, reading the novel, which, after all, deals with censorship and book-burning in the future, wrote to tell me of this exquisite irony. Judy-Lynn del Rey, one of the new Ballantine editors, is having the entire book reset and republished this summer with all the damns and hells back in place.
Book-burning censorship, Bradbury would argue, was a side-effect of these two primary factors; this is consistent with Captain Beatty's speech to Montag about the history of the firemen. According to Bradbury, it is the people, not the state, who are the culprit in Fahrenheit 451. Fahrenheit's censorship is not the result of an authoritarian program to retain power, but the result of a fragmented society seeking to accommodate its challenges by deploying the power of entertainment and technology. As Captain Beatty explains (p. 55):
"...The bigger your market, Montag, the less you handle controversy, remember that! All the minor minorities with their navels to be kept clean."[...] "It didn't come from the Government down. There was no dictum, no declaration, no censorship, to start with, no! Technology, mass exploitation, and minority pressure carried the trick, thank God."
A variety of other themes in the novel besides censorship have been suggested. Two major themes are resistance to conformity and the control of individuals via technology and mass media. Bradbury explores how the government is able to use mass media to influence society and suppress individualism through book burning. The characters Beatty and Faber point out that the American population is to blame: because of its constant desire for a simplistic, positive image, books must be suppressed. Beatty blames the minority groups, who would take offense at published works that depicted them in an unfavorable light. Faber goes further, stating that the American population simply stopped reading on its own, and he notes that the book burnings themselves became a form of entertainment for the general public.
In a 1994 interview, Bradbury stated that Fahrenheit 451 was more relevant during this time than in any other, stating that, "it works even better because we have political correctness now. Political correctness is the real enemy these days. The black groups want to control our thinking and you can't say certain things. The homosexual groups don't want you to criticize them. It's thought control and freedom of speech control."
Fahrenheit 451 is set in an unspecified city and time, though it is written as if set in a distant future. The earliest editions make clear that it takes place no earlier than the year 2022 due to a reference to an atomic war taking place during that year.
Bradbury described himself as "a preventer of futures, not a predictor of them." He did not believe that book burning was an inevitable part of the future; he wanted to warn against its development. In a later interview, when asked whether he believed that teaching Fahrenheit 451 in schools would prevent his totalitarian vision of the future, Bradbury replied in the negative. Rather, he stated that education must begin at the kindergarten and first-grade level: if students are unable to read then, they will be unable to read Fahrenheit 451.
As to technology, Sam Weller notes that Bradbury "predicted everything from flat-panel televisions to earbud headphones and twenty-four-hour banking machines."
Playhouse 90 broadcast "A Sound of Different Drummers" on CBS in 1957, written by Robert Alan Aurthur. The play combined plot ideas from Fahrenheit 451 and Nineteen Eighty-Four. Bradbury sued and eventually won on appeal.
A film adaptation written and directed by François Truffaut and starring Oskar Werner and Julie Christie was released in 1966.
A film adaptation directed by Ramin Bahrani and starring Michael B. Jordan, Michael Shannon, Sofia Boutella, and Lilly Singh was released in 2018 for HBO.
In the late 1970s, Bradbury adapted his book into a play. At least part of it was performed at the Colony Theatre in Los Angeles in 1979, but it was not in print until 1986, and the official world premiere did not take place until November 1988, by the Fort Wayne, Indiana Civic Theatre. The stage adaptation diverges considerably from the book and seems influenced by Truffaut's movie. For example, fire chief Beatty's character is fleshed out and is the wordiest role in the play. As in the movie, Clarisse does not simply disappear but in the finale meets up with Montag as a book character (she as Robert Louis Stevenson, he as Edgar Allan Poe).
The UK premiere of Bradbury's stage adaptation was not until 2003 in Nottingham, while it took until 2006 before the Godlight Theatre Company produced and performed its New York City premiere at 59E59 Theaters. After the completion of the New York run, the production then transferred to the Edinburgh Festival where it was a 2006 Edinburgh Festival Pick of the Fringe.
The Off-Broadway theatre The American Place Theatre presented a one-man show adaptation of Fahrenheit 451 as a part of their 2008–2009 Literature to Life season.
Fahrenheit 451 inspired the Birmingham Repertory Theatre production Time Has Fallen Asleep in the Afternoon Sunshine, which was performed at the Birmingham Central Library in April 2012.
In 1982, Gregory Evans' radio dramatization of the novel was broadcast on BBC Radio 4 starring Michael Pennington as Montag. It was broadcast eight more times on BBC Radio 4 Extra, twice each in 2010, 2012, 2013, and 2015.
BBC Radio's second dramatization, by David Calcutt, was broadcast on BBC Radio 4 in 2003, starring Stephen Tomlin in the same role.
In 1984, the new wave band Scortilla released the song "Fahrenheit 451", inspired by Bradbury's novel and Truffaut's film.
In 1984, the novel was adapted into a computer text adventure game of the same name by the software company Trillium.
In June 2009, a graphic novel edition of the book was published. Entitled Ray Bradbury's Fahrenheit 451: The Authorized Adaptation, the paperback graphic adaptation was illustrated by Tim Hamilton. The introduction in the novel is written by Bradbury himself.
Michael Moore's 2004 documentary Fahrenheit 9/11 refers to Bradbury's novel and the September 11 attacks, emphasized by the film's tagline "The temperature where freedom burns". The film takes a critical look at the presidency of George W. Bush, the War on Terror, and its coverage in the news media, and became the highest grossing documentary of all time. Bradbury was upset by what he considered the appropriation of his title, and wanted the film renamed. Moore filmed a subsequent documentary about the election of Donald Trump called Fahrenheit 11/9 in 2018.
In 2015, the Internet Engineering Steering Group approved the publication of An HTTP Status Code to Report Legal Obstacles, now RFC 7725, which specifies that websites forced to block resources for legal reasons should return a status code of 451 when users request those resources.
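As an illustrative aside, a minimal sketch of how a web server might return the 451 status described by RFC 7725 is shown below, using only Python's standard library. The blocked path and port used here are hypothetical, and this is only one possible way of producing such a response, not a prescription from the RFC itself.

from http.server import BaseHTTPRequestHandler, HTTPServer

class LegalBlockHandler(BaseHTTPRequestHandler):
    # Hypothetical set of paths blocked for legal reasons.
    BLOCKED_PATHS = {"/banned-book"}

    def do_GET(self):
        if self.path in self.BLOCKED_PATHS:
            # Per RFC 7725, status 451 signals "Unavailable For Legal Reasons".
            self.send_response(451, "Unavailable For Legal Reasons")
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"Access to this resource is denied for legal reasons.\n")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"OK\n")

if __name__ == "__main__":
    # Serve on an arbitrary local port purely for demonstration purposes.
    HTTPServer(("localhost", 8451), LegalBlockHandler).serve_forever()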
Guy Montag (as Gui Montag), is used in the 1998 real-time strategy game Starcraft as a terran firebat hero.
Jerrin, Neil Beeto, and G. Bhuvaneswari. "Distortion of 'Self-Image': Effects of Mental Delirium in Fahrenheit 451 by Ray Bradbury." Theory & Practice in Language Studies, vol. 12, no. 8, Aug. 2022, pp. 1634–40. EBSCOhost, https://doi.org/10.17507/tpls.1208.21.
{
"paragraph_id": 0,
"text": "Fahrenheit 451 is a 1953 dystopian novel by American writer Ray Bradbury. It presents an American society where books have been personified and outlawed and \"firemen\" burn any that are found. The novel follows in the viewpoint of Guy Montag, a fireman who soon becomes disillusioned with his role of censoring literature and destroying knowledge, eventually quitting his job and committing himself to the preservation of literary and cultural writings.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Fahrenheit 451 was written by Bradbury during the Second Red Scare and the McCarthy era, inspired by the book burnings in Nazi Germany and by ideological repression in the Soviet Union. Bradbury's claimed motivation for writing the novel has changed multiple times. In a 1956 radio interview, Bradbury said that he wrote the book because of his concerns about the threat of burning books in the United States. In later years, he described the book as a commentary on how mass media reduces interest in reading literature. In a 1994 interview, Bradbury cited political correctness as an allegory for the censorship in the book, calling it \"the real enemy these days\" and labelling it as \"thought control and freedom of speech control.\"",
"title": ""
},
{
"paragraph_id": 2,
"text": "The writing and theme within Fahrenheit 451 was explored by Bradbury in some of his previous short stories. Between 1947 and 1948, Bradbury wrote \"Bright Phoenix\", a short story about a librarian who confronts a \"Chief Censor\", who burns books. An encounter Bradbury had in 1949 with the police inspired him to write the short story \"The Pedestrian\" in 1951. In \"The Pedestrian\", a man going for a nighttime walk in his neighborhood is harassed and detained by the police. In the society of \"The Pedestrian\", citizens are expected to watch television as a leisurely activity, a detail that would be included in Fahrenheit 451. Elements of both \"Bright Phoenix\" and \"The Pedestrian\" would be combined into The Fireman, a novella published in 1951. Bradbury was urged by Stanley Kauffmann, a publisher at Ballantine Books, to make The Fireman into a full novel. Bradbury finished the manuscript for Fahrenheit 451 in 1953, and the novel was published later that year.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Upon its release, Fahrenheit 451 was a critical success, albeit with notable outliers. The novel's subject matter led to its censorship in apartheid South Africa and various schools in the United States. In 1954, Fahrenheit 451 won the American Academy of Arts and Letters Award in Literature and the Commonwealth Club of California Gold Medal. It later won the Prometheus \"Hall of Fame\" Award in 1984 and a \"Retro\" Hugo Award in 2004. Bradbury was honored with a Spoken Word Grammy nomination for his 1976 audiobook version. The novel has also been adapted into films, stage plays, and video games. Film adaptations of the novel include a 1966 film directed by François Truffaut starring Oskar Werner as Guy Montag, an adaptation that was met with mixed critical reception, and a 2018 television film directed by Ramin Bahrani starring Michael B. Jordan as Montag that also received a mixed critical reception. Bradbury himself published a stage play version in 1979 and helped develop a 1984 interactive fiction video game of the same name, as well as a collection of his short stories titled A Pleasure to Burn. Two BBC Radio dramatizations were also produced.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Shortly after the atomic bombings of Hiroshima and Nagasaki at the conclusion of World War II, the United States focused its concern on the Soviet atomic bomb project and the expansion of communism. The House Un-American Activities Committee (HUAC), formed in 1938 to investigate American citizens and organizations suspected of having communist ties, held hearings in 1947 to investigate alleged communist influence in Hollywood movie-making. These hearings resulted in the blacklisting of the so-called \"Hollywood Ten\", a group of influential screenwriters and directors.",
"title": "Historical and biographical context"
},
{
"paragraph_id": 5,
"text": "The year HUAC began investigating Hollywood is often considered the beginning of the Cold War, as in March 1947, the Truman Doctrine was announced. By about 1950, the Cold War was in full swing, and the American public's fear of nuclear warfare and communist influence was at a feverish level.",
"title": "Historical and biographical context"
},
{
"paragraph_id": 6,
"text": "The government's interference in the affairs of artists and creative types infuriated Bradbury; he was bitter and concerned about the workings of his government, and a late 1949 nighttime encounter with an overzealous police officer would inspire Bradbury to write \"The Pedestrian\", a short story which would go on to become \"The Fireman\" and then Fahrenheit 451. The rise of Senator Joseph McCarthy's hearings hostile to accused communists, beginning in 1950, deepened Bradbury's contempt for government overreach.",
"title": "Historical and biographical context"
},
{
"paragraph_id": 7,
"text": "The Golden Age of Radio occurred between the early 1920s to the late 1950s, during Bradbury's early life, while the transition to the Golden Age of Television began right around the time he started to work on the stories that would eventually lead to Fahrenheit 451. Bradbury saw these forms of media as a threat to the reading of books, indeed as a threat to society, as he believed they could act as a distraction from important affairs. This contempt for mass media and technology would express itself through Mildred and her friends and is an important theme in the book.",
"title": "Historical and biographical context"
},
{
"paragraph_id": 8,
"text": "Bradbury's lifelong passion for books began at an early age. After he graduated from high school, his family could not afford for him to attend college, so Bradbury began spending time at the Los Angeles Public Library where he educated himself. As a frequent visitor to his local libraries in the 1920s and 1930s, he recalls being disappointed because they did not stock popular science fiction novels, like those of H. G. Wells, because, at the time, they were not deemed literary enough. Between this and learning about the destruction of the Library of Alexandria, a great impression was made on Bradbury about the vulnerability of books to censure and destruction. Later, as a teenager, Bradbury was horrified by the Nazi book burnings and later by Joseph Stalin's campaign of political repression, the \"Great Purge\", in which writers and poets, among many others, were arrested and often executed.",
"title": "Historical and biographical context"
},
{
"paragraph_id": 9,
"text": "In a distant future, Guy Montag is a fireman employed to burn outlawed books, along with the houses they are hidden in. One fall night while returning from work, he meets his new neighbor Clarisse McClellan, a teenage girl whose free-thinking ideals and liberating spirit cause him to question his life and perceived happiness. Montag returns home to find that his wife Mildred has overdosed on sleeping pills, and he calls for medical attention. Two EMTs later pump her stomach and change her blood. After they leave to rescue another overdose victim, Montag overhears Clarisse and her family talking about their illiterate society. Shortly afterward, Montag's mind is bombarded with Clarisse's subversive thoughts and the memory of Mildred's near-death. Over the next few days, Clarisse meets Montag each night as he walks home. Clarisse's simple pleasures and interests make her an outcast among her peers, and she is forced to go to therapy for her behavior. Montag always looks forward to the meetings, but one day, Clarisse goes missing.",
"title": "Plot summary"
},
{
"paragraph_id": 10,
"text": "In the following days, while he and other firemen are ransacking the book-filled house of an old woman and drenching it in kerosene, Montag steals a book. The woman refuses to leave her house and her books, choosing instead to light a match and burn herself alive. Jarred by the suicide, Montag returns home and hides the book under his pillow. Later, Montag asks Mildred if she has heard anything about Clarisse. She reveals that Clarisse's family moved away after Clarisse was hit by a speeding car and died four days ago. Dismayed by her failure to mention this earlier, Montag uneasily tries to fall asleep. Outside he suspects the presence of \"The Mechanical Hound\", an eight-legged robotic dog-like creature that resides in the firehouse and aids the firemen in hunting book hoarders.",
"title": "Plot summary"
},
{
"paragraph_id": 11,
"text": "Montag awakens ill the next morning. Mildred tries to care for her husband but finds herself more involved in the \"parlor wall\" entertainment in the living room – large televisions filling the walls. Montag suggests he should take a break from being a fireman, and Mildred panics over the thought of losing the house and her parlor wall \"family\". Captain Beatty, Montag's fire chief, visits Montag to see how he is doing. Sensing his concerns, Beatty recounts the history of how books had lost their value and how the firemen were adapted for their current role: over decades, people began to embrace new media (like film and television), sports, and an ever-quickening pace of life. Books were abridged or degraded to accommodate shorter attention spans. At the same time, advances in technology resulted in nearly all buildings being made with fireproof materials, and firemen preventing fires were no longer necessary. The government then instead turned the firemen into officers of society's peace of mind: instead of putting out fires, they were charge with starting them, specifically to burn books, which were condemned as sources of confusing and depressing thoughts that complicated people's lives. After an awkward exchange between Mildred and Montag over the book hidden under his pillow, Beatty becomes suspicious and casually adds a passing threat before leaving; he says that if a fireman had a book, he would be asked to burn it within the following twenty-four hours. If he refused, the other firemen would come and burn it for him. The encounter leaves Montag utterly shaken.",
"title": "Plot summary"
},
{
"paragraph_id": 12,
"text": "Montag later reveals to Mildred that, over the last year, he has accumulated books that are hidden in their ceiling. In a panic, Mildred grabs a book and rushes to throw it in the kitchen incinerator, but Montag subdues her and says they are going to read the books to see if they have value. If they do not, he promises the books will be burned and their lives will return to normal.",
"title": "Plot summary"
},
{
"paragraph_id": 13,
"text": "Mildred refuses to go along with Montag's plan, questioning why she or anyone else should care about books. Montag goes on a rant about Mildred's suicide attempt, Clarisse's disappearance and death, the woman who burned herself, and the imminent war that goes ignored by the masses. He suggests that perhaps the books of the past have messages that can save society from its own destruction. Even still, Mildred remains unconvinced.",
"title": "Plot summary"
},
{
"paragraph_id": 14,
"text": "Conceding that Mildred is a lost cause, Montag will need help to understand the books. He remembers an old man named Faber, an English professor before books were banned, whom he once met in a park. Montag visits Faber's home carrying a copy of the Bible, the book he stole at the woman's house. Once there, after multiple attempts to ask, Montag forces the scared and reluctant Faber into helping him by methodically ripping pages from the Bible. Faber concedes and gives Montag a homemade earpiece communicator so that he can offer constant guidance.",
"title": "Plot summary"
},
{
"paragraph_id": 15,
"text": "At home, Mildred's friends, Mrs. Bowles and Mrs. Phelps, arrive to watch the \"parlor walls\". Not interested in this entertainment, Montag turns off the walls and tries to engage the women in meaningful conversation, only for them to reveal just how indifferent, ignorant, and callous they truly are. Enraged, Montag shows them a book of poetry. This confuses the women and alarms Faber, who is listening remotely. Mildred tries to dismiss Montag's actions as a tradition firemen act out once a year: they find an old book and read it as a way to make fun of how silly the past is. Montag proceeds to recite a poem (specifically Dover Beach), causing Mrs. Phelps to cry. Soon, the two women leave.",
"title": "Plot summary"
},
{
"paragraph_id": 16,
"text": "Montag hides his books in the backyard before returning to the firehouse late at night. There, Montag hands Beatty a book to cover for the one he believes Beatty knows he stole the night before, which is tossed into the trash. Beatty reveals that, despite his disillusionment, he was once an enthusiastic reader. A fire alarm sounds and Beatty picks up the address from the dispatcher system. They drive in the fire truck to the unexpected destination: Montag's house.",
"title": "Plot summary"
},
{
"paragraph_id": 17,
"text": "Beatty orders Montag to destroy his house with a flamethrower, rather than the more powerful \"salamander\" that is usually used by the fire team, and tells him that his wife and her friends reported him. Montag watches as Mildred walks out of the house, too traumatized about losing her parlor wall 'family' to even acknowledge her husband's existence or the situation going on around her, and catches a taxi. Montag complies, destroying the home piece by piece, but Beatty discovers his earpiece and plans to hunt down Faber. Montag threatens Beatty with the flamethrower and, after Beatty taunts him, Montag burns Beatty alive. As Montag tries to escape the scene, the Mechanical Hound attacks him, managing to inject his leg with an anesthetic. He destroys the Hound with the flamethrower and limps away. While escaping, he concludes that Beatty had wanted to die a long time ago and had purposely goaded Montag as well as provided him with a weapon.",
"title": "Plot summary"
},
{
"paragraph_id": 18,
"text": "Montag runs towards Faber's house. En route, he crosses a road as a car attempts to run him over, but he manages to evade the vehicle, almost suffering the same fate as Clarisse and losing his knee. Faber urges him to make his way to the countryside and contact a group of exiled book-lovers who live there. Faber will be leaving on a bus heading to St. Louis, Missouri, where he and Montag can rendezvous later. Meanwhile, another Mechanical Hound is released to track down and kill Montag, with news helicopters following it to create a public spectacle. After wiping his scent from around the house in hopes of thwarting the Hound, Montag leaves. He escapes the manhunt by wading into a river and floating downstream, where he meets the book-lovers. They predicted Montag's arrival while watching the TV.",
"title": "Plot summary"
},
{
"paragraph_id": 19,
"text": "The drifters are all former intellectuals. They have each memorized books should the day arrive that society comes to an end, with the survivors learning to embrace the literature of the past. Wanting to contribute to the group, Montag finds that he partially memorized the Book of Ecclesiastes, discovering that the group has a special way of unlocking photographic memory. While discussing about their learnings, Montag and the group watch helplessly as bombers fly overhead and annihilate the city with nuclear weapons: the war has begun and ended in the same night. While Faber would have left on the early bus, everyone else (possibly including Mildred) is killed. Injured and dirtied, Montag and the group manage to survive the shockwave.",
"title": "Plot summary"
},
{
"paragraph_id": 20,
"text": "When the war is over, the exiles return to the city to rebuild society.",
"title": "Plot summary"
},
{
"paragraph_id": 21,
"text": "Character development and personality are key to any novel. The characters in Fahrenheit 451 are multi-dimensional in many aspects. Characters not only develop because of who they are, but by what they have been through, and also external surroundings. In the article \"Distortion of 'Self-Image': Effects of Mental Delirium in Fahrenheit 451 by Ray Bradbury\", states that \"the Self is the conscious image of one's cognition or mental identity, an element that evaluates external factors such as the environment or other's self.\" (Jerrin, Beeto, Bhuvaneswari).",
"title": "Characters"
},
{
"paragraph_id": 22,
"text": "The title page of the book explains the title as follows: Fahrenheit 451—The temperature at which book paper catches fire and burns.... On inquiring about the temperature at which paper would catch fire, Bradbury had been told that 451 °F (233 °C) was the autoignition temperature of paper. In various studies, scientists have placed the autoignition temperature at a range of temperatures between 424 and 475 °F (218 and 246 °C), depending on the type of paper.",
"title": "Title"
},
{
"paragraph_id": 23,
"text": "Fahrenheit 451 developed out of a series of ideas Bradbury had visited in previously written stories. For many years, he tended to single out \"The Pedestrian\" in interviews and lectures as sort of a proto-Fahrenheit 451. In the Preface of his 2006 anthology Match to Flame: The Fictional Paths to Fahrenheit 451 he states that this is an oversimplification. The full genealogy of Fahrenheit 451 given in Match to Flame is involved. The following covers the most salient aspects.",
"title": "Writing and development"
},
{
"paragraph_id": 24,
"text": "Between 1947 and 1948, Bradbury wrote the short story \"Bright Phoenix\" (not published until the May 1963 issue of The Magazine of Fantasy & Science Fiction) about a librarian who confronts a book-burning \"Chief Censor\" named Jonathan Barnes.",
"title": "Writing and development"
},
{
"paragraph_id": 25,
"text": "In late 1949, Bradbury was stopped and questioned by a police officer while walking late one night. When asked \"What are you doing?\", Bradbury wisecracked, \"Putting one foot in front of another.\" This incident inspired Bradbury to write the 1951 short story \"The Pedestrian\".",
"title": "Writing and development"
},
{
"paragraph_id": 26,
"text": "In The Pedestrian, Leonard Mead is harassed and detained by the city's remotely operated police cruiser (there's only one) for taking nighttime walks, something that has become extremely rare in this future-based setting: everybody else stays inside and watches television (\"viewing screens\"). Alone and without an alibi, Mead is taken to the \"Psychiatric Center for Research on Regressive Tendencies\" for his peculiar habit. Fahrenheit 451 would later echo this theme of an authoritarian society distracted by broadcast media.",
"title": "Writing and development"
},
{
"paragraph_id": 27,
"text": "Bradbury expanded the book-burning premise of \"Bright Phoenix\" and the totalitarian future of \"The Pedestrian\" into \"The Fireman\", a novella published in the February 1951 issue of Galaxy Science Fiction. \"The Fireman\" was written in the basement of UCLA's Powell Library on a typewriter that he rented for a fee of ten cents per half hour. The first draft was 25,000 words long and was completed in nine days.",
"title": "Writing and development"
},
{
"paragraph_id": 28,
"text": "Urged by a publisher at Ballantine Books to double the length of his story to make a novel, Bradbury returned to the same typing room and made the story 25,000 words longer, again taking just nine days. The title \"Fahrenheit 451\" came to him on January 22. The final manuscript was ready in mid-August, 1953. The resulting novel, which some considered as a fix-up (despite being an expanded rewrite of one single novella), was published by Ballantine in 1953.",
"title": "Writing and development"
},
{
"paragraph_id": 29,
"text": "Bradbury has supplemented the novel with various front and back matter, including a 1979 coda, a 1982 afterword, a 1993 foreword, and several introductions.",
"title": "Writing and development"
},
{
"paragraph_id": 30,
"text": "The first U.S. printing was a paperback version from October 1953 by The Ballantine Publishing Group. Shortly after the paperback, a hardback version was released that included a special edition of 200 signed and numbered copies bound in asbestos. These were technically collections because the novel was published with two short stories: The Playground and And the Rock Cried Out, which have been absent in later printings. A few months later, the novel was serialized in the March, April, and May 1954 issues of nascent Playboy magazine.",
"title": "Publication history"
},
{
"paragraph_id": 31,
"text": "Starting in January 1967, Fahrenheit 451 was subject to expurgation by its publisher, Ballantine Books with the release of the \"Bal-Hi Edition\" aimed at high school students. Among the changes made by the publisher were the censorship of the words \"hell\", \"damn\", and \"abortion\"; the modification of seventy-five passages; and the changing of two incidents.",
"title": "Publication history"
},
{
"paragraph_id": 32,
"text": "In the first incident a drunk man was changed to a \"sick man\", while the second involved cleaning fluff out of a human navel, which instead became \"cleaning ears\" in the other. For a while both the censored and uncensored versions were available concurrently but by 1973 Ballantine was publishing only the censored version. That continued until 1979, when it came to Bradbury's attention:",
"title": "Publication history"
},
{
"paragraph_id": 33,
"text": "In 1979, one of Bradbury's friends showed him an expurgated copy of the book. Bradbury demanded that Ballantine Books withdraw that version and replace it with the original, and in 1980 the original version once again became available. In this reinstated work, in the Author's Afterword, Bradbury relates to the reader that it is not uncommon for a publisher to expurgate an author's work, but he asserts that he himself will not tolerate the practice of manuscript \"mutilation\".",
"title": "Publication history"
},
{
"paragraph_id": 34,
"text": "The \"Bal-Hi\" editions are now referred to by the publisher as the \"Revised Bal-Hi\" editions.",
"title": "Publication history"
},
{
"paragraph_id": 35,
"text": "An audiobook version read by Bradbury himself was released in 1976 and received a Spoken Word Grammy nomination. Another audiobook was released in 2005 narrated by Christopher Hurt. The e-book version was released in December 2011.",
"title": "Publication history"
},
{
"paragraph_id": 36,
"text": "In 1954, Galaxy Science Fiction reviewer Groff Conklin placed the novel \"among the great works of the imagination written in English in the last decade or more.\" The Chicago Sunday Tribune's August Derleth described the book as \"a savage and shockingly prophetic view of one possible future way of life\", calling it \"compelling\" and praising Bradbury for his \"brilliant imagination\". Over half a century later, Sam Weller wrote, \"upon its publication, Fahrenheit 451 was hailed as a visionary work of social commentary.\" Today, Fahrenheit 451 is still viewed as an important cautionary tale about conformity and the evils of government censorship.",
"title": "Reception"
},
{
"paragraph_id": 37,
"text": "When the novel was first published, there were those who did not find merit in the tale. Anthony Boucher and J. Francis McComas were less enthusiastic, faulting the book for being \"simply padded, occasionally with startlingly ingenious gimmickry, ... often with coruscating cascades of verbal brilliance [but] too often merely with words.\" Reviewing the book for Astounding Science Fiction, P. Schuyler Miller characterized the title piece as \"one of Bradbury's bitter, almost hysterical diatribes,\" while praising its \"emotional drive and compelling, nagging detail.\" Similarly, The New York Times was unimpressed with the novel and further accused Bradbury of developing a \"virulent hatred for many aspects of present-day culture, namely, such monstrosities as radio, TV, most movies, amateur and professional sports, automobiles, and other similar aberrations which he feels debase the bright simplicity of the thinking man's existence.\"",
"title": "Reception"
},
{
"paragraph_id": 38,
"text": "Fahrenheit 451 was number seven on the list of \"Top Check Outs OF ALL TIME\" by the New York Public Library",
"title": "Reception"
},
{
"paragraph_id": 39,
"text": "In the years since its publication, Fahrenheit 451 has occasionally been banned, censored, or redacted in some schools at the behest of parents or teaching staff either unaware of or indifferent to the inherent irony in such censorship. Notable incidents include:",
"title": "Reception"
},
{
"paragraph_id": 40,
"text": "Discussions about Fahrenheit 451 often center on its story foremost as a warning against state-based censorship. Indeed, when Bradbury wrote the novel during the McCarthy era, he was concerned about censorship in the United States. During a radio interview in 1956, Bradbury said:",
"title": "Themes"
},
{
"paragraph_id": 41,
"text": "I wrote this book at a time when I was worried about the way things were going in this country four years ago. Too many people were afraid of their shadows; there was a threat of book burning. Many of the books were being taken off the shelves at that time. And of course, things have changed a lot in four years. Things are going back in a very healthy direction. But at the time I wanted to do some sort of story where I could comment on what would happen to a country if we let ourselves go too far in this direction, where then all thinking stops, and the dragon swallows his tail, and we sort of vanish into a limbo and we destroy ourselves by this sort of action.",
"title": "Themes"
},
{
"paragraph_id": 42,
"text": "As time went by, Bradbury tended to dismiss censorship as a chief motivating factor for writing the story. Instead he usually claimed that the real messages of Fahrenheit 451 were about the dangers of an illiterate society infatuated with mass media and the threat of minority and special interest groups to books. In the late 1950s, Bradbury recounted:",
"title": "Themes"
},
{
"paragraph_id": 43,
"text": "In writing the short novel Fahrenheit 451, I thought I was describing a world that might evolve in four or five decades. But only a few weeks ago, in Beverly Hills one night, a husband and wife passed me, walking their dog. I stood staring after them, absolutely stunned. The woman held in one hand a small cigarette-package-sized radio, its antenna quivering. From this sprang tiny copper wires which ended in a dainty cone plugged into her right ear. There she was, oblivious to man and dog, listening to far winds and whispers and soap-opera cries, sleep-walking, helped up and down curbs by a husband who might just as well not have been there. This was not fiction.",
"title": "Themes"
},
{
"paragraph_id": 44,
"text": "This story echoes Mildred's \"Seashell ear-thimbles\" (i.e., a brand of in-ear headphones) that act as an emotional barrier between her and Montag. In a 2007 interview, Bradbury maintained that people misinterpret his book and that Fahrenheit 451 is really a statement on how mass media like television marginalizes the reading of literature. Regarding minorities, he wrote in his 1979 Coda:",
"title": "Themes"
},
{
"paragraph_id": 45,
"text": "'There is more than one way to burn a book. And the world is full of people running about with lit matches. Every minority, be it Baptist/Unitarian, Irish/Italian/Octogenarian/Zen Buddhist, Zionist/Seventh-day Adventist, Women's Lib/Republican, Mattachine/Four Square Gospel feels it has the will, the right, the duty to douse the kerosene, light the fuse. [...] Fire-Captain Beatty, in my novel Fahrenheit 451, described how the books were burned first by minorities, each ripping a page or a paragraph from this book, then that, until the day came when the books were empty and the minds shut and the libraries closed forever. [...] Only six weeks ago, I discovered that, over the years, some cubby-hole editors at Ballantine Books, fearful of contaminating the young, had, bit by bit, censored some seventy-five separate sections from the novel. Students, reading the novel, which, after all, deals with censorship and book-burning in the future, wrote to tell me of this exquisite irony. Judy-Lynn del Rey, one of the new Ballantine editors, is having the entire book reset and republished this summer with all the damns and hells back in place.",
"title": "Themes"
},
{
"paragraph_id": 46,
"text": "Book-burning censorship, Bradbury would argue, was a side-effect of these two primary factors; this is consistent with Captain Beatty's speech to Montag about the history of the firemen. According to Bradbury, it is the people, not the state, who are the culprit in Fahrenheit 451. Fahrenheit's censorship is not the result of an authoritarian program to retain power, but the result of a fragmented society seeking to accommodate its challenges by deploying the power of entertainment and technology. As Captain Beatty explains (p. 55):",
"title": "Themes"
},
{
"paragraph_id": 47,
"text": "\"...The bigger your market, Montag, the less you handle controversy, remember that! All the minor minorities with their navels to be kept clean.\"[...] \"It didn't come from the Government down. There was no dictum, no declaration, no censorship, to start with, no! Technology, mass exploitation, and minority pressure carried the trick, thank God.\"",
"title": "Themes"
},
{
"paragraph_id": 48,
"text": "A variety of other themes in the novel besides censorship have been suggested. Two major themes are resistance to conformity and control of individuals via technology and mass media. Bradbury explores how the government is able to use mass media to influence society and suppress individualism through book burning. The characters Beatty and Faber point out that the American population is to blame. Due to their constant desire for a simplistic, positive image, books must be suppressed. Beatty blames the minority groups, who would take offense to published works that displayed them in an unfavorable light. Faber went further to state that the American population simply stopped reading on their own. He notes that the book burnings themselves became a form of entertainment for the general public.",
"title": "Themes"
},
{
"paragraph_id": 49,
"text": "In a 1994 interview, Bradbury stated that Fahrenheit 451 was more relevant during this time than in any other, stating that, \"it works even better because we have political correctness now. Political correctness is the real enemy these days. The black groups want to control our thinking and you can't say certain things. The homosexual groups don't want you to criticize them. It's thought control and freedom of speech control.\"",
"title": "Themes"
},
{
"paragraph_id": 50,
"text": "Fahrenheit 451 is set in an unspecified city and time, though it is written as if set in a distant future. The earliest editions make clear that it takes place no earlier than the year 2022 due to a reference to an atomic war taking place during that year.",
"title": "Predictions for the future"
},
{
"paragraph_id": 51,
"text": "Bradbury described himself as \"a preventer of futures, not a predictor of them.\" He did not believe that book burning was an inevitable part of the future; he wanted to warn against its development. In a later interview, when asked if he believes that teaching Fahrenheit 451 in schools will prevent his totalitarian vision of the future, Bradbury replied in the negative. Rather, he states that education must be at the kindergarten and first-grade level. If students are unable to read then, they will be unable to read Fahrenheit 451.",
"title": "Predictions for the future"
},
{
"paragraph_id": 52,
"text": "As to technology, Sam Weller notes that Bradbury \"predicted everything from flat-panel televisions to earbud headphones and twenty-four-hour banking machines.\"",
"title": "Predictions for the future"
},
{
"paragraph_id": 53,
"text": "Playhouse 90 broadcast \"A Sound of Different Drummers\" on CBS in 1957, written by Robert Alan Aurthur. The play combined plot ideas from Fahrenheit 451 and Nineteen Eighty-Four. Bradbury sued and eventually won on appeal.",
"title": "Adaptations"
},
{
"paragraph_id": 54,
"text": "A film adaptation written and directed by François Truffaut and starring Oskar Werner and Julie Christie was released in 1966.",
"title": "Adaptations"
},
{
"paragraph_id": 55,
"text": "A film adaptation directed by Ramin Bahrani and starring Michael B. Jordan, Michael Shannon, Sofia Boutella, and Lilly Singh was released in 2018 for HBO.",
"title": "Adaptations"
},
{
"paragraph_id": 56,
"text": "In the late 1970s Bradbury adapted his book into a play. At least part of it was performed at the Colony Theatre in Los Angeles in 1979, but it was not in print until 1986 and the official world premiere was only in November 1988 by the Fort Wayne, Indiana Civic Theatre. The stage adaptation diverges considerably from the book and seems influenced by Truffaut's movie. For example, fire chief Beatty's character is fleshed out and is the wordiest role in the play. As in the movie, Clarisse does not simply disappear but in the finale meets up with Montag as a book character (she as Robert Louis Stevenson, he as Edgar Allan Poe).",
"title": "Adaptations"
},
{
"paragraph_id": 57,
"text": "The UK premiere of Bradbury's stage adaptation was not until 2003 in Nottingham, while it took until 2006 before the Godlight Theatre Company produced and performed its New York City premiere at 59E59 Theaters. After the completion of the New York run, the production then transferred to the Edinburgh Festival where it was a 2006 Edinburgh Festival Pick of the Fringe.",
"title": "Adaptations"
},
{
"paragraph_id": 58,
"text": "The Off-Broadway theatre The American Place Theatre presented a one man show adaptation of Fahrenheit 451 as a part of their 2008–2009 Literature to Life season.",
"title": "Adaptations"
},
{
"paragraph_id": 59,
"text": "Fahrenheit 451 inspired the Birmingham Repertory Theatre production Time Has Fallen Asleep in the Afternoon Sunshine, which was performed at the Birmingham Central Library in April 2012.",
"title": "Adaptations"
},
{
"paragraph_id": 60,
"text": "In 1982, Gregory Evans' radio dramatization of the novel was broadcast on BBC Radio 4 starring Michael Pennington as Montag. It was broadcast eight more times on BBC Radio 4 Extra, twice each in 2010, 2012, 2013, and 2015.",
"title": "Adaptations"
},
{
"paragraph_id": 61,
"text": "BBC Radio's second dramatization, by David Calcutt, was broadcast on BBC Radio 4 in 2003, starring Stephen Tomlin in the same role.",
"title": "Adaptations"
},
{
"paragraph_id": 62,
"text": "In 1984 the new wave band Scortilla released the song Fahrenheit 451 inspired by the book by R. Bradbury and the film by F. Truffaut.",
"title": "Adaptations"
},
{
"paragraph_id": 63,
"text": "In 1984, the novel was adapted into a computer text adventure game of the same name by the software company Trillium.",
"title": "Adaptations"
},
{
"paragraph_id": 64,
"text": "In June 2009, a graphic novel edition of the book was published. Entitled Ray Bradbury's Fahrenheit 451: The Authorized Adaptation, the paperback graphic adaptation was illustrated by Tim Hamilton. The introduction in the novel is written by Bradbury himself.",
"title": "Adaptations"
},
{
"paragraph_id": 65,
"text": "Michael Moore's 2004 documentary Fahrenheit 9/11 refers to Bradbury's novel and the September 11 attacks, emphasized by the film's tagline \"The temperature where freedom burns\". The film takes a critical look at the presidency of George W. Bush, the War on Terror, and its coverage in the news media, and became the highest grossing documentary of all time. Bradbury was upset by what he considered the appropriation of his title, and wanted the film renamed. Moore filmed a subsequent documentary about the election of Donald Trump called Fahrenheit 11/9 in 2018.",
"title": "Cultural references"
},
{
"paragraph_id": 66,
"text": "In 2015, the Internet Engineering Steering Group approved the publication of An HTTP Status Code to Report Legal Obstacles, now RFC 7725, which specifies that websites forced to block resources for legal reasons should return a status code of 451 when users request those resources.",
"title": "Cultural references"
},
{
"paragraph_id": 67,
"text": "Guy Montag (as Gui Montag), is used in the 1998 real-time strategy game Starcraft as a terran firebat hero.",
"title": "Cultural references"
}
]
| Fahrenheit 451 is a 1953 dystopian novel by American writer Ray Bradbury. It presents an American society where books have been outlawed and "firemen" burn any that are found. The novel follows the viewpoint of Guy Montag, a fireman who soon becomes disillusioned with his role of censoring literature and destroying knowledge, eventually quitting his job and committing himself to the preservation of literary and cultural writings. Fahrenheit 451 was written by Bradbury during the Second Red Scare and the McCarthy era, inspired by the book burnings in Nazi Germany and by ideological repression in the Soviet Union. Bradbury's claimed motivation for writing the novel has changed multiple times. In a 1956 radio interview, Bradbury said that he wrote the book because of his concerns about the threat of burning books in the United States. In later years, he described the book as a commentary on how mass media reduces interest in reading literature. In a 1994 interview, Bradbury cited political correctness as an allegory for the censorship in the book, calling it "the real enemy these days" and labelling it as "thought control and freedom of speech control." The writing and theme within Fahrenheit 451 were explored by Bradbury in some of his previous short stories. Between 1947 and 1948, Bradbury wrote "Bright Phoenix", a short story about a librarian who confronts a "Chief Censor", who burns books. An encounter Bradbury had in 1949 with the police inspired him to write the short story "The Pedestrian" in 1951. In "The Pedestrian", a man going for a nighttime walk in his neighborhood is harassed and detained by the police. In the society of "The Pedestrian", citizens are expected to watch television as a leisure activity, a detail that would be included in Fahrenheit 451. Elements of both "Bright Phoenix" and "The Pedestrian" would be combined into The Fireman, a novella published in 1951. Bradbury was urged by Stanley Kauffmann, a publisher at Ballantine Books, to make The Fireman into a full novel. Bradbury finished the manuscript for Fahrenheit 451 in 1953, and the novel was published later that year. Upon its release, Fahrenheit 451 was a critical success, albeit with notable outliers. The novel's subject matter led to its censorship in apartheid South Africa and various schools in the United States. In 1954, Fahrenheit 451 won the American Academy of Arts and Letters Award in Literature and the Commonwealth Club of California Gold Medal. It later won the Prometheus "Hall of Fame" Award in 1984 and a "Retro" Hugo Award in 2004. Bradbury was honored with a Spoken Word Grammy nomination for his 1976 audiobook version. The novel has also been adapted into films, stage plays, and video games. Film adaptations of the novel include a 1966 film directed by François Truffaut starring Oskar Werner as Guy Montag, an adaptation that was met with mixed critical reception, and a 2018 television film directed by Ramin Bahrani starring Michael B. Jordan as Montag that also received a mixed critical reception. Bradbury himself published a stage play version in 1979 and helped develop a 1984 interactive fiction video game of the same name, as well as a collection of his short stories titled A Pleasure to Burn. Two BBC Radio dramatizations were also produced. | 2001-08-05T23:41:26Z | 2023-12-27T23:50:52Z | [
"Template:Reflist",
"Template:Cite book",
"Template:Cite web",
"Template:Cite magazine",
"Template:IMDb title",
"Template:Authority control",
"Template:Convert",
"Template:Main",
"Template:Cite journal",
"Template:Wikiquote-inline",
"Template:Short description",
"Template:Cite AV media",
"Template:Further",
"Template:Webarchive",
"Template:Retro Hugo Award Best Novel",
"Template:Use mdy dates",
"Template:Infobox book",
"Template:Cn",
"Template:Cite news",
"Template:Commons category-inline",
"Template:ISFDB title",
"Template:Ray Bradbury",
"Template:About",
"Template:Use American English"
]
| https://en.wikipedia.org/wiki/Fahrenheit_451 |
10,957 | Francis Xavier | Francis Xavier, SJ (born Francisco de Jasso y Azpilicueta; Latin: Franciscus Xaverius; Basque: Frantzisko Xabierkoa; French: François Xavier; Spanish: Francisco Javier; Portuguese: Francisco Xavier; 7 April 1506 – 3 December 1552), venerated as Saint Francis Xavier, was a Spanish Catholic missionary and saint who co-founded the Society of Jesus and, as a representative of the Portuguese empire, led the first Christian mission to Japan.
Born in the town of Xavier, Spain, he was a companion of Ignatius of Loyola and one of the first seven Jesuits who took vows of poverty and chastity at Montmartre, Paris in 1534. He led an extensive mission into Asia, mainly the Portuguese Empire in the East, and was influential in evangelisation work, most notably in early modern India. He was extensively involved in the missionary activity in Portuguese India. In 1546, Francis Xavier proposed the establishment of the Goan Inquisition in a letter addressed to the Portuguese King, John III. While some sources claim that he actually asked for a special minister whose sole office would be to further Christianity in Goa, others disagree with this assertion. As a representative of the king of Portugal, he was also the first major Christian missionary to venture into Borneo, the Maluku Islands, Japan, and other areas. In those areas, struggling to learn the local languages and in the face of opposition, he had less success than he had enjoyed in India. Xavier was about to extend his mission to Ming China, when he died on Shangchuan Island.
He was beatified by Pope Paul V on 25 October 1619 and canonized by Pope Gregory XV on 12 March 1622. In 1624, he was made co-patron of Navarre. Known as the "Apostle of the Indies", "Apostle of the Far East", "Apostle of China" and "Apostle of Japan", he is considered to be one of the greatest missionaries since Paul the Apostle. In 1927, Pope Pius XI published the decree "Apostolicorum in Missionibus" naming Francis Xavier, along with Thérèse of Lisieux, co-patron of all foreign missions. He is now co-patron saint of Navarre, with Fermin. The Day of Navarre in Navarre, Spain, marks the anniversary of Francis Xavier's death, on 3 December.
Francis Xavier was born in the Castle of Xavier, in the Kingdom of Navarre, on 7 April 1506 into an influential noble family. He was the youngest son of Don Juan de Jasso y Atondo, Lord of Idocín, president of the Royal Council of the Kingdom of Navarre, and seneschal of the Castle of Xavier (a doctor in law by the University of Bologna, belonging to a prosperous noble family of Saint-Jean-Pied-de-Port, later privy counsellor and finance minister to King John III of Navarre) and Doña María de Azpilcueta y Aznárez, sole heiress to the Castle of Xavier (related to the theologian and philosopher Martín de Azpilcueta). His brother Miguel de Jasso (later known as Miguel de Javier) became Lord of Xavier and Idocín at the death of his parents (a direct ancestor of the Counts of Javier). Basque and Romance were his two mother tongues.
In 1512, Ferdinand, King of Aragon and regent of Castile, invaded Navarre, initiating a war that lasted over 18 years. Three years later, Francis's father died when Francis was only nine years old. In 1516, Francis's brothers participated in a failed Navarrese-French attempt to expel the Spanish invaders from the kingdom. The Spanish Governor, Cardinal Cisneros, confiscated the family lands, demolished the outer wall, the gates, and two towers of the family castle, and filled in the moat. In addition, the height of the keep was reduced by half. Only the family residence inside the castle was left. In 1522, one of Francis's brothers participated with 200 Navarrese nobles in dogged but failed resistance against the Castilian Count of Miranda in Amaiur, Baztan, the last Navarrese territorial position south of the Pyrenees.
In 1525, Francis went to study in Paris at the Collège Sainte-Barbe, University of Paris, where he spent the next eleven years. In the early days he acquired some reputation as an athlete and a high-jumper.
In 1529, Francis shared lodgings with his friend Pierre Favre. A new student, Ignatius of Loyola, came to room with them. At 38, Ignatius was much older than Pierre and Francis, who were both 23 at the time. Ignatius convinced Pierre to become a priest, but was unable to convince Francis, who had aspirations of worldly advancement. At first, Francis regarded the new lodger as a joke and was sarcastic about his efforts to convert students. When Pierre left their lodgings to visit his family and Ignatius was alone with Francis, he was able to slowly break down Francis's resistance. According to most biographies, Ignatius is said to have posed the question: "What will it profit a man to gain the whole world, and lose his own soul?" However, according to James Broderick, such a method is not characteristic of Ignatius and there is no evidence that he employed it at all.
In 1530, Francis received the degree of Master of Arts, and afterwards taught Aristotelian philosophy at Beauvais College, University of Paris.
On 15 August 1534, seven students met in a crypt beneath the Church of Saint Denis (now Saint Pierre de Montmartre), on the hill of Montmartre, overlooking Paris. They were Francis, Ignatius of Loyola, Alfonso Salmeron, Diego Laínez, Nicolás Bobadilla from Spain, Peter Faber from Savoy, and Simão Rodrigues from Portugal. They made private vows of poverty, chastity, and obedience to the Pope, and also vowed to go to the Holy Land to convert infidels. Francis began his study of theology in 1534 and was ordained on 24 June 1537.
In 1539, after long discussions, Ignatius drew up a formula for a new religious order, the Society of Jesus (the Jesuits). Ignatius's plan for the order was approved by Pope Paul III in 1540.
In 1540, King John of Portugal had Pedro Mascarenhas, Portuguese ambassador to the Holy See, request Jesuit missionaries to spread the faith in his new possessions in India, where the king believed that Christian values were eroding among the Portuguese. After successive appeals to the Pope asking for missionaries for the East Indies under the Padroado agreement, John III was encouraged by Diogo de Gouveia, rector of the Collège Sainte-Barbe, to recruit the newly graduated students who had established the Society of Jesus.
Ignatius promptly appointed Nicholas Bobadilla and Simão Rodrigues. At the last moment, however, Bobadilla became seriously ill. With some hesitance and uneasiness, Ignatius asked Francis to go in Bobadilla's place. Thus, Francis Xavier began his life as the first Jesuit missionary almost accidentally.
Leaving Rome on 15 March 1540, in the Ambassador's train, Francis took with him a breviary, a catechism, and De Institutione bene vivendi by Croatian humanist Marko Marulić, a Latin book that had become popular in the Counter-Reformation. According to a 1549 letter of F. Balthasar Gago from Goa, it was the only book that Francis read or studied. Francis reached Lisbon in June 1540 and, four days after his arrival, he and Rodrigues were summoned to a private audience with the King and the Queen.
Francis Xavier devoted much of his life to missions in Asia, mainly in four centres: Malacca, Amboina and Ternate (in the Maluku Islands of Indonesia), Japan, and off-shore China. His growing information about new places indicated to him that he had to go to what he understood were centres of influence for the whole region. China loomed large from his days in India. Japan was particularly attractive because of its culture. For him, these areas were interconnected; they could not be evangelised separately.
Francis Xavier left Lisbon on 7 April 1541, his thirty-fifth birthday, along with two other Jesuits and the new viceroy Martim Afonso de Sousa, on board the Santiago. As he departed, Francis was given a brief from the pope appointing him apostolic nuncio to the East. From August until March 1542 he remained in Portuguese Mozambique, and arrived in Goa, then capital of Portuguese India, on 6 May 1542, thirteen months after leaving Lisbon.
The Portuguese, following quickly on the great voyages of discovery, had established themselves at Goa thirty years earlier. Francis's primary mission, as ordered by King John III, was to restore Christianity among the Portuguese settlers. According to Teotonio R. DeSouza, recent critical accounts indicate that apart from the posted civil servants, "the great majority of those who were dispatched as 'discoverers' were the riff-raff of Portuguese society, picked up from Portuguese jails." Nor did the soldiers, sailors, or merchants come to do missionary work, and Imperial policy permitted the outflow of disaffected nobility. Many of the arrivals formed liaisons with local women and adopted Indian culture. Missionaries often wrote against the "scandalous and undisciplined" behaviour of their fellow Christians.
The Christian population had churches, clergy, and a bishop, but there were few preachers and no priests beyond the walls of Goa. Xavier decided that he must begin by instructing the Portuguese themselves, and gave much of his time to the teaching of children. The first five months he spent in preaching and ministering to the sick in the hospitals. After that, he walked through the streets ringing a bell to summon the children and servants to catechism. He was invited to head Saint Paul's College, a pioneer seminary for the education of secular priests, which became the first Jesuit headquarters in Asia.
Conversion efforts
Xavier soon learned that along the Pearl Fishery Coast, which extends from Cape Comorin on the southern tip of India to the island of Mannar, off Ceylon (Sri Lanka), there was a Jāti of people called Paravas. Many of them had been baptised ten years before, merely to please the Portuguese who had helped them against the Moors, but remained uninstructed in the faith. Accompanied by several native clerics from the seminary at Goa, he set sail for Cape Comorin in October 1542. He taught those who had already been baptised and preached to those who weren't. His efforts with the high-caste Brahmins remained unavailing. The Brahmin and Muslim authorities in Travancore opposed Xavier with violence; time and again his hut was burned down over his head, and once he saved his life only by hiding among the branches of a large tree.
He devoted almost three years to the work of preaching to the people of southern India and Ceylon, converting many. He built nearly 40 churches along the coast, including St. Stephen's Church, Kombuthurai, mentioned in his letters dated 1544.
During this time, he was able to visit the tomb of Thomas the Apostle in Mylapore (now part of Madras/Chennai then in Portuguese India). He set his sights eastward in 1545 and planned a missionary journey to Makassar on the island of Celebes (today's Indonesia).
As the first Jesuit in India, Francis had difficulty achieving much success in his missionary trips. His successors, such as de Nobili, Matteo Ricci, and Beschi, attempted to convert the noblemen first as a means to influence more people, while Francis had initially interacted most with the lower classes; (later though, in Japan, Francis changed tack by paying tribute to the Emperor and seeking an audience with him).
In the spring of 1545 Xavier started for Portuguese Malacca. He laboured there for the last months of that year. About January 1546, Xavier left Malacca for the Maluku Islands, where the Portuguese had some settlements. For a year and a half, he preached the Gospel there. He went first to Ambon Island, where he stayed until mid-June. He then visited the other Maluku Islands, including Ternate, Baranura, and Morotai. Shortly after Easter 1547, he returned to Ambon Island; a few months later he returned to Malacca. While there, Malacca was attacked by the Acehnese from Sumatra, and through preaching Xavier inspired the Portuguese to seek battle, achieving a victory at the Battle of Perlis River, despite being heavily outnumbered.
In Malacca in December 1547, Francis Xavier met a Japanese man named Anjirō. Anjirō had heard of Francis in 1545 and had travelled from Kagoshima to Malacca to meet him. Having been charged with murder, Anjirō had fled Japan. He told Francis extensively about his former life, and the customs and culture of his homeland. Anjirō became the first Japanese Christian and adopted the name 'Paulo de Santa Fé'. He later helped Xavier as a mediator and interpreter for the mission to Japan that now seemed much more possible.
In January 1548 Francis returned to Goa to attend to his responsibilities as superior of the mission there. The next 15 months were occupied with various journeys and administrative measures. He left Goa on 15 April 1549, stopped at Malacca, and visited Canton. He was accompanied by Anjiro, two other Japanese men, Father Cosme de Torrès and Brother Juan Fernández. He had taken with him presents for the "King of Japan" since he was intending to introduce himself as the Apostolic Nuncio.
Europeans had already come to Japan; the Portuguese had landed in 1543 on the island of Tanegashima, where they introduced matchlock firearms to Japan.
From Amboina, he wrote to his companions in Europe: "I asked a Portuguese merchant, ... who had been for many days in Anjirō's country of Japan, to give me ... some information on that land and its people from what he had seen and heard. ...All the Portuguese merchants coming from Japan tell me that if I go there I shall do great service for God our Lord, more than with the pagans of India, for they are a very reasonable people." (To His Companions Residing in Rome, From Cochin, 20 January 1548, no. 18, p. 178).
Francis Xavier reached Japan on 27 July 1549, with Anjiro and three other Jesuits, but he was not permitted to enter any port his ship arrived at until 15 August, when he went ashore at Kagoshima, the principal port of Satsuma Province on the island of Kyūshū. As a representative of the Portuguese king, he was received in a friendly manner. Shimazu Takahisa (1514–1571), daimyō of Satsuma, gave a friendly reception to Francis on 29 September 1549, but in the following year he forbade the conversion of his subjects to Christianity under penalty of death; Christians in Kagoshima could not be given any catechism in the following years. The Portuguese missionary Pedro de Alcáçova would later write in 1554:
In Cangoxima, the first place Father Master Francisco stopped at, there were a good number of Christians, although there was no one there to teach them; the shortage of labourers prevented the whole kingdom from becoming Christian.
Francis was the first Jesuit to go to Japan as a missionary. He brought with him paintings of the Madonna and the Madonna and Child. These paintings were used to help teach the Japanese about Christianity. There was a huge language barrier as Japanese was unlike other languages the missionaries had previously encountered. For a long time, Francis struggled to learn the language. He was hosted by Anjirō's family until October 1550. From October to December 1550, he resided in Yamaguchi. Shortly before Christmas, he left for Kyoto but failed to meet with the Emperor. He returned to Yamaguchi in March 1551, where the daimyo of the province gave him permission to preach.
Having learned that evangelical poverty did not have the appeal in Japan that it had in Europe and in India, he decided to change his approach. Hearing after a time that a Portuguese ship had arrived at a port in the province of Bungo in Kyushu and that the prince there would like to see him, Xavier now set out southward. The Jesuit, in a fine cassock, surplice, and stole, was attended by thirty gentlemen and as many servants, all in their best clothes. Five of them bore on cushions valuable articles, including a portrait of Our Lady and a pair of velvet slippers, these not gifts for the prince, but solemn offerings to Xavier, to impress the onlookers with his eminence. Handsomely dressed, with his companions acting as attendants, he presented himself before Oshindono, the ruler of Nagate, and as a representative of the great kingdom of Portugal, offered him letters and presents: a musical instrument, a watch, and other attractive objects which had been given him by the authorities in India for the emperor.
For forty-five years the Jesuits were the only missionaries in Asia, but the Franciscans began proselytizing in Asia, as well. Christian missionaries were later forced into exile, along with their assistants. However, some were able to stay behind. Christianity was then kept underground so as to not be persecuted.
The Japanese people were not easily converted; many of the people were already Buddhist or Shinto. Francis tried to combat the disposition of some of the Japanese that a God who had created everything, including evil, could not be good. Despite Francis's different religion, he felt that they were good people, much like Europeans, and could be converted.
Xavier was welcomed by the Shingon monks since he used the word Dainichi for the Christian God, attempting to adapt the concept to local traditions. As Xavier learned more about the religious nuances of the word, he changed to Deusu from the Latin and Portuguese Deus. The monks later realised that Xavier was preaching a rival religion and grew more resistant towards his attempts at conversion.
With the passage of time, his sojourn in Japan could be considered somewhat fruitful as attested by congregations established in Hirado, Yamaguchi, and Bungo. Xavier worked for more than two years in Japan and saw his successor-Jesuits established. He then decided to return to India. Historians debate the exact path by which he returned, but from evidence attributed to the captain of his ship, he may have travelled through Tanegashima and Minato, and avoided Kagoshima because of the hostility of the daimyo.
During his trip from Japan back to India, a tempest forced him to stop on an island near Guangzhou, Guangdong, China, where he met Diogo Pereira, a rich merchant and an old friend from Cochin. Pereira showed him a letter from Portuguese prisoners in Guangzhou, asking for a Portuguese ambassador to speak to the Chinese Emperor on their behalf. Later during the voyage, he stopped at Malacca on 27 December 1551 and was back in Goa by January 1552.
On 17 April he set sail with Diogo Pereira on the Santa Cruz for China. He planned to introduce himself as Apostolic Nuncio and Pereira as the ambassador of the King of Portugal. But then he realized that he had forgotten his testimonial letters as an Apostolic Nuncio. Back in Malacca, he was confronted by the captain Álvaro de Ataíde da Gama who now had total control over the harbour. The captain refused to recognize his title of Nuncio, asked Pereira to resign from his title of ambassador, named a new crew for the ship, and demanded the gifts for the Chinese Emperor be left in Malacca.
In late August 1552, the Santa Cruz reached the Chinese island of Shangchuan, 14 km away from the southern coast of mainland China, near Taishan, Guangdong, 200 km south-west of what later became Hong Kong. At this time, he was accompanied only by a Jesuit student, Álvaro Ferreira, a Chinese man called António, and a Malabar servant called Christopher. Around mid-November, he sent a letter saying that a man had agreed to take him to the mainland in exchange for a large sum of money. Having sent back Álvaro Ferreira, he remained alone with António. He died from a fever at Shangchuan, Taishan, China, on 3 December 1552, while he was waiting for a boat that would take him to mainland China.
Xavier was first buried on a beach at Shangchuan Island, Taishan, Guangdong. His body was taken from the island in February 1553 and temporarily buried in St. Paul's Church in Portuguese Malacca on 22 March 1553. An open grave in the church now marks the place of Xavier's burial. Pereira came back from Goa, removed the corpse shortly after 15 April 1553, and moved it to his house. On 11 December 1553, Xavier's body was shipped to Goa.
The mostly incorruptible body is now in the Basilica of Bom Jesus in Goa, where it was placed in a glass container encased in a silver casket on 2 December 1637. This casket, constructed by Goan silversmiths between 1636 and 1637, was an exemplary blend of Italian and Indian aesthetic sensibilities. There are 32 silver plates on all four sides of the casket, depicting different episodes from the life of Xavier.
The right forearm, which Xavier used to bless and baptise his converts, was detached by Superior General Claudio Acquaviva in 1614. It has been displayed since in a silver reliquary at the main Jesuit church in Rome, Il Gesù.
Another of Xavier's arm bones was brought to Macau where it was kept in a silver reliquary. The relic was destined for Japan but religious persecution there persuaded the church to keep it in Macau's Cathedral of St. Paul. It was subsequently moved to St. Joseph's and in 1978 to the Chapel of St. Francis Xavier on Coloane Island. More recently the relic was moved to St. Joseph's Church.
A relic from the right hand of St Francis Xavier is on display at St Mary's Cathedral, Sydney.
In 2006, on the 500th anniversary of his birth, the Xavier Tomb Monument and Chapel on Shangchuan Island, in ruins after years of neglect under communist rule in China, was restored with support from the alumni of Wah Yan College, a Jesuit high school in Hong Kong.
From December 2017 to February 2018, Catholic Christian Outreach (CCO) in cooperation with the Jesuits, and the Archdiocese of Ottawa (Canada) brought Xavier's right forearm to tour throughout Canada. The faithful, especially university students participating with CCO at Rise Up 2017 in Ottawa, venerated the relics. The tour continued to every city where CCO and/or the Jesuits are present in Canada: Quebec City, St. John's, Halifax, St. Francis Xavier University in Antigonish (neither CCO nor the Jesuits are present here), Kingston, Toronto, Winnipeg, Saskatoon, Regina, Calgary, Vancouver, Victoria, and Montreal before returning to Ottawa. The relic was then returned to Rome with a Mass of Thanksgiving celebrated by Archbishop Terrence Prendergast at the Church of the Gesu.
Francis Xavier was beatified by Paul V on 25 October 1619, and was canonized by Gregory XV on 12 March 1622, at the same time as Ignatius Loyola. Pius XI proclaimed him the "Patron of Catholic Missions". His feast day is 3 December.
Saint Francis Xavier's relics are kept in a silver casket, elevated inside the Bom Jesus Basilica and are exposed (being brought to ground level) generally every ten years, but this is discretionary. The sacred relics went on display starting on 22 November 2014 at the XVII Solemn Exposition. The display closed on 4 January 2015. The previous exposition, the sixteenth, was held from 21 November 2004 to 2 January 2005.
Relics of Saint Francis Xavier are also found in the Espirito Santo (Holy Spirit) Church, Margão, in Sanv Fransiku Xavierachi Igorz (Church of St. Francis Xavier), Batpal, Canacona, Goa, and at St. Francis Xavier Chapel, Portais, Panjim.
Other pilgrimage centres include Xavier's birthplace in Navarra; the Church of the Gesù, Rome; Malacca (where he was buried for two years, before being brought to Goa); and Sancian (place of death).
Xavier is a major venerated saint in both Sonora and the neighbouring U.S. state of Arizona. In Magdalena de Kino in Sonora, Mexico, in the Church of Santa María Magdalena, there is a reclining statue of San Francisco Xavier brought by pioneer Jesuit missionary Padre Eusebio Kino in the early 18th century. The statue is said to be miraculous and is the object of pilgrimage for many in the region. Also the Mission San Xavier del Bac is a pilgrimage site. The mission is an active parish church ministering to the people of the San Xavier District, Tohono O'odham Nation, and nearby Tucson, Arizona.
Francis Xavier is honored in the Church of England and in the Episcopal Church on 3 December.
The Novena of Grace is a popular devotion to Francis Xavier, typically prayed either on the nine days before 3 December, or on 4 March through 12 March (the anniversary of Pope Gregory XV's canonisation of Xavier in 1622). It began with the Italian Jesuit missionary Marcello Mastrilli. Before he could travel to the Far East, Mastrilli was gravely injured in a freak accident after a festive celebration dedicated to the Immaculate Conception in Naples. Delirious and on the verge of death, Mastrilli saw Xavier, who he later said asked him to choose between travelling and death by holding the respective symbols, to which Mastrilli answered, "I choose that which God wills." Upon regaining his health, Mastrilli made his way via Goa and the Philippines to Satsuma, Japan. The Tokugawa shogunate beheaded the missionary in October 1637, after he had undergone three days of torture involving the volcanic sulphurous fumes from Mt. Unzen, known as the Hell mouth or "pit", which had supposedly caused an earlier missionary to renounce his faith.
Francis Xavier became widely noteworthy for his missionary work, both as an organiser and as a pioneer; he reputedly converted more people than anyone else had done since Paul the Apostle. In 2006 Pope Benedict XVI said of both Ignatius of Loyola and Francis Xavier: "not only their history which was interwoven for many years from Paris and Rome, but a unique desire – a unique passion, it could be said – moved and sustained them through different human events: the passion to give to God-Trinity a glory always greater and to work for the proclamation of the Gospel of Christ to the peoples who had been ignored." His personal efforts most affected religious practice in India and in the East Indies (Indonesia, Malaysia, Timor). As of 2021 India still has numerous Jesuit missions and many more schools. Xavier also worked to propagate Christianity in China and Japan. However, following the persecutions (1587 onwards) instituted by Toyotomi Hideyoshi and the subsequent closing of Japan to foreigners (1633 onwards), the Christians of Japan had to go underground to preserve an independent Christian culture. Likewise, while Xavier inspired many missionaries to China, Chinese Christians also were forced underground there and developed their own Christian culture.
A small chapel designed by Achille-Antoine Hermitte was completed in 1869 over Xavier's death-place on Shangchuan Island, Canton. It was damaged and restored several times; the most recent restoration in 2006 marked the 500th anniversary of the saint's birth.
Francis Xavier is the patron saint of his native Navarre, which celebrates his feast day on 3 December as a government holiday. In addition to Roman Catholic Masses remembering Xavier on that day (now known as the Day of Navarra), celebrations in the surrounding weeks honour the region's cultural heritage. Furthermore, in the 1940s, devoted Catholics instituted the Javierada, an annual day-long pilgrimage (often on foot) from the capital at Pamplona to Xavier, where the Jesuits built a basilica and museum and restored Francis Xavier's family's castle.
As the foremost saint from Navarre and one of the main Jesuit saints, Francis Xavier is much venerated in Spain and the Hispanic countries where Francisco Javier or Javier are common male given names. The alternative spelling Xavier is also popular in the Basque Country, Portugal, Catalonia, Brazil, France, Belgium, and southern Italy. In India, the spelling Xavier is almost always used, and the name is quite common among Christians, especially in Goa and in the southern states of Tamil Nadu, Kerala, and Karnataka. The names Francisco Xavier, António Xavier, João Xavier, Caetano Xavier, Domingos Xavier and so forth, were very common till quite recently in Goa. Fransiskus Xaverius is commonly used as a name for Indonesian Catholics, usually abbreviated as FX. In Austria and Bavaria the name is spelt as Xaver (pronounced /ˈksaːfɐ/) and often used in addition to Francis as Franz-Xaver (/frant͡sˈksaːfɐ/). In Polish the name becomes Ksawery. Many Catalan men are named for him, often using the two-name combination Francesc Xavier. In English-speaking countries, "Xavier" until recently was likely to follow "Francis"; in the 2000s, however, "Xavier" by itself became more popular than "Francis", and after 2001 featured as one of the hundred most common male baby names in the US. Furthermore, the Sevier family name, possibly most famous in the United States for John Sevier (1745–1815), originated from the name "Xavier".
Many churches all over the world, often founded by Jesuits, have been named in honour of Xavier. The many in the United States include the historic St. Francis Xavier Shrine at Warwick, Maryland (founded 1720), and the Basilica of St. Francis Xavier in Dyersville, Iowa. Note also the American educational teaching order, the Xaverian Brothers, and the Mission San Xavier del Bac in Tucson, Arizona (founded in 1692, and known for its Spanish Colonial architecture).
Shortly before leaving for the East, Xavier issued a famous instruction to Father Gaspar Barazeuz, who was leaving for Ormuz (a kingdom on an island in the Persian Gulf, formerly attached to the Empire of Persia, now part of Iran), that he should mix with sinners:
And if you wish to bring forth much fruit, both for yourselves and for your neighbours, and to live consoled, converse with sinners, making them unburden themselves to you. These are the living books by which you are to study, both for your preaching and for your own consolation. I do not say that you should not on occasion read written books... to support what you say against vices with authorities from the Holy Scriptures and examples from the lives of the saints.
Modern scholars assess the number of people converted to Christianity by Francis Xavier at around 30,000. While some of Xavier's methods have subsequently come under criticism, he has also earned praise. He insisted that missionaries adapt to many of the customs, and most certainly to the language, of the culture they wish to evangelise. And unlike later missionaries, Xavier supported an educated native clergy. Though for a time it seemed that persecution had subsequently destroyed his work in Japan, Protestant missionaries three centuries later discovered that approximately 100,000 Christians still practised the faith in the Nagasaki area.
Francis Xavier's work initiated permanent change in eastern Indonesia, and he became known as the "Apostle of the Indies" – in 1546–1547 he worked in the Maluku Islands among the people of Ambon, Ternate, and Morotai (or Moro), and laid the foundations for a permanent mission. After he left the Maluku Islands, others carried on his work, and by the 1560s there were 10,000 Roman Catholics in the area, mostly on Ambon. By the 1590s, there were 50,000 to 60,000.
In 1546, Francis Xavier proposed the establishment of the Goa Inquisition in a letter addressed to the Portuguese King, John III. Xavier addressed the King as the 'Vicar of Christ', owing to his royal patronage over Christianity in the East Indies. In a letter dated 20 January 1548, he requested the king to be tough on the Portuguese governor in India so that he would be active in propagating the faith. Xavier also wrote to the Portuguese king asking for protection for new converts who were being harassed by Portuguese commandants. Francis Xavier died in 1552, without living to see the commencement of the Goa Inquisition. | [
{
"paragraph_id": 0,
"text": "Francis Xavier, SJ (born Francisco de Jasso y Azpilicueta; Latin: Franciscus Xaverius; Basque: Frantzisko Xabierkoa; French: François Xavier; Spanish: Francisco Javier; Portuguese: Francisco Xavier; 7 April 1506 – 3 December 1552), venerated as Saint Francis Xavier, was a Spanish Catholic missionary and saint who co-founded the Society of Jesus and, as a representative of the Portuguese empire, led the first Christian mission to Japan.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Born in the town of Xavier, Spain, he was a companion of Ignatius of Loyola and one of the first seven Jesuits who took vows of poverty and chastity at Montmartre, Paris in 1534. He led an extensive mission into Asia, mainly the Portuguese Empire in the East, and was influential in evangelisation work, most notably in early modern India. He was extensively involved in the missionary activity in Portuguese India. In 1546, Francis Xavier proposed the establishment of the Goan Inquisition in a letter addressed to the Portuguese King, John III. While some sources claim that he actually asked for a special minister whose sole office would be to further Christianity in Goa, others disagree with this assertion. As a representative of the king of Portugal, he was also the first major Christian missionary to venture into Borneo, the Maluku Islands, Japan, and other areas. In those areas, struggling to learn the local languages and in the face of opposition, he had less success than he had enjoyed in India. Xavier was about to extend his mission to Ming China, when he died on Shangchuan Island.",
"title": ""
},
{
"paragraph_id": 2,
"text": "He was beatified by Pope Paul V on 25 October 1619 and canonized by Pope Gregory XV on 12 March 1622. In 1624, he was made co-patron of Navarre. Known as the \"Apostle of the Indies\", \"Apostle of the Far East\", \"Apostle of China\" and \"Apostle of Japan\", he is considered to be one of the greatest missionaries since Paul the Apostle. In 1927, Pope Pius XI published the decree \"Apostolicorum in Missionibus\" naming Francis Xavier, along with Thérèse of Lisieux, co-patron of all foreign missions. He is now co-patron saint of Navarre, with Fermin. The Day of Navarre in Navarre, Spain, marks the anniversary of Francis Xavier's death, on 3 December.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Francis Xavier was born in the Castle of Xavier, in the Kingdom of Navarre, on 7 April 1506 into an influential noble family. He was the youngest son of Don Juan de Jasso y Atondo, Lord of Idocín, president of the Royal Council of the Kingdom of Navarre, and seneschal of the Castle of Xavier (a doctor in law by the University of Bologna, belonging to a prosperous noble family of Saint-Jean-Pied-de-Port, later privy counsellor and finance minister to King John III of Navarre) and Doña María de Azpilcueta y Aznárez, sole heiress to the Castle of Xavier (related to the theologian and philosopher Martín de Azpilcueta). His brother Miguel de Jasso (later known as Miguel de Javier) became Lord of Xavier and Idocín at the death of his parents (a direct ancestor of the Counts of Javier). Basque and Romance were his two mother tongues.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "In 1512, Ferdinand, King of Aragon and regent of Castile, invaded Navarre, initiating a war that lasted over 18 years. Three years later, Francis's father died when Francis was only nine years old. In 1516, Francis's brothers participated in a failed Navarrese-French attempt to expel the Spanish invaders from the kingdom. The Spanish Governor, Cardinal Cisneros, confiscated the family lands, demolished the outer wall, the gates, and two towers of the family castle, and filled in the moat. In addition, the height of the keep was reduced by half. Only the family residence inside the castle was left. In 1522, one of Francis's brothers participated with 200 Navarrese nobles in dogged but failed resistance against the Castilian Count of Miranda in Amaiur, Baztan, the last Navarrese territorial position south of the Pyrenees.",
"title": "Early life"
},
{
"paragraph_id": 5,
"text": "In 1525, Francis went to study in Paris at the Collège Sainte-Barbe, University of Paris, where he spent the next eleven years. In the early days he acquired some reputation as an athlete and a high-jumper.",
"title": "Early life"
},
{
"paragraph_id": 6,
"text": "In 1529, Francis shared lodgings with his friend Pierre Favre. A new student, Ignatius of Loyola, came to room with them. At 38, Ignatius was much older than Pierre and Francis, who were both 23 at the time. Ignatius convinced Pierre to become a priest, but was unable to convince Francis, who had aspirations of worldly advancement. At first, Francis regarded the new lodger as a joke and was sarcastic about his efforts to convert students. When Pierre left their lodgings to visit his family and Ignatius was alone with Francis, he was able to slowly break down Francis's resistance. According to most biographies Ignatius is said to have posed the question: \"What will it profit a man to gain the whole world, and lose his own soul?\" However, according to James Broderick such method is not characteristic of Ignatius and there is no evidence that he employed it at all.",
"title": "Early life"
},
{
"paragraph_id": 7,
"text": "In 1530, Francis received the degree of Master of Arts, and afterwards taught Aristotelian philosophy at Beauvais College, University of Paris.",
"title": "Early life"
},
{
"paragraph_id": 8,
"text": "On 15 August 1534, seven students met in a crypt beneath the Church of Saint Denis (now Saint Pierre de Montmartre), on the hill of Montmartre, overlooking Paris. They were Francis, Ignatius of Loyola, Alfonso Salmeron, Diego Laínez, Nicolás Bobadilla from Spain, Peter Faber from Savoy, and Simão Rodrigues from Portugal. They made private vows of poverty, chastity, and obedience to the Pope, and also vowed to go to the Holy Land to convert infidels. Francis began his study of theology in 1534 and was ordained on 24 June 1537.",
"title": "Missionary work"
},
{
"paragraph_id": 9,
"text": "In 1539, after long discussions, Ignatius drew up a formula for a new religious order, the Society of Jesus (the Jesuits). Ignatius's plan for the order was approved by Pope Paul III in 1540.",
"title": "Missionary work"
},
{
"paragraph_id": 10,
"text": "In 1540, King John of Portugal had Pedro Mascarenhas, Portuguese ambassador to the Holy See, request Jesuit missionaries to spread the faith in his new possessions in India, where the king believed that Christian values were eroding among the Portuguese. After successive appeals to the Pope asking for missionaries for the East Indies under the Padroado agreement, John III was encouraged by Diogo de Gouveia, rector of the Collège Sainte-Barbe, to recruit the newly graduated students who had established the Society of Jesus.",
"title": "Missionary work"
},
{
"paragraph_id": 11,
"text": "Ignatius promptly appointed Nicholas Bobadilla and Simão Rodrigues. At the last moment, however, Bobadilla became seriously ill. With some hesitance and uneasiness, Ignatius asked Francis to go in Bobadilla's place. Thus, Francis Xavier began his life as the first Jesuit missionary almost accidentally.",
"title": "Missionary work"
},
{
"paragraph_id": 12,
"text": "Leaving Rome on 15 March 1540, in the Ambassador's train, Francis took with him a breviary, a catechism, and De Institutione bene vivendi by Croatian humanist Marko Marulić, a Latin book that had become popular in the Counter-Reformation. According to a 1549 letter of F. Balthasar Gago from Goa, it was the only book that Francis read or studied. Francis reached Lisbon in June 1540 and, four days after his arrival, he and Rodrigues were summoned to a private audience with the King and the Queen.",
"title": "Missionary work"
},
{
"paragraph_id": 13,
"text": "Francis Xavier devoted much of his life to missions in Asia, mainly in four centres: Malacca, Amboina and Ternate (in the Maluku Islands of Indonesia), Japan, and off-shore China. His growing information about new places indicated to him that he had to go to what he understood were centres of influence for the whole region. China loomed large from his days in India. Japan was particularly attractive because of its culture. For him, these areas were interconnected; they could not be evangelised separately.",
"title": "Missionary work"
},
{
"paragraph_id": 14,
"text": "Francis Xavier left Lisbon on 7 April 1541, his thirty-fifth birthday, along with two other Jesuits and the new viceroy Martim Afonso de Sousa, on board the Santiago. As he departed, Francis was given a brief from the pope appointing him apostolic nuncio to the East. From August until March 1542 he remained in Portuguese Mozambique, and arrived in Goa, then capital of Portuguese India, on 6 May 1542, thirteen months after leaving Lisbon.",
"title": "Missionary work"
},
{
"paragraph_id": 15,
"text": "The Portuguese, following quickly on the great voyages of discovery, had established themselves at Goa thirty years earlier. Francis's primary mission, as ordered by King John III, was to restore Christianity among the Portuguese settlers. According to Teotonio R. DeSouza, recent critical accounts indicate that apart from the posted civil servants, \"the great majority of those who were dispatched as 'discoverers' were the riff-raff of Portuguese society, picked up from Portuguese jails.\" Nor did the soldiers, sailors, or merchants come to do missionary work, and Imperial policy permitted the outflow of disaffected nobility. Many of the arrivals formed liaisons with local women and adopted Indian culture. Missionaries often wrote against the \"scandalous and undisciplined\" behaviour of their fellow Christians.",
"title": "Missionary work"
},
{
"paragraph_id": 16,
"text": "The Christian population had churches, clergy, and a bishop, but there were few preachers and no priests beyond the walls of Goa. Xavier decided that he must begin by instructing the Portuguese themselves, and gave much of his time to the teaching of children. The first five months he spent in preaching and ministering to the sick in the hospitals. After that, he walked through the streets ringing a bell to summon the children and servants to catechism. He was invited to head Saint Paul's College, a pioneer seminary for the education of secular priests, which became the first Jesuit headquarters in Asia.",
"title": "Missionary work"
},
{
"paragraph_id": 17,
"text": "Conversion efforts",
"title": "Missionary work"
},
{
"paragraph_id": 18,
"text": "Xavier soon learned that along the Pearl Fishery Coast, which extends from Cape Comorin on the southern tip of India to the island of Mannar, off Ceylon (Sri Lanka), there was a Jāti of people called Paravas. Many of them had been baptised ten years before, merely to please the Portuguese who had helped them against the Moors, but remained uninstructed in the faith. Accompanied by several native clerics from the seminary at Goa, he set sail for Cape Comorin in October 1542. He taught those who had already been baptised and preached to those who weren't. His efforts with the high-caste Brahmins remained unavailing. The Brahmin and Muslim authorities in Travancore opposed Xavier with violence; time and again his hut was burned down over his head, and once he saved his life only by hiding among the branches of a large tree.",
"title": "Missionary work"
},
{
"paragraph_id": 19,
"text": "He devoted almost three years to the work of preaching to the people of southern India and Ceylon, converting many. He built nearly 40 churches along the coast, including St. Stephen's Church, Kombuthurai, mentioned in his letters dated 1544.",
"title": "Missionary work"
},
{
"paragraph_id": 20,
"text": "During this time, he was able to visit the tomb of Thomas the Apostle in Mylapore (now part of Madras/Chennai then in Portuguese India). He set his sights eastward in 1545 and planned a missionary journey to Makassar on the island of Celebes (today's Indonesia).",
"title": "Missionary work"
},
{
"paragraph_id": 21,
"text": "As the first Jesuit in India, Francis had difficulty achieving much success in his missionary trips. His successors, such as de Nobili, Matteo Ricci, and Beschi, attempted to convert the noblemen first as a means to influence more people, while Francis had initially interacted most with the lower classes; (later though, in Japan, Francis changed tack by paying tribute to the Emperor and seeking an audience with him).",
"title": "Missionary work"
},
{
"paragraph_id": 22,
"text": "In the spring of 1545 Xavier started for Portuguese Malacca. He laboured there for the last months of that year. About January 1546, Xavier left Malacca for the Maluku Islands, where the Portuguese had some settlements. For a year and a half, he preached the Gospel there. He went first to Ambon Island, where he stayed until mid-June. He then visited the other Maluku Islands, including Ternate, Baranura, and Morotai. Shortly after Easter 1547, he returned to Ambon Island; a few months later he returned to Malacca. While there, Malacca was attacked by the Acehnese from Sumatra, and through preaching Xavier inspired the Portuguese to seek battle, achieving a victory at the Battle of Perlis River, despite being heavily outnumbered.",
"title": "Missionary work"
},
{
"paragraph_id": 23,
"text": "In Malacca in December 1547, Francis Xavier met a Japanese man named Anjirō. Anjirō had heard of Francis in 1545 and had travelled from Kagoshima to Malacca to meet him. Having been charged with murder, Anjirō had fled Japan. He told Francis extensively about his former life, and the customs and culture of his homeland. Anjirō became the first Japanese Christian and adopted the name 'Paulo de Santa Fé'. He later helped Xavier as a mediator and interpreter for the mission to Japan that now seemed much more possible.",
"title": "Missionary work"
},
{
"paragraph_id": 24,
"text": "In January 1548 Francis returned to Goa to attend to his responsibilities as superior of the mission there. The next 15 months were occupied with various journeys and administrative measures. He left Goa on 15 April 1549, stopped at Malacca, and visited Canton. He was accompanied by Anjiro, two other Japanese men, Father Cosme de Torrès and Brother Juan Fernández. He had taken with him presents for the \"King of Japan\" since he was intending to introduce himself as the Apostolic Nuncio.",
"title": "Missionary work"
},
{
"paragraph_id": 25,
"text": "Europeans had already come to Japan; the Portuguese had landed in 1543 on the island of Tanegashima, where they introduced matchlock firearms to Japan.",
"title": "Missionary work"
},
{
"paragraph_id": 26,
"text": "From Amboina, he wrote to his companions in Europe: \"I asked a Portuguese merchant, ... who had been for many days in Anjirō's country of Japan, to give me ... some information on that land and its people from what he had seen and heard. ...All the Portuguese merchants coming from Japan tell me that if I go there I shall do great service for God our Lord, more than with the pagans of India, for they are a very reasonable people.\" (To His Companions Residing in Rome, From Cochin, 20 January 1548, no. 18, p. 178).",
"title": "Missionary work"
},
{
"paragraph_id": 27,
"text": "Francis Xavier reached Japan on 27 July 1549, with Anjiro and three other Jesuits, but he was not permitted to enter any port his ship arrived at until 15 August, when he went ashore at Kagoshima, the principal port of Satsuma Province on the island of Kyūshū. As a representative of the Portuguese king, he was received in a friendly manner. Shimazu Takahisa (1514–1571), daimyō of Satsuma, gave a friendly reception to Francis on 29 September 1549, but in the following year he forbade the conversion of his subjects to Christianity under penalty of death; Christians in Kagoshima could not be given any catechism in the following years. The Portuguese missionary Pedro de Alcáçova would later write in 1554:",
"title": "Missionary work"
},
{
"paragraph_id": 28,
"text": "In Cangoxima, the first place Father Master Francisco stopped at, there were a good number of Christians, although there was no one there to teach them; the shortage of labourers prevented the whole kingdom from becoming Christian.",
"title": "Missionary work"
},
{
"paragraph_id": 29,
"text": "Francis was the first Jesuit to go to Japan as a missionary. He brought with him paintings of the Madonna and the Madonna and Child. These paintings were used to help teach the Japanese about Christianity. There was a huge language barrier as Japanese was unlike other languages the missionaries had previously encountered. For a long time, Francis struggled to learn the language. He was hosted by Anjirō's family until October 1550. From October to December 1550, he resided in Yamaguchi. Shortly before Christmas, he left for Kyoto but failed to meet with the Emperor. He returned to Yamaguchi in March 1551, where the daimyo of the province gave him permission to preach.",
"title": "Missionary work"
},
{
"paragraph_id": 30,
"text": "Having learned that evangelical poverty did not have the appeal in Japan that it had in Europe and in India, he decided to change his approach. Hearing after a time that a Portuguese ship had arrived at a port in the province of Bungo in Kyushu and that the prince there would like to see him, Xavier now set out southward. The Jesuit, in a fine cassock, surplice, and stole, was attended by thirty gentlemen and as many servants, all in their best clothes. Five of them bore on cushions valuable articles, including a portrait of Our Lady and a pair of velvet slippers, these not gifts for the prince, but solemn offerings to Xavier, to impress the onlookers with his eminence. Handsomely dressed, with his companions acting as attendants, he presented himself before Oshindono, the ruler of Nagate, and as a representative of the great kingdom of Portugal, offered him letters and presents: a musical instrument, a watch, and other attractive objects which had been given him by the authorities in India for the emperor.",
"title": "Missionary work"
},
{
"paragraph_id": 31,
"text": "For forty-five years the Jesuits were the only missionaries in Asia, but the Franciscans began proselytizing in Asia, as well. Christian missionaries were later forced into exile, along with their assistants. However, some were able to stay behind. Christianity was then kept underground so as to not be persecuted.",
"title": "Missionary work"
},
{
"paragraph_id": 32,
"text": "The Japanese people were not easily converted; many of the people were already Buddhist or Shinto. Francis tried to combat the disposition of some of the Japanese that a God who had created everything, including evil, could not be good. Despite Francis's different religion, he felt that they were good people, much like Europeans, and could be converted.",
"title": "Missionary work"
},
{
"paragraph_id": 33,
"text": "Xavier was welcomed by the Shingon monks since he used the word Dainichi for the Christian God; attempting to adapt the concept to local traditions. As Xavier learned more about the religious nuances of the word, he changed to Deusu from the Latin and Portuguese Deus. The monks later realised that Xavier was preaching a rival religion and grew more resistant towards his attempts at conversion.",
"title": "Missionary work"
},
{
"paragraph_id": 34,
"text": "With the passage of time, his sojourn in Japan could be considered somewhat fruitful as attested by congregations established in Hirado, Yamaguchi, and Bungo. Xavier worked for more than two years in Japan and saw his successor-Jesuits established. He then decided to return to India. Historians debate the exact path by which he returned, but from evidence attributed to the captain of his ship, he may have travelled through Tanegeshima and Minato, and avoided Kagoshima because of the hostility of the daimyo.",
"title": "Missionary work"
},
{
"paragraph_id": 35,
"text": "During his trip from Japan back to India, a tempest forced him to stop on an island near Guangzhou, Guangdong, China, where he met Diogo Pereira, a rich merchant and an old friend from Cochin. Pereira showed him a letter from Portuguese prisoners in Guangzhou, asking for a Portuguese ambassador to speak to the Chinese Emperor on their behalf. Later during the voyage, he stopped at Malacca on 27 December 1551 and was back in Goa by January 1552.",
"title": "Missionary work"
},
{
"paragraph_id": 36,
"text": "On 17 April he set sail with Diogo Pereira on the Santa Cruz for China. He planned to introduce himself as Apostolic Nuncio and Pereira as the ambassador of the King of Portugal. But then he realized that he had forgotten his testimonial letters as an Apostolic Nuncio. Back in Malacca, he was confronted by the captain Álvaro de Ataíde da Gama who now had total control over the harbour. The captain refused to recognize his title of Nuncio, asked Pereira to resign from his title of ambassador, named a new crew for the ship, and demanded the gifts for the Chinese Emperor be left in Malacca.",
"title": "Missionary work"
},
{
"paragraph_id": 37,
"text": "In late August 1552, the Santa Cruz reached the Chinese island of Shangchuan, 14 km away from the southern coast of mainland China, near Taishan, Guangdong, 200 km south-west of what later became Hong Kong. At this time, he was accompanied only by a Jesuit student, Álvaro Ferreira, a Chinese man called António, and a Malabar servant called Christopher. Around mid-November, he sent a letter saying that a man had agreed to take him to the mainland in exchange for a large sum of money. Having sent back Álvaro Ferreira, he remained alone with António. He died from a fever at Shangchuan, Taishan, China, on 3 December 1552, while he was waiting for a boat that would take him to mainland China.",
"title": "Missionary work"
},
{
"paragraph_id": 38,
"text": "Xavier was first buried on a beach at Shangchuan Island, Taishan, Guangdong. His body was taken from the island in February 1553 and temporarily buried in St. Paul's Church in Portuguese Malacca on 22 March 1553. An open grave in the church now marks the place of Xavier's burial. Pereira came back from Goa, removed the corpse shortly after 15 April 1553, and moved it to his house. On 11 December 1553, Xavier's body was shipped to Goa.",
"title": "Burials and relics"
},
{
"paragraph_id": 39,
"text": "The mostly-incorruptible body is now in the Basilica of Bom Jesus in Goa, where it was placed in a glass container encased in a silver casket on 2 December 1637. This casket, constructed by Goan silversmiths between 1636 and 1637, was an exemplary blend of Italian and Indian aesthetic sensibilities. There are 32 silver plates on all four sides of the casket, depicting different episodes from the life of Xavier:",
"title": "Burials and relics"
},
{
"paragraph_id": 40,
"text": "The right forearm, which Xavier used to bless and baptise his converts, was detached by Superior General Claudio Acquaviva in 1614. It has been displayed since in a silver reliquary at the main Jesuit church in Rome, Il Gesù.",
"title": "Burials and relics"
},
{
"paragraph_id": 41,
"text": "Another of Xavier's arm bones was brought to Macau where it was kept in a silver reliquary. The relic was destined for Japan but religious persecution there persuaded the church to keep it in Macau's Cathedral of St. Paul. It was subsequently moved to St. Joseph's and in 1978 to the Chapel of St. Francis Xavier on Coloane Island. More recently the relic was moved to St. Joseph's Church.",
"title": "Burials and relics"
},
{
"paragraph_id": 42,
"text": "A relict from the right hand of St Francis Xavier is on display at St Mary's Cathedral, Sydney.",
"title": "Burials and relics"
},
{
"paragraph_id": 43,
"text": "In 2006, on the 500th anniversary of his birth, the Xavier Tomb Monument and Chapel on Shangchuan Island, in ruins after years of neglect under communist rule in China, was restored with support from the alumni of Wah Yan College, a Jesuit high school in Hong Kong.",
"title": "Burials and relics"
},
{
"paragraph_id": 44,
"text": "From December 2017 to February 2018, Catholic Christian Outreach (CCO) in cooperation with the Jesuits, and the Archdiocese of Ottawa (Canada) brought Xavier's right forearm to tour throughout Canada. The faithful, especially university students participating with CCO at Rise Up 2017 in Ottawa, venerated the relics. The tour continued to every city where CCO and/or the Jesuits are present in Canada: Quebec City, St. John's, Halifax, St. Francis Xavier University in Antigonish (neither CCO nor the Jesuits are present here), Kingston, Toronto, Winnipeg, Saskatoon, Regina, Calgary, Vancouver, Victoria, and Montreal before returning to Ottawa. The relic was then returned to Rome with a Mass of Thanksgiving celebrated by Archbishop Terrence Prendergast at the Church of the Gesu.",
"title": "Burials and relics"
},
{
"paragraph_id": 45,
"text": "Francis Xavier was beatified by Paul V on 25 October 1619, and was canonized by Gregory XV on 12 March 1622, at the same time as Ignatius Loyola. Pius XI proclaimed him the \"Patron of Catholic Missions\". His feast day is 3 December.",
"title": "Veneration"
},
{
"paragraph_id": 46,
"text": "Saint Francis Xavier's relics are kept in a silver casket, elevated inside the Bom Jesus Basilica and are exposed (being brought to ground level) generally every ten years, but this is discretionary. The sacred relics went on display starting on 22 November 2014 at the XVII Solemn Exposition. The display closed on 4 January 2015. The previous exposition, the sixteenth, was held from 21 November 2004 to 2 January 2005.",
"title": "Veneration"
},
{
"paragraph_id": 47,
"text": "Relics of Saint Francis Xavier are also found in the Espirito Santo (Holy Spirit) Church, Margão, in Sanv Fransiku Xavierachi Igorz (Church of St. Francis Xavier), Batpal, Canacona, Goa, and at St. Francis Xavier Chapel, Portais, Panjim.",
"title": "Veneration"
},
{
"paragraph_id": 48,
"text": "Other pilgrimage centres include Xavier's birthplace in Navarra; the Church of the Gesù, Rome; Malacca (where he was buried for two years, before being brought to Goa); and Sancian (place of death).",
"title": "Veneration"
},
{
"paragraph_id": 49,
"text": "Xavier is a major venerated saint in both Sonora and the neighbouring U.S. state of Arizona. In Magdalena de Kino in Sonora, Mexico, in the Church of Santa María Magdalena, there is a reclining statue of San Francisco Xavier brought by pioneer Jesuit missionary Padre Eusebio Kino in the early 18th century. The statue is said to be miraculous and is the object of pilgrimage for many in the region. Also the Mission San Xavier del Bac is a pilgrimage site. The mission is an active parish church ministering to the people of the San Xavier District, Tohono O'odham Nation, and nearby Tucson, Arizona.",
"title": "Veneration"
},
{
"paragraph_id": 50,
"text": "Francis Xavier is honored in the Church of England and in the Episcopal Church on 3 December.",
"title": "Veneration"
},
{
"paragraph_id": 51,
"text": "The Novena of Grace is a popular devotion to Francis Xavier, typically prayed either on the nine days before 3 December, or on 4 March through 12 March (the anniversary of Pope Gregory XV's canonisation of Xavier in 1622). It began with the Italian Jesuit missionary Marcello Mastrilli. Before he could travel to the Far East, Mastrilli was gravely injured in a freak accident after a festive celebration dedicated to the Immaculate Conception in Naples. Delirious and on the verge of death, Mastrilli saw Xavier, who he later said asked him to choose between travelling or death by holding the respective symbols, to which Mastrilli answered, \"I choose that which God wills.\" Upon regaining his health, Mastrilli made his way via Goa and the Philippines to Satsuma, Japan. The Tokugawa shogunate beheaded the missionary in October 1637, after undergoing three days of tortures involving the volcanic sulphurous fumes from Mt. Unzen, known as the Hell mouth or \"pit\" that had supposedly caused an earlier missionary to renounce his faith.",
"title": "Veneration"
},
{
"paragraph_id": 52,
"text": "Francis Xavier became widely noteworthy for his missionary work, both as an organiser and as a pioneer; he reputedly converted more people than anyone else had done since Paul the Apostle. In 2006 Pope Benedict XVI said of both Ignatius of Loyola and Francis Xavier: \"not only their history which was interwoven for many years from Paris and Rome, but a unique desire – a unique passion, it could be said – moved and sustained them through different human events: the passion to give to God-Trinity a glory always greater and to work for the proclamation of the Gospel of Christ to the peoples who had been ignored.\" His personal efforts most affected religious practice in India and in the East Indies (Indonesia, Malaysia, Timor). As of 2021 India still has numerous Jesuit missions and many more schools. Xavier also worked to propagate Christianity in China and Japan. However, following the persecutions (1587 onwards) instituted by Toyotomi Hideyoshi and the subsequent closing of Japan to foreigners (1633 onwards), the Christians of Japan had to go underground to preserve an independent Christian culture. Likewise, while Xavier inspired many missionaries to China, Chinese Christians also were forced underground there and developed their own Christian culture.",
"title": "Legacy"
},
{
"paragraph_id": 53,
"text": "A small chapel designed by Achille-Antoine Hermitte was completed in 1869 over Xavier's death-place on Shangchuan Island, Canton. It was damaged and restored several times; the most recent restoration in 2006 marked the 500th anniversary of the saint's birth.",
"title": "Legacy"
},
{
"paragraph_id": 54,
"text": "Francis Xavier is the patron saint of his native Navarre, which celebrates his feast day on 3 December as a government holiday. In addition to Roman Catholic Masses remembering Xavier on that day (now known as the Day of Navarra), celebrations in the surrounding weeks honour the region's cultural heritage. Furthermore, in the 1940s, devoted Catholics instituted the Javierada, an annual day-long pilgrimage (often on foot) from the capital at Pamplona to Xavier, where the Jesuits built a basilica and museum and restored Francis Xavier's family's castle.",
"title": "Legacy"
},
{
"paragraph_id": 55,
"text": "As the foremost saint from Navarre and one of the main Jesuit saints, Francis Xavier is much venerated in Spain and the Hispanic countries where Francisco Javier or Javier are common male given names. The alternative spelling Xavier is also popular in the Basque Country, Portugal, Catalonia, Brazil, France, Belgium, and southern Italy. In India, the spelling Xavier is almost always used, and the name is quite common among Christians, especially in Goa and in the southern states of Tamil Nadu, Kerala, and Karnataka. The names Francisco Xavier, António Xavier, João Xavier, Caetano Xavier, Domingos Xavier and so forth, were very common till quite recently in Goa. Fransiskus Xaverius is commonly used as a name for Indonesian Catholics, usually abbreviated as FX. In Austria and Bavaria the name is spelt as Xaver (pronounced /ˈksaːfɐ/) and often used in addition to Francis as Franz-Xaver (/frant͡sˈksaːfɐ/). In Polish the name becomes Ksawery. Many Catalan men are named for him, often using the two-name combination Francesc Xavier. In English-speaking countries, \"Xavier\" until recently was likely to follow \"Francis\"; in the 2000s, however, \"Xavier\" by itself became more popular than \"Francis\", and after 2001 featured as one of the hundred most common male baby names in the US. Furthermore, the Sevier family name, possibly most famous in the United States for John Sevier (1745–1815), originated from the name \"Xavier\".",
"title": "Legacy"
},
{
"paragraph_id": 56,
"text": "Many churches all over the world, often founded by Jesuits, have been named in honour of Xavier. The many in the United States include the historic St. Francis Xavier Shrine at Warwick, Maryland (founded 1720), and the Basilica of St. Francis Xavier in Dyersville, Iowa. Note also the American educational teaching order, the Xaverian Brothers, and the Mission San Xavier del Bac in Tucson, Arizona (founded in 1692, and known for its Spanish Colonial architecture).",
"title": "Legacy"
},
{
"paragraph_id": 57,
"text": "Shortly before leaving for the East, Xavier issued a famous instruction to Father Gaspar Barazeuz who was leaving to go to Ormuz (a kingdom on an island in the Persian Gulf, formerly attached to the Empire of Persia, now part of Iran), that he should mix with sinners:",
"title": "Legacy"
},
{
"paragraph_id": 58,
"text": "And if you wish to bring forth much fruit, both for yourselves and for your neighbours, and to live consoled, converse with sinners, making them unburden themselves to you. These are the living books by which you are to study, both for your preaching and for your own consolation. I do not say that you should not on occasion read written books... to support what you say against vices with authorities from the Holy Scriptures and examples from the lives of the saints.",
"title": "Legacy"
},
{
"paragraph_id": 59,
"text": "Modern scholars assess the number of people converted to Christianity by Francis Xavier at around 30,000. While some of Xavier's methods have subsequently come under criticism, he has also earned praise. He insisted that missionaries adapt to many of the customs, and most certainly to the language, of the culture they wish to evangelise. And unlike later missionaries, Xavier supported an educated native clergy. Though for a time it seemed that persecution had subsequently destroyed his work in Japan, Protestant missionaries three centuries later discovered that approximately 100,000 Christians still practised the faith in the Nagasaki area.",
"title": "Legacy"
},
{
"paragraph_id": 60,
"text": "Francis Xavier's work initiated permanent change in eastern Indonesia, and he became known as the \"Apostle of the Indies\" – in 1546–1547 he worked in the Maluku Islands among the people of Ambon, Ternate, and Morotai (or Moro), and laid the foundations for a permanent mission. After he left the Maluku Islands, others carried on his work, and by the 1560s there were 10,000 Roman Catholics in the area, mostly on Ambon. By the 1590s, there were 50,000 to 60,000.",
"title": "Legacy"
},
{
"paragraph_id": 61,
"text": "In 1546, Francis Xavier proposed the establishment of the Goa Inquisition in a letter addressed to the Portuguese King, John III. Xavier addresses the King as the 'Vicar of Christ', owing to his royal patronage over Christianity in the East Indies. In a letter dated 20 January 1548, he requests the king to be tough on the Portuguese governor in India so that he may be active in propagating the faith. Xavier also wrote to the Portuguese king asking for protection in regards to new converts who were being harassed by Portuguese commandants. Francis Xavier died in 1552 without ever living to see the commencement of the Goa Inquisition.",
"title": "Legacy"
}
]
| Francis Xavier, SJ, venerated as Saint Francis Xavier, was a Spanish Catholic missionary and saint who co-founded the Society of Jesus and, as a representative of the Portuguese empire, led the first Christian mission to Japan. Born in the town of Xavier, Spain, he was a companion of Ignatius of Loyola and one of the first seven Jesuits who took vows of poverty and chastity at Montmartre, Paris in 1534. He led an extensive mission into Asia, mainly the Portuguese Empire in the East, and was influential in evangelisation work, most notably in early modern India. He was extensively involved in the missionary activity in Portuguese India. In 1546, Francis Xavier proposed the establishment of the Goan Inquisition in a letter addressed to the Portuguese King, John III. While some sources claim that he actually asked for a special minister whose sole office would be to further Christianity in Goa, others disagree with this assertion. As a representative of the king of Portugal, he was also the first major Christian missionary to venture into Borneo, the Maluku Islands, Japan, and other areas. In those areas, struggling to learn the local languages and in the face of opposition, he had less success than he had enjoyed in India. Xavier was about to extend his mission to Ming China, when he died on Shangchuan Island. He was beatified by Pope Paul V on 25 October 1619 and canonized by Pope Gregory XV on 12 March 1622. In 1624, he was made co-patron of Navarre. Known as the "Apostle of the Indies", "Apostle of the Far East", "Apostle of China" and "Apostle of Japan", he is considered to be one of the greatest missionaries since Paul the Apostle. In 1927, Pope Pius XI published the decree "Apostolicorum in Missionibus" naming Francis Xavier, along with Thérèse of Lisieux, co-patron of all foreign missions. He is now co-patron saint of Navarre, with Fermin. The Day of Navarre in Navarre, Spain, marks the anniversary of Francis Xavier's death, on 3 December. | 2001-10-17T21:16:53Z | 2023-12-21T00:56:38Z | [
"Template:Use shortened footnotes",
"Template:Lang",
"Template:Further",
"Template:Short description",
"Template:Reflist",
"Template:Cite web",
"Template:ISBN",
"Template:Infobox saint",
"Template:Snd",
"Template:Cite book",
"Template:Cite journal",
"Template:In lang",
"Template:Refbegin",
"Template:Authority control",
"Template:Sfn",
"Template:Main",
"Template:Portal",
"Template:Wikiquote",
"Template:Redirect",
"Template:Family name hatnote",
"Template:Infobox manner of address",
"Template:Blockquote",
"Template:Jesuits",
"Template:History of Christianity",
"Template:History of Catholic theology",
"Template:Jesuit",
"Template:Multiple image",
"Template:Webarchive",
"Template:Librivox author",
"Template:Christianity and China",
"Template:Use dmy dates",
"Template:As of",
"Template:IPA",
"Template:Notelist",
"Template:Cite CE1913",
"Template:Internet Archive author",
"Template:History of the Roman Catholic Church",
"Template:Citation needed",
"Template:Cite news",
"Template:Refend",
"Template:Cite EB1911",
"Template:Commons category",
"Template:EB1911 poster"
]
| https://en.wikipedia.org/wiki/Francis_Xavier |
10,958 | Fossil | A fossil (from Classical Latin fossilis, lit. 'obtained by digging') is any preserved remains, impression, or trace of any once-living thing from a past geological age. Examples include bones, shells, exoskeletons, stone imprints of animals or microbes, objects preserved in amber, hair, petrified wood and DNA remnants. The totality of fossils is known as the fossil record.
Paleontology is the study of fossils: their age, method of formation, and evolutionary significance. Specimens are usually considered to be fossils if they are over 10,000 years old. The oldest fossils are between about 3.48 and 4.1 billion years old. The observation in the 19th century that certain fossils were associated with certain rock strata led to the recognition of a geological timescale and the relative ages of different fossils. The development of radiometric dating techniques in the early 20th century allowed scientists to quantitatively measure the absolute ages of rocks and the fossils they host.
There are many processes that lead to fossilization, including permineralization, casts and molds, authigenic mineralization, replacement and recrystallization, adpression, carbonization, and bioimmuration.
Fossils vary in size from one-micrometre (1 µm) bacteria to dinosaurs and trees, many meters long and weighing many tons. A fossil normally preserves only a portion of the deceased organism, usually that portion that was partially mineralized during life, such as the bones and teeth of vertebrates, or the chitinous or calcareous exoskeletons of invertebrates. Fossils may also consist of the marks left behind by the organism while it was alive, such as animal tracks or feces (coprolites). These types of fossil are called trace fossils or ichnofossils, as opposed to body fossils. Some fossils are biochemical and are called chemofossils or biosignatures.
Though the fossil record is incomplete, numerous studies have demonstrated that there is enough information available to give a good understanding of the pattern of diversification of life on Earth. In addition, the record can be used to predict and fill gaps, as in the discovery of Tiktaalik in the Arctic of Canada.
The process of fossilization varies according to tissue type and external conditions:
Permineralization is a process of fossilization that occurs when an organism is buried. The empty spaces within an organism (spaces filled with liquid or gas during life) become filled with mineral-rich groundwater. Minerals precipitate from the groundwater, occupying the empty spaces. This process can occur in very small spaces, such as within the cell wall of a plant cell. Small scale permineralization can produce very detailed fossils. For permineralization to occur, the organism must become covered by sediment soon after death, otherwise the remains are destroyed by scavengers or decomposition. The degree to which the remains are decayed when covered determines the later details of the fossil. Some fossils consist only of skeletal remains or teeth; other fossils contain traces of skin, feathers or even soft tissues. This is a form of diagenesis.
In some cases, the original remains of the organism completely dissolve or are otherwise destroyed. The remaining organism-shaped hole in the rock is called an external mold. If this void is later filled with sediment, the resulting cast resembles what the organism looked like. An endocast, or internal mold, is the result of sediments filling an organism's interior, such as the inside of a bivalve or snail or the hollow of a skull. Endocasts are sometimes termed Steinkerns, especially when bivalves are preserved this way.
This is a special form of cast and mold formation. If the chemistry is right, the organism (or fragment of organism) can act as a nucleus for the precipitation of minerals such as siderite, resulting in a nodule forming around it. If this happens rapidly before significant decay to the organic tissue, very fine three-dimensional morphological detail can be preserved. Nodules from the Carboniferous Mazon Creek fossil beds of Illinois, US, are among the best documented examples of such mineralization.
Replacement occurs when the shell, bone, or other tissue is replaced with another mineral. In some cases mineral replacement of the original shell occurs so gradually and at such fine scales that microstructural features are preserved despite the total loss of original material. A shell is said to be recrystallized when the original skeletal compounds are still present but in a different crystal form, as from aragonite to calcite.
Compression fossils, such as those of fossil ferns, are the result of chemical reduction of the complex organic molecules composing the organism's tissues. In this case the fossil consists of original material, albeit in a geochemically altered state. This chemical change is an expression of diagenesis. Often what remains is a carbonaceous film known as a phytoleim, in which case the fossil is known as a compression. Often, however, the phytoleim is lost and all that remains is an impression of the organism in the rock—an impression fossil. In many cases, however, compressions and impressions occur together. For instance, when the rock is broken open, the phytoleim will often be attached to one part (compression), whereas the counterpart will just be an impression. For this reason, one term covers the two modes of preservation: adpression.
Given the antiquity of such material, an unexpected exception to the chemical alteration of an organism's tissues during fossilization has been the discovery of soft tissue in dinosaur fossils, including blood vessels, and the isolation of proteins and evidence for DNA fragments. In 2014, Mary Schweitzer and her colleagues reported the presence of iron particles (goethite, α-FeO(OH)) associated with soft tissues recovered from dinosaur fossils. Based on experiments that studied the interaction of iron in haemoglobin with blood vessel tissue, they proposed that solution hypoxia coupled with iron chelation enhances the stability and preservation of soft tissue, providing a basis for explaining the unforeseen preservation of fossil soft tissues. However, a slightly older study of eight taxa ranging in age from the Devonian to the Jurassic found that reasonably well-preserved fibrils, probably representing collagen, were preserved in all these fossils, and that the quality of preservation depended mostly on the arrangement of the collagen fibers, with tight packing favoring good preservation. There seemed to be no correlation between geological age and quality of preservation within that timeframe.
Fossils that are carbonized or coalified consist of the organic remains which have been reduced primarily to the chemical element carbon. Carbonized fossils consist of a thin film which forms a silhouette of the original organism, and the original organic remains were typically soft tissues. Coalified fossils consist primarily of coal, and the original organic remains were typically woody in composition.
Bioimmuration occurs when a skeletal organism overgrows or otherwise subsumes another organism, preserving the latter, or an impression of it, within the skeleton. Usually it is a sessile skeletal organism, such as a bryozoan or an oyster, which grows along a substrate, covering other sessile sclerobionts. Sometimes the bioimmured organism is soft-bodied and is then preserved in negative relief as a kind of external mold. There are also cases where an organism settles on top of a living skeletal organism that grows upwards, preserving the settler in its skeleton. Bioimmuration is known in the fossil record from the Ordovician to the Recent.
Index fossils (also known as guide fossils, indicator fossils or zone fossils) are fossils used to define and identify geologic periods (or faunal stages). They work on the premise that, although different sediments may look different depending on the conditions under which they were deposited, they may include the remains of the same species of fossil. The shorter the species' time range, the more precisely different sediments can be correlated, and so rapidly evolving species' fossils are particularly valuable. The best index fossils are common, easy to identify at species level and have a broad distribution—otherwise the likelihood of finding and recognizing one in the two sediments is poor.
Trace fossils consist mainly of tracks and burrows, but also include coprolites (fossil feces) and marks left by feeding. Trace fossils are particularly significant because they represent a data source that is not limited to animals with easily fossilized hard parts, and they reflect animal behaviours. Many traces date from significantly earlier than the body fossils of animals that are thought to have been capable of making them. Whilst exact assignment of trace fossils to their makers is generally impossible, traces may for example provide the earliest physical evidence of the appearance of moderately complex animals (comparable to earthworms).
Coprolites are classified as trace fossils as opposed to body fossils, as they give evidence for the animal's behaviour (in this case, diet) rather than morphology. They were first described by William Buckland in 1829. Prior to this they were known as "fossil fir cones" and "bezoar stones." They serve a valuable purpose in paleontology because they provide direct evidence of the predation and diet of extinct organisms. Coprolites may range in size from a few millimetres to over 60 centimetres.
A transitional fossil is any fossilized remains of a life form that exhibits traits common to both an ancestral group and its derived descendant group. This is especially important where the descendant group is sharply differentiated by gross anatomy and mode of living from the ancestral group. Because of the incompleteness of the fossil record, there is usually no way to know exactly how close a transitional fossil is to the point of divergence. These fossils serve as a reminder that taxonomic divisions are human constructs that have been imposed in hindsight on a continuum of variation.
Microfossil is a descriptive term applied to fossilized plants and animals whose size is just at or below the level at which the fossil can be analyzed by the naked eye. A commonly applied cutoff point between "micro" and "macro" fossils is 1 mm. Microfossils may either be complete (or near-complete) organisms in themselves (such as the marine plankters foraminifera and coccolithophores) or component parts (such as small teeth or spores) of larger animals or plants. Microfossils are of critical importance as a reservoir of paleoclimate information, and are also commonly used by biostratigraphers to assist in the correlation of rock units.
Fossil resin (colloquially called amber) is a natural polymer found in many types of strata throughout the world, even the Arctic. The oldest fossil resin dates to the Triassic, though most dates to the Cenozoic. The excretion of the resin by certain plants is thought to be an evolutionary adaptation for protection from insects and to seal wounds. Fossil resin often contains other fossils called inclusions that were captured by the sticky resin. These include bacteria, fungi, other plants, and animals. Animal inclusions are usually small invertebrates, predominantly arthropods such as insects and spiders, and only extremely rarely a vertebrate such as a small lizard. Preservation of inclusions can be exquisite, including small fragments of DNA.
A derived, reworked or remanié fossil is a fossil found in rock that accumulated significantly later than when the fossilized animal or plant died. Reworked fossils are created by erosion exhuming (freeing) fossils from the rock formation in which they were originally deposited and their redeposition in a younger sedimentary deposit.
Fossil wood is wood that is preserved in the fossil record. Wood is usually the part of a plant that is best preserved (and most easily found). Fossil wood may or may not be petrified. The fossil wood may be the only part of the plant that has been preserved; therefore such wood may get a special kind of botanical name. This will usually include "xylon" and a term indicating its presumed affinity, such as Araucarioxylon (wood of Araucaria or some related genus), Palmoxylon (wood of an indeterminate palm), or Castanoxylon (wood of an indeterminate chinkapin).
The term subfossil can be used to refer to remains, such as bones, nests, or fecal deposits, whose fossilization process is not complete, either because the length of time since the animal involved was living is too short or because the conditions in which the remains were buried were not optimal for fossilization. Subfossils are often found in caves or other shelters where they can be preserved for thousands of years. The main importance of subfossil vs. fossil remains is that the former contain organic material, which can be used for radiocarbon dating or extraction and sequencing of DNA, protein, or other biomolecules. Additionally, isotope ratios can provide much information about the ecological conditions under which extinct animals lived. Subfossils are useful for studying the evolutionary history of an environment and can be important to studies in paleoclimatology.
Subfossils are often found in depositionary environments, such as lake sediments, oceanic sediments, and soils. Once deposited, physical and chemical weathering can alter the state of preservation, and small subfossils can also be ingested by living organisms. Subfossil remains that date from the Mesozoic are exceptionally rare, are usually in an advanced state of decay, and are consequently much disputed. The vast bulk of subfossil material comes from Quaternary sediments, including many subfossilized chironomid head capsules, ostracod carapaces, diatoms, and foraminifera.
For remains such as molluscan seashells, which frequently do not change their chemical composition over geological time, and may occasionally even retain such features as the original color markings for millions of years, the label 'subfossil' is applied to shells that are understood to be thousands of years old, but are of Holocene age, and therefore are not old enough to be from the Pleistocene epoch.
Chemical fossils, or chemofossils, are chemicals found in rocks and fossil fuels (petroleum, coal, and natural gas) that provide an organic signature for ancient life. Molecular fossils and isotope ratios represent two types of chemical fossils. The oldest traces of life on Earth are fossils of this type, including carbon isotope anomalies found in zircons that imply the existence of life as early as 4.1 billion years ago.
Paleontology seeks to map out how life evolved across geologic time. A substantial hurdle is the difficulty of working out fossil ages. Beds that preserve fossils typically lack the radioactive elements needed for radiometric dating. This technique is our only means of giving rocks greater than about 50 million years old an absolute age, and can be accurate to within 0.5% or better. Although radiometric dating requires careful laboratory work, its basic principle is simple: the rates at which various radioactive elements decay are known, and so the ratio of the radioactive element to its decay products shows how long ago the radioactive element was incorporated into the rock. Radioactive elements are common only in rocks with a volcanic origin, and so the only fossil-bearing rocks that can be dated radiometrically are volcanic ash layers, which may provide termini for the intervening sediments.
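As a rough illustration of the principle just described (and not a description of any particular laboratory's procedure), the age implied by a measured daughter-to-parent isotope ratio follows directly from the known decay constant. The decay constant and ratio in the sketch below are assumed values chosen only for the example; real potassium-argon dating, for instance, additionally corrects for the branching decay of potassium-40.

```python
import math

# Illustrative sketch of the basic radiometric age relation t = ln(1 + D/P) / lambda,
# where D is the amount of accumulated daughter isotope, P the remaining parent,
# and lambda the decay constant. All numbers here are assumptions for illustration.
DECAY_CONSTANT = 5.54e-10  # per year, roughly the total decay constant of potassium-40

def radiometric_age_years(daughter_to_parent_ratio: float,
                          decay_constant: float = DECAY_CONSTANT) -> float:
    """Age in years implied by a measured daughter/parent isotope ratio."""
    return math.log(1.0 + daughter_to_parent_ratio) / decay_constant

# A hypothetical ash layer with a measured ratio of 0.00056 comes out at ~1 million years.
print(f"{radiometric_age_years(0.00056):.2e} years")
```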
Consequently, palaeontologists rely on stratigraphy to date fossils. Stratigraphy is the science of deciphering the "layer-cake" that is the sedimentary record. Rocks normally form relatively horizontal layers, with each layer younger than the one underneath it. If a fossil is found between two layers whose ages are known, the fossil's age is claimed to lie between the two known ages. Because rock sequences are not continuous, but may be broken up by faults or periods of erosion, it is very difficult to match up rock beds that are not directly adjacent. However, fossils of species that survived for a relatively short time can be used to match isolated rocks: this technique is called biostratigraphy. For instance, the conodont Eoplacognathus pseudoplanus has a short range in the Middle Ordovician period. If rocks of unknown age have traces of E. pseudoplanus, they have a mid-Ordovician age. Such index fossils must be distinctive, be globally distributed and occupy a short time range to be useful. Misleading results are produced if the index fossils are incorrectly dated. Stratigraphy and biostratigraphy can in general provide only relative dating (A was before B), which is often sufficient for studying evolution. However, this is difficult for some time periods, because of the problems involved in matching rocks of the same age across continents. Family-tree relationships also help to narrow down the date when lineages first appeared. For instance, if fossils of B or C date to X million years ago and the calculated "family tree" says A was an ancestor of B and C, then A must have evolved earlier.
It is also possible to estimate how long ago two living clades diverged, in other words approximately how long ago their last common ancestor must have lived, by assuming that DNA mutations accumulate at a constant rate. These "molecular clocks", however, are fallible, and provide only approximate timing: for example, they are not sufficiently precise and reliable for estimating when the groups that feature in the Cambrian explosion first evolved, and estimates produced by different techniques may vary by a factor of two.
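The arithmetic behind such a molecular-clock estimate is simple; the substitution rate and genetic distance in the sketch below are invented illustrative values, not measurements, and real analyses must calibrate the rate against dated fossils and model rate variation among lineages.

```python
# Minimal molecular-clock sketch: two lineages accumulate substitutions independently
# after they diverge, so the pairwise distance corresponds to twice the elapsed time.
# Both numbers below are assumed values for illustration only.
substitution_rate = 1e-9   # substitutions per site per year (assumed constant)
pairwise_distance = 0.02   # substitutions per site between two living species

divergence_time_years = pairwise_distance / (2.0 * substitution_rate)
print(divergence_time_years)  # 10000000.0 -> ~10 million years under these assumptions
```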
Organisms are only rarely preserved as fossils in the best of circumstances, and only a fraction of such fossils have been discovered. This is illustrated by the fact that the number of species known through the fossil record is less than 5% of the number of known living species, suggesting that the number of species known through fossils must be far less than 1% of all the species that have ever lived. Because of the specialized and rare circumstances required for a biological structure to fossilize, only a small percentage of life-forms can be expected to be represented in discoveries, and each discovery represents only a snapshot of the process of evolution. The transition itself can only be illustrated and corroborated by transitional fossils, which will never demonstrate an exact half-way point.
The fossil record is strongly biased toward organisms with hard parts, leaving most groups of soft-bodied organisms with little to no fossil record. It is replete with mollusks, vertebrates, echinoderms, brachiopods and some groups of arthropods.
Fossil sites with exceptional preservation—sometimes including preserved soft tissues—are known as Lagerstätten—German for "storage places". These formations may have resulted from carcass burial in an anoxic environment with minimal bacteria, thus slowing decomposition. Lagerstätten span geological time from the Cambrian period to the present. Worldwide, some of the best examples of near-perfect fossilization are the Cambrian Maotianshan shales and Burgess Shale, the Devonian Hunsrück Slates, the Jurassic Solnhofen limestone, and the Carboniferous Mazon Creek localities.
Stromatolites are layered accretionary structures formed in shallow water by the trapping, binding and cementation of sedimentary grains by biofilms of microorganisms, especially cyanobacteria. Stromatolites provide some of the most ancient fossil records of life on Earth, dating back more than 3.5 billion years.
Stromatolites were much more abundant in Precambrian times. While older, Archean fossil remains are presumed to be colonies of cyanobacteria, younger (that is, Proterozoic) fossils may be primordial forms of the eukaryote chlorophytes (that is, green algae). One genus of stromatolite very common in the geologic record is Collenia. The earliest stromatolite of confirmed microbial origin dates to 2.724 billion years ago.
A 2009 discovery provides strong evidence of microbial stromatolites extending as far back as 3.45 billion years ago.
Stromatolites are a major constituent of the fossil record for life's first 3.5 billion years, peaking about 1.25 billion years ago. They subsequently declined in abundance and diversity, which by the start of the Cambrian had fallen to 20% of their peak. The most widely supported explanation is that stromatolite builders fell victim to grazing creatures (the Cambrian substrate revolution), implying that sufficiently complex organisms were common over 1 billion years ago.
The connection between grazer and stromatolite abundance is well documented in the younger Ordovician evolutionary radiation; stromatolite abundance also increased after the end-Ordovician and end-Permian extinctions decimated marine animals, falling back to earlier levels as marine animals recovered. Fluctuations in metazoan population and diversity may not have been the only factor in the reduction in stromatolite abundance. Factors such as the chemistry of the environment may have been responsible for changes.
While prokaryotic cyanobacteria themselves reproduce asexually through cell division, they were instrumental in priming the environment for the evolutionary development of more complex eukaryotic organisms. Cyanobacteria (as well as extremophile Gammaproteobacteria) are thought to be largely responsible for increasing the amount of oxygen in the primeval Earth's atmosphere through their continuing photosynthesis. Cyanobacteria use water, carbon dioxide and sunlight to create their food. A layer of mucus often forms over mats of cyanobacterial cells. In modern microbial mats, debris from the surrounding habitat can become trapped within the mucus, which can be cemented by the calcium carbonate to grow thin laminations of limestone. These laminations can accrete over time, resulting in the banded pattern common to stromatolites. The domal morphology of biological stromatolites is the result of the vertical growth necessary for the continued infiltration of sunlight to the organisms for photosynthesis. Layered spherical growth structures termed oncolites are similar to stromatolites and are also known from the fossil record. Thrombolites are poorly laminated or non-laminated clotted structures formed by cyanobacteria common in the fossil record and in modern sediments.
The Zebra River Canyon area of the Kubis platform in the deeply dissected Zaris Mountains of southwestern Namibia provides an extremely well exposed example of the thrombolite-stromatolite-metazoan reefs that developed during the Proterozoic period, the stromatolites here being better developed in updip locations under conditions of higher current velocities and greater sediment influx.
It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on the planet Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions.
On 24 January 2014, NASA reported that current studies by the Curiosity and Opportunity rovers on Mars will now be searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars is now a primary NASA objective.
Pseudofossils are visual patterns in rocks that are produced by geologic processes rather than biologic processes. They can easily be mistaken for real fossils. Some pseudofossils, such as geological dendrite crystals, are formed by naturally occurring fissures in the rock that get filled up by percolating minerals. Other types of pseudofossils are kidney ore (round shapes in iron ore) and moss agates, which look like moss or plant leaves. Concretions, spherical or ovoid-shaped nodules found in some sedimentary strata, were once thought to be dinosaur eggs, and are often mistaken for fossils as well.
Gathering fossils dates at least to the beginning of recorded history. The fossils themselves are referred to as the fossil record. The fossil record was one of the early sources of data underlying the study of evolution and continues to be relevant to the history of life on Earth. Paleontologists examine the fossil record to understand the process of evolution and the way particular species have evolved.
Fossils have been visible and common throughout most of natural history, and so documented human interaction with them goes back as far as recorded history, or earlier.
There are many examples of paleolithic stone knives in Europe, with fossil echinoderms set precisely at the hand grip, going all the way back to Homo heidelbergensis and Neanderthals. These ancient peoples also drilled holes through the center of those round fossil shells, apparently using them as beads for necklaces.
The ancient Egyptians gathered fossils of species that resembled the bones of modern species they worshipped. The god Set was associated with the hippopotamus, therefore fossilized bones of hippo-like species were kept in that deity's temples. Five-rayed fossil sea urchin shells were associated with the deity Sopdu, the Morning Star, equivalent of Venus in Roman mythology.
Fossils appear to have directly contributed to the mythology of many civilizations, including the ancient Greeks. Classical Greek historian Herodotos wrote of an area near Hyperborea where gryphons protected golden treasure. There was indeed gold mining in that approximate region, where beaked Protoceratops skulls were common as fossils.
A later Greek scholar, Aristotle, eventually realized that fossil seashells from rocks were similar to those found on the beach, indicating the fossils were once living animals. He had previously explained them in terms of vaporous exhalations, which Persian polymath Avicenna modified into the theory of petrifying fluids (succus lapidificatus). Recognition of fossil seashells as originating in the sea was built upon in the 14th century by Albert of Saxony, and accepted in some form by most naturalists by the 16th century.
Roman naturalist Pliny the Elder wrote of "tongue stones", which he called glossopetra. These were fossil shark teeth, thought by some classical cultures to look like the tongues of people or snakes. He also wrote about the horns of Ammon, which are fossil ammonites, whence the group of shelled octopus-cousins ultimately draws its modern name. Pliny also makes one of the earlier known references to toadstones, thought until the 18th century to be a magical cure for poison originating in the heads of toads, but which are fossil teeth from Lepidotes, a Cretaceous ray-finned fish.
The Plains tribes of North America are thought to have similarly associated fossils, such as the many intact pterosaur fossils naturally exposed in the region, with their own mythology of the thunderbird.
There is no such direct mythological connection known from prehistoric Africa, but there is considerable evidence of tribes there excavating and moving fossils to ceremonial sites, apparently treating them with some reverence.
In Japan, fossil shark teeth were associated with the mythical tengu, thought to be the razor-sharp claws of the creature, documented some time after the 8th century AD.
In medieval China, the fossil bones of ancient mammals including Homo erectus were often mistaken for "dragon bones" and used as medicine and aphrodisiacs. In addition, some of these fossil bones were collected as "art" by scholars, who left inscriptions on various artifacts indicating when they were added to a collection. One good example is the famous scholar Huang Tingjian of the Song Dynasty during the 11th century, who kept a specific seashell fossil with his own poem engraved on it. In his Dream Pool Essays published in 1088, Song dynasty Chinese scholar-official Shen Kuo hypothesized that marine fossils found in a geological stratum of mountains located hundreds of miles from the Pacific Ocean were evidence that a prehistoric seashore had once existed there and shifted over centuries of time. His observation of petrified bamboos in the dry northern climate zone of what is now Yan'an, Shaanxi province, China, led him to advance early ideas of gradual climate change, since bamboo naturally grows in wetter climates.
In medieval Christendom, fossilized sea creatures on mountainsides were seen as proof of the biblical deluge of Noah's Ark. After observing the existence of seashells in mountains, the ancient Greek philosopher Xenophanes (c. 570 – 478 BC) speculated that the world was once inundated in a great flood that buried living creatures in drying mud.
In 1027, the Persian Avicenna explained fossils' stoniness in The Book of Healing:
If what is said concerning the petrifaction of animals and plants is true, the cause of this (phenomenon) is a powerful mineralizing and petrifying virtue which arises in certain stony spots, or emanates suddenly from the earth during earthquake and subsidences, and petrifies whatever comes into contact with it. As a matter of fact, the petrifaction of the bodies of plants and animals is not more extraordinary than the transformation of waters.
From the 13th century to the present day, scholars have pointed out that the fossil skulls of Deinotherium giganteum, found in Crete and Greece, might have been interpreted as the skulls of the Cyclopes of Greek mythology, and are possibly the origin of that Greek myth. Their skulls appear to have a single eye-hole in the front, just like their modern elephant cousins, though in fact it is the opening for their trunk.
In Norse mythology, echinoderm shells (the round five-part button left over from a sea urchin) were associated with the god Thor, not only being incorporated in thunderstones, representations of Thor's hammer and subsequent hammer-shaped crosses as Christianity was adopted, but also kept in houses to garner Thor's protection.
These grew into the shepherd's crowns of English folklore, used for decoration and as good luck charms, placed by the doorway of homes and churches. In Suffolk, a different species was used as a good-luck charm by bakers, who referred to them as fairy loaves, associating them with the similarly shaped loaves of bread they baked.
More scientific views of fossils emerged during the Renaissance. Leonardo da Vinci concurred with Aristotle's view that fossils were the remains of ancient life. For example, Leonardo noticed discrepancies with the biblical flood narrative as an explanation for fossil origins:
If the Deluge had carried the shells for distances of three and four hundred miles from the sea it would have carried them mixed with various other natural objects all heaped up together; but even at such distances from the sea we see the oysters all together and also the shellfish and the cuttlefish and all the other shells which congregate together, found all together dead; and the solitary shells are found apart from one another as we see them every day on the sea-shores.
And we find oysters together in very large families, among which some may be seen with their shells still joined together, indicating that they were left there by the sea and that they were still living when the strait of Gibraltar was cut through. In the mountains of Parma and Piacenza multitudes of shells and corals with holes may be seen still sticking to the rocks....
In 1666, Nicholas Steno examined a shark, and made the association of its teeth with the "tongue stones" of ancient Greco-Roman mythology, concluding that those were not in fact the tongues of venomous snakes, but the teeth of some long-extinct species of shark.
Robert Hooke (1635–1703) included micrographs of fossils in his Micrographia and was among the first to observe fossil forams. His observations on fossils, which he stated to be the petrified remains of creatures some of which no longer existed, were published posthumously in 1705.
William Smith (1769–1839), an English canal engineer, observed that rocks of different ages (based on the law of superposition) preserved different assemblages of fossils, and that these assemblages succeeded one another in a regular and determinable order. He observed that rocks from distant locations could be correlated based on the fossils they contained. He termed this the principle of faunal succession. This principle became one of Darwin's chief pieces of evidence that biological evolution was real.
Georges Cuvier came to believe that most if not all the animal fossils he examined were remains of extinct species. This led Cuvier to become an active proponent of the geological school of thought called catastrophism. Near the end of his 1796 paper on living and fossil elephants he said:
All of these facts, consistent among themselves, and not opposed by any report, seem to me to prove the existence of a world previous to ours, destroyed by some kind of catastrophe.
Interest in fossils, and geology more generally, expanded during the early nineteenth century. In Britain, Mary Anning's discoveries of fossils, including the first complete ichthyosaur and a complete plesiosaurus skeleton, sparked both public and scholarly interest.
Early naturalists well understood the similarities and differences of living species, leading Linnaeus to develop a hierarchical classification system still in use today. Darwin and his contemporaries first linked the hierarchical structure of the tree of life with the then very sparse fossil record. Darwin eloquently described a process of descent with modification, or evolution, whereby organisms either adapt to natural and changing environmental pressures, or they perish.
When Darwin wrote On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life, the oldest animal fossils were those from the Cambrian Period, now known to be about 540 million years old. He worried about the absence of older fossils because of the implications on the validity of his theories, but he expressed hope that such fossils would be found, noting that: "only a small portion of the world is known with accuracy." Darwin also pondered the sudden appearance of many groups (i.e. phyla) in the oldest known Cambrian fossiliferous strata.
Since Darwin's time, the fossil record has been extended to between 2.3 and 3.5 billion years. Most of these Precambrian fossils are microscopic bacteria or microfossils. However, macroscopic fossils are now known from the late Proterozoic. The Ediacara biota (also called Vendian biota) dating from 575 million years ago collectively constitutes a richly diverse assembly of early multicellular eukaryotes.
The fossil record and faunal succession form the basis of the science of biostratigraphy or determining the age of rocks based on embedded fossils. For the first 150 years of geology, biostratigraphy and superposition were the only means for determining the relative age of rocks. The geologic time scale was developed based on the relative ages of rock strata as determined by the early paleontologists and stratigraphers.
Since the early years of the twentieth century, absolute dating methods, such as radiometric dating (including potassium/argon, argon/argon, uranium series, and, for very recent fossils, radiocarbon dating) have been used to verify the relative ages obtained by fossils and to provide absolute ages for many fossils. Radiometric dating has shown that the earliest known stromatolites are over 3.4 billion years old.
"The fossil record is life's evolutionary epic that unfolded over four billion years as environmental conditions and genetic potential interacted in accordance with natural selection." (The Virtual Fossil Museum)
Paleontology has joined with evolutionary biology to share the interdisciplinary task of outlining the tree of life, which inevitably leads backwards in time to Precambrian microscopic life when cell structure and functions evolved. Earth's deep time in the Proterozoic and deeper still in the Archean is only "recounted by microscopic fossils and subtle chemical signals." Molecular biologists, using phylogenetics, can compare protein amino acid or nucleotide sequence homology (i.e., similarity) to evaluate taxonomy and evolutionary distances among organisms, with limited statistical confidence. The study of fossils, on the other hand, can more specifically pinpoint when and in what organism a mutation first appeared. Phylogenetics and paleontology work together in the clarification of science's still dim view of the appearance of life and its evolution.
Niles Eldredge's study of the Phacops trilobite genus supported the hypothesis that modifications to the arrangement of the trilobite's eye lenses proceeded by fits and starts over millions of years during the Devonian. Eldredge's interpretation of the Phacops fossil record was that the aftermaths of the lens changes, but not the rapidly occurring evolutionary process itself, were fossilized. These and other data led Stephen Jay Gould and Niles Eldredge to publish their seminal paper on punctuated equilibrium in 1971.
Synchrotron X-ray tomographic analysis of early Cambrian bilaterian embryonic microfossils has yielded new insights into metazoan evolution at its earliest stages. The tomography technique provides previously unattainable three-dimensional resolution at the limits of fossilization. Fossils of two enigmatic bilaterians, the worm-like Markuelia and a putative, primitive protostome, Pseudooides, provide a peek at germ layer embryonic development. These 543-million-year-old embryos support the view that some aspects of arthropod development emerged earlier than previously thought, in the late Proterozoic. The preserved embryos from China and Siberia underwent rapid diagenetic phosphatization, resulting in exquisite preservation, including cell structures. This research is a notable example of how knowledge encoded by the fossil record continues to contribute otherwise unattainable information on the emergence and development of life on Earth. For example, the research suggests Markuelia has closest affinity to priapulid worms, and is adjacent to the evolutionary branching of Priapulida, Nematoda and Arthropoda.
Despite significant advances in uncovering and identifying paleontological specimens, it is generally accepted that the fossil record is vastly incomplete. Approaches for measuring the completeness of the fossil record have been developed for numerous subsets of species, including those grouped taxonomically, temporally, environmentally/geographically, or in sum. This encompasses the subfield of taphonomy and the study of biases in the paleontological record.
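One common family of completeness measures asks, for the taxa inferred to have been present in a time interval (because they are sampled both before and after it), what fraction is actually sampled within it. The sketch below is a toy version of such a gap-based metric, with invented occurrence data; published approaches differ in detail.

    def range_through_completeness(occurrences, interval):
        """Toy metric: of taxa whose known occurrences bracket the interval,
        the fraction actually sampled within that interval."""
        inferred_present, sampled = 0, 0
        for taxon, sampled_intervals in occurrences.items():
            if min(sampled_intervals) <= interval <= max(sampled_intervals):
                inferred_present += 1
                if interval in sampled_intervals:
                    sampled += 1
        return sampled / inferred_present if inferred_present else float("nan")

    # Hypothetical data: numbered intervals in which each taxon has been found.
    data = {"Taxon A": [1, 2, 4], "Taxon B": [1, 4], "Taxon C": [2, 3, 4]}
    print(range_through_completeness(data, 3))  # 1 of 3 taxa sampled -> 0.33...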
According to one hypothesis, a Corinthian vase from the 6th century BCE is the oldest artistic record of a vertebrate fossil, perhaps a Miocene giraffe combined with elements from other species. However, a subsequent study using artificial intelligence and expert evaluations rejects this idea, because mammals do not have the eye bones shown in the painted monster. Morphologically, the vase painting corresponds to a carnivorous reptile of the Varanidae family that still lives in regions occupied by the ancient Greeks.
Fossil trading is the practice of buying and selling fossils. It is often done illegally, with specimens stolen from research sites, costing science many important specimens each year. The problem is quite pronounced in China, where many specimens have been stolen.
Fossil collecting (sometimes, in a non-scientific sense, fossil hunting) is the collection of fossils for scientific study, hobby, or profit. Fossil collecting, as practiced by amateurs, is the predecessor of modern paleontology, and many people still collect and study fossils as amateurs. Professionals and amateurs alike collect fossils for their scientific value.
The use of fossils to address health issues is rooted in traditional medicine and includes the use of fossils as talismans. The specific fossil used to alleviate or cure an illness is often chosen for its resemblance to the symptoms or the affected organ. The usefulness of fossils as medicine is almost entirely a placebo effect, though fossil material might conceivably have some antacid activity or supply some essential minerals. The use of dinosaur bones as "dragon bones" has persisted in Traditional Chinese medicine into modern times, with mid-Cretaceous dinosaur bones being used for the purpose in Ruyang County during the early 21st century. | [
{
"paragraph_id": 0,
"text": "A fossil (from Classical Latin fossilis, lit. 'obtained by digging') is any preserved remains, impression, or trace of any once-living thing from a past geological age. Examples include bones, shells, exoskeletons, stone imprints of animals or microbes, objects preserved in amber, hair, petrified wood and DNA remnants. The totality of fossils is known as the fossil record.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Paleontology is the study of fossils: their age, method of formation, and evolutionary significance. Specimens are usually considered to be fossils if they are over 10,000 years old. The oldest fossils are around 3.48 billion years old to 4.1 billion years old. The observation in the 19th century that certain fossils were associated with certain rock strata led to the recognition of a geological timescale and the relative ages of different fossils. The development of radiometric dating techniques in the early 20th century allowed scientists to quantitatively measure the absolute ages of rocks and the fossils they host.",
"title": ""
},
{
"paragraph_id": 2,
"text": "There are many processes that lead to fossilization, including permineralization, casts and molds, authigenic mineralization, replacement and recrystallization, adpression, carbonization, and bioimmuration.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Fossils vary in size from one-micrometre (1 µm) bacteria to dinosaurs and trees, many meters long and weighing many tons. A fossil normally preserves only a portion of the deceased organism, usually that portion that was partially mineralized during life, such as the bones and teeth of vertebrates, or the chitinous or calcareous exoskeletons of invertebrates. Fossils may also consist of the marks left behind by the organism while it was alive, such as animal tracks or feces (coprolites). These types of fossil are called trace fossils or ichnofossils, as opposed to body fossils. Some fossils are biochemical and are called chemofossils or biosignatures.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Though the fossil record is incomplete, numerous studies have demonstrated that there is enough information available to give us a good understanding of the pattern of diversification of life on Earth. In addition, the record can predict and fill gaps such as the discovery of Tiktaalik in the arctic of Canada.",
"title": "Reliability"
},
{
"paragraph_id": 5,
"text": "The process of fossilization varies according to tissue type and external conditions:",
"title": "Fossilization processes"
},
{
"paragraph_id": 6,
"text": "Permineralization is a process of fossilization that occurs when an organism is buried. The empty spaces within an organism (spaces filled with liquid or gas during life) become filled with mineral-rich groundwater. Minerals precipitate from the groundwater, occupying the empty spaces. This process can occur in very small spaces, such as within the cell wall of a plant cell. Small scale permineralization can produce very detailed fossils. For permineralization to occur, the organism must become covered by sediment soon after death, otherwise the remains are destroyed by scavengers or decomposition. The degree to which the remains are decayed when covered determines the later details of the fossil. Some fossils consist only of skeletal remains or teeth; other fossils contain traces of skin, feathers or even soft tissues. This is a form of diagenesis.",
"title": "Fossilization processes"
},
{
"paragraph_id": 7,
"text": "",
"title": "Fossilization processes"
},
{
"paragraph_id": 8,
"text": "In some cases, the original remains of the organism completely dissolve or are otherwise destroyed. The remaining organism-shaped hole in the rock is called an external mold. If this void is later filled with sediment, the resulting cast resembles what the organism looked like. An endocast, or internal mold, is the result of sediments filling an organism's interior, such as the inside of a bivalve or snail or the hollow of a skull. Endocasts are sometimes termed Steinkerns, especially when bivalves are preserved this way.",
"title": "Fossilization processes"
},
{
"paragraph_id": 9,
"text": "This is a special form of cast and mold formation. If the chemistry is right, the organism (or fragment of organism) can act as a nucleus for the precipitation of minerals such as siderite, resulting in a nodule forming around it. If this happens rapidly before significant decay to the organic tissue, very fine three-dimensional morphological detail can be preserved. Nodules from the Carboniferous Mazon Creek fossil beds of Illinois, US, are among the best documented examples of such mineralization.",
"title": "Fossilization processes"
},
{
"paragraph_id": 10,
"text": "Replacement occurs when the shell, bone, or other tissue is replaced with another mineral. In some cases mineral replacement of the original shell occurs so gradually and at such fine scales that microstructural features are preserved despite the total loss of original material. A shell is said to be recrystallized when the original skeletal compounds are still present but in a different crystal form, as from aragonite to calcite.",
"title": "Fossilization processes"
},
{
"paragraph_id": 11,
"text": "Compression fossils, such as those of fossil ferns, are the result of chemical reduction of the complex organic molecules composing the organism's tissues. In this case the fossil consists of original material, albeit in a geochemically altered state. This chemical change is an expression of diagenesis. Often what remains is a carbonaceous film known as a phytoleim, in which case the fossil is known as a compression. Often, however, the phytoleim is lost and all that remains is an impression of the organism in the rock—an impression fossil. In many cases, however, compressions and impressions occur together. For instance, when the rock is broken open, the phytoleim will often be attached to one part (compression), whereas the counterpart will just be an impression. For this reason, one term covers the two modes of preservation: adpression.",
"title": "Fossilization processes"
},
{
"paragraph_id": 12,
"text": "Because of their antiquity, an unexpected exception to the alteration of an organism's tissues by chemical reduction of the complex organic molecules during fossilization has been the discovery of soft tissue in dinosaur fossils, including blood vessels, and the isolation of proteins and evidence for DNA fragments. In 2014, Mary Schweitzer and her colleagues reported the presence of iron particles (goethite-aFeO(OH)) associated with soft tissues recovered from dinosaur fossils. Based on various experiments that studied the interaction of iron in haemoglobin with blood vessel tissue they proposed that solution hypoxia coupled with iron chelation enhances the stability and preservation of soft tissue and provides the basis for an explanation for the unforeseen preservation of fossil soft tissues. However, a slightly older study based on eight taxa ranging in time from the Devonian to the Jurassic found that reasonably well-preserved fibrils that probably represent collagen were preserved in all these fossils and that the quality of preservation depended mostly on the arrangement of the collagen fibers, with tight packing favoring good preservation. There seemed to be no correlation between geological age and quality of preservation, within that timeframe.",
"title": "Fossilization processes"
},
{
"paragraph_id": 13,
"text": "Fossils that are carbonized or coalified consist of the organic remains which have been reduced primarily to the chemical element carbon. Carbonized fossils consist of a thin film which forms a silhouette of the original organism, and the original organic remains were typically soft tissues. Coalified fossils consist primarily of coal, and the original organic remains were typically woody in composition.",
"title": "Fossilization processes"
},
{
"paragraph_id": 14,
"text": "Bioimmuration occurs when a skeletal organism overgrows or otherwise subsumes another organism, preserving the latter, or an impression of it, within the skeleton. Usually it is a sessile skeletal organism, such as a bryozoan or an oyster, which grows along a substrate, covering other sessile sclerobionts. Sometimes the bioimmured organism is soft-bodied and is then preserved in negative relief as a kind of external mold. There are also cases where an organism settles on top of a living skeletal organism that grows upwards, preserving the settler in its skeleton. Bioimmuration is known in the fossil record from the Ordovician to the Recent.",
"title": "Fossilization processes"
},
{
"paragraph_id": 15,
"text": "Index fossils (also known as guide fossils, indicator fossils or zone fossils) are fossils used to define and identify geologic periods (or faunal stages). They work on the premise that, although different sediments may look different depending on the conditions under which they were deposited, they may include the remains of the same species of fossil. The shorter the species' time range, the more precisely different sediments can be correlated, and so rapidly evolving species' fossils are particularly valuable. The best index fossils are common, easy to identify at species level and have a broad distribution—otherwise the likelihood of finding and recognizing one in the two sediments is poor.",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "Trace fossils consist mainly of tracks and burrows, but also include coprolites (fossil feces) and marks left by feeding. Trace fossils are particularly significant because they represent a data source that is not limited to animals with easily fossilized hard parts, and they reflect animal behaviours. Many traces date from significantly earlier than the body fossils of animals that are thought to have been capable of making them. Whilst exact assignment of trace fossils to their makers is generally impossible, traces may for example provide the earliest physical evidence of the appearance of moderately complex animals (comparable to earthworms).",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "Coprolites are classified as trace fossils as opposed to body fossils, as they give evidence for the animal's behaviour (in this case, diet) rather than morphology. They were first described by William Buckland in 1829. Prior to this they were known as \"fossil fir cones\" and \"bezoar stones.\" They serve a valuable purpose in paleontology because they provide direct evidence of the predation and diet of extinct organisms. Coprolites may range in size from a few millimetres to over 60 centimetres.",
"title": "Types"
},
{
"paragraph_id": 18,
"text": "A transitional fossil is any fossilized remains of a life form that exhibits traits common to both an ancestral group and its derived descendant group. This is especially important where the descendant group is sharply differentiated by gross anatomy and mode of living from the ancestral group. Because of the incompleteness of the fossil record, there is usually no way to know exactly how close a transitional fossil is to the point of divergence. These fossils serve as a reminder that taxonomic divisions are human constructs that have been imposed in hindsight on a continuum of variation.",
"title": "Types"
},
{
"paragraph_id": 19,
"text": "Microfossil is a descriptive term applied to fossilized plants and animals whose size is just at or below the level at which the fossil can be analyzed by the naked eye. A commonly applied cutoff point between \"micro\" and \"macro\" fossils is 1 mm. Microfossils may either be complete (or near-complete) organisms in themselves (such as the marine plankters foraminifera and coccolithophores) or component parts (such as small teeth or spores) of larger animals or plants. Microfossils are of critical importance as a reservoir of paleoclimate information, and are also commonly used by biostratigraphers to assist in the correlation of rock units.",
"title": "Types"
},
{
"paragraph_id": 20,
"text": "Fossil resin (colloquially called amber) is a natural polymer found in many types of strata throughout the world, even the Arctic. The oldest fossil resin dates to the Triassic, though most dates to the Cenozoic. The excretion of the resin by certain plants is thought to be an evolutionary adaptation for protection from insects and to seal wounds. Fossil resin often contains other fossils called inclusions that were captured by the sticky resin. These include bacteria, fungi, other plants, and animals. Animal inclusions are usually small invertebrates, predominantly arthropods such as insects and spiders, and only extremely rarely a vertebrate such as a small lizard. Preservation of inclusions can be exquisite, including small fragments of DNA.",
"title": "Types"
},
{
"paragraph_id": 21,
"text": "A derived, reworked or remanié fossil is a fossil found in rock that accumulated significantly later than when the fossilized animal or plant died. Reworked fossils are created by erosion exhuming (freeing) fossils from the rock formation in which they were originally deposited and their redeposition in a younger sedimentary deposit.",
"title": "Types"
},
{
"paragraph_id": 22,
"text": "Fossil wood is wood that is preserved in the fossil record. Wood is usually the part of a plant that is best preserved (and most easily found). Fossil wood may or may not be petrified. The fossil wood may be the only part of the plant that has been preserved; therefore such wood may get a special kind of botanical name. This will usually include \"xylon\" and a term indicating its presumed affinity, such as Araucarioxylon (wood of Araucaria or some related genus), Palmoxylon (wood of an indeterminate palm), or Castanoxylon (wood of an indeterminate chinkapin).",
"title": "Types"
},
{
"paragraph_id": 23,
"text": "The term subfossil can be used to refer to remains, such as bones, nests, or fecal deposits, whose fossilization process is not complete, either because the length of time since the animal involved was living is too short or because the conditions in which the remains were buried were not optimal for fossilization. Subfossils are often found in caves or other shelters where they can be preserved for thousands of years. The main importance of subfossil vs. fossil remains is that the former contain organic material, which can be used for radiocarbon dating or extraction and sequencing of DNA, protein, or other biomolecules. Additionally, isotope ratios can provide much information about the ecological conditions under which extinct animals lived. Subfossils are useful for studying the evolutionary history of an environment and can be important to studies in paleoclimatology.",
"title": "Types"
},
{
"paragraph_id": 24,
"text": "Subfossils are often found in depositionary environments, such as lake sediments, oceanic sediments, and soils. Once deposited, physical and chemical weathering can alter the state of preservation, and small subfossils can also be ingested by living organisms. Subfossil remains that date from the Mesozoic are exceptionally rare, are usually in an advanced state of decay, and are consequently much disputed. The vast bulk of subfossil material comes from Quaternary sediments, including many subfossilized chironomid head capsules, ostracod carapaces, diatoms, and foraminifera.",
"title": "Types"
},
{
"paragraph_id": 25,
"text": "For remains such as molluscan seashells, which frequently do not change their chemical composition over geological time, and may occasionally even retain such features as the original color markings for millions of years, the label 'subfossil' is applied to shells that are understood to be thousands of years old, but are of Holocene age, and therefore are not old enough to be from the Pleistocene epoch.",
"title": "Types"
},
{
"paragraph_id": 26,
"text": "Chemical fossils, or chemofossils, are chemicals found in rocks and fossil fuels (petroleum, coal, and natural gas) that provide an organic signature for ancient life. Molecular fossils and isotope ratios represent two types of chemical fossils. The oldest traces of life on Earth are fossils of this type, including carbon isotope anomalies found in zircons that imply the existence of life as early as 4.1 billion years ago.",
"title": "Types"
},
{
"paragraph_id": 27,
"text": "Paleontology seeks to map out how life evolved across geologic time. A substantial hurdle is the difficulty of working out fossil ages. Beds that preserve fossils typically lack the radioactive elements needed for radiometric dating. This technique is our only means of giving rocks greater than about 50 million years old an absolute age, and can be accurate to within 0.5% or better. Although radiometric dating requires careful laboratory work, its basic principle is simple: the rates at which various radioactive elements decay are known, and so the ratio of the radioactive element to its decay products shows how long ago the radioactive element was incorporated into the rock. Radioactive elements are common only in rocks with a volcanic origin, and so the only fossil-bearing rocks that can be dated radiometrically are volcanic ash layers, which may provide termini for the intervening sediments.",
"title": "Dating"
},
{
"paragraph_id": 28,
"text": "Consequently, palaeontologists rely on stratigraphy to date fossils. Stratigraphy is the science of deciphering the \"layer-cake\" that is the sedimentary record. Rocks normally form relatively horizontal layers, with each layer younger than the one underneath it. If a fossil is found between two layers whose ages are known, the fossil's age is claimed to lie between the two known ages. Because rock sequences are not continuous, but may be broken up by faults or periods of erosion, it is very difficult to match up rock beds that are not directly adjacent. However, fossils of species that survived for a relatively short time can be used to match isolated rocks: this technique is called biostratigraphy. For instance, the conodont Eoplacognathus pseudoplanus has a short range in the Middle Ordovician period. If rocks of unknown age have traces of E. pseudoplanus, they have a mid-Ordovician age. Such index fossils must be distinctive, be globally distributed and occupy a short time range to be useful. Misleading results are produced if the index fossils are incorrectly dated. Stratigraphy and biostratigraphy can in general provide only relative dating (A was before B), which is often sufficient for studying evolution. However, this is difficult for some time periods, because of the problems involved in matching rocks of the same age across continents. Family-tree relationships also help to narrow down the date when lineages first appeared. For instance, if fossils of B or C date to X million years ago and the calculated \"family tree\" says A was an ancestor of B and C, then A must have evolved earlier.",
"title": "Dating"
},
{
"paragraph_id": 29,
"text": "It is also possible to estimate how long ago two living clades diverged, in other words approximately how long ago their last common ancestor must have lived, by assuming that DNA mutations accumulate at a constant rate. These \"molecular clocks\", however, are fallible, and provide only approximate timing: for example, they are not sufficiently precise and reliable for estimating when the groups that feature in the Cambrian explosion first evolved, and estimates produced by different techniques may vary by a factor of two.",
"title": "Dating"
},
{
"paragraph_id": 30,
"text": "Organisms are only rarely preserved as fossils in the best of circumstances, and only a fraction of such fossils have been discovered. This is illustrated by the fact that the number of species known through the fossil record is less than 5% of the number of known living species, suggesting that the number of species known through fossils must be far less than 1% of all the species that have ever lived. Because of the specialized and rare circumstances required for a biological structure to fossilize, only a small percentage of life-forms can be expected to be represented in discoveries, and each discovery represents only a snapshot of the process of evolution. The transition itself can only be illustrated and corroborated by transitional fossils, which will never demonstrate an exact half-way point.",
"title": "Dating"
},
{
"paragraph_id": 31,
"text": "The fossil record is strongly biased toward organisms with hard-parts, leaving most groups of soft-bodied organisms with little to no role. It is replete with the mollusks, the vertebrates, the echinoderms, the brachiopods and some groups of arthropods.",
"title": "Dating"
},
{
"paragraph_id": 32,
"text": "Fossil sites with exceptional preservation—sometimes including preserved soft tissues—are known as Lagerstätten—German for \"storage places\". These formations may have resulted from carcass burial in an anoxic environment with minimal bacteria, thus slowing decomposition. Lagerstätten span geological time from the Cambrian period to the present. Worldwide, some of the best examples of near-perfect fossilization are the Cambrian Maotianshan shales and Burgess Shale, the Devonian Hunsrück Slates, the Jurassic Solnhofen limestone, and the Carboniferous Mazon Creek localities.",
"title": "Sites"
},
{
"paragraph_id": 33,
"text": "Stromatolites are layered accretionary structures formed in shallow water by the trapping, binding and cementation of sedimentary grains by biofilms of microorganisms, especially cyanobacteria. Stromatolites provide some of the most ancient fossil records of life on Earth, dating back more than 3.5 billion years ago.",
"title": "Stromatolites"
},
{
"paragraph_id": 34,
"text": "Stromatolites were much more abundant in Precambrian times. While older, Archean fossil remains are presumed to be colonies of cyanobacteria, younger (that is, Proterozoic) fossils may be primordial forms of the eukaryote chlorophytes (that is, green algae). One genus of stromatolite very common in the geologic record is Collenia. The earliest stromatolite of confirmed microbial origin dates to 2.724 billion years ago.",
"title": "Stromatolites"
},
{
"paragraph_id": 35,
"text": "A 2009 discovery provides strong evidence of microbial stromatolites extending as far back as 3.45 billion years ago.",
"title": "Stromatolites"
},
{
"paragraph_id": 36,
"text": "Stromatolites are a major constituent of the fossil record for life's first 3.5 billion years, peaking about 1.25 billion years ago. They subsequently declined in abundance and diversity, which by the start of the Cambrian had fallen to 20% of their peak. The most widely supported explanation is that stromatolite builders fell victims to grazing creatures (the Cambrian substrate revolution), implying that sufficiently complex organisms were common over 1 billion years ago.",
"title": "Stromatolites"
},
{
"paragraph_id": 37,
"text": "The connection between grazer and stromatolite abundance is well documented in the younger Ordovician evolutionary radiation; stromatolite abundance also increased after the end-Ordovician and end-Permian extinctions decimated marine animals, falling back to earlier levels as marine animals recovered. Fluctuations in metazoan population and diversity may not have been the only factor in the reduction in stromatolite abundance. Factors such as the chemistry of the environment may have been responsible for changes.",
"title": "Stromatolites"
},
{
"paragraph_id": 38,
"text": "While prokaryotic cyanobacteria themselves reproduce asexually through cell division, they were instrumental in priming the environment for the evolutionary development of more complex eukaryotic organisms. Cyanobacteria (as well as extremophile Gammaproteobacteria) are thought to be largely responsible for increasing the amount of oxygen in the primeval Earth's atmosphere through their continuing photosynthesis. Cyanobacteria use water, carbon dioxide and sunlight to create their food. A layer of mucus often forms over mats of cyanobacterial cells. In modern microbial mats, debris from the surrounding habitat can become trapped within the mucus, which can be cemented by the calcium carbonate to grow thin laminations of limestone. These laminations can accrete over time, resulting in the banded pattern common to stromatolites. The domal morphology of biological stromatolites is the result of the vertical growth necessary for the continued infiltration of sunlight to the organisms for photosynthesis. Layered spherical growth structures termed oncolites are similar to stromatolites and are also known from the fossil record. Thrombolites are poorly laminated or non-laminated clotted structures formed by cyanobacteria common in the fossil record and in modern sediments.",
"title": "Stromatolites"
},
{
"paragraph_id": 39,
"text": "The Zebra River Canyon area of the Kubis platform in the deeply dissected Zaris Mountains of southwestern Namibia provides an extremely well exposed example of the thrombolite-stromatolite-metazoan reefs that developed during the Proterozoic period, the stromatolites here being better developed in updip locations under conditions of higher current velocities and greater sediment influx.",
"title": "Stromatolites"
},
{
"paragraph_id": 40,
"text": "It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on the planet Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions.",
"title": "Astrobiology"
},
{
"paragraph_id": 41,
"text": "On 24 January 2014, NASA reported that current studies by the Curiosity and Opportunity rovers on Mars will now be searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars is now a primary NASA objective.",
"title": "Astrobiology"
},
{
"paragraph_id": 42,
"text": "Pseudofossils are visual patterns in rocks that are produced by geologic processes rather than biologic processes. They can easily be mistaken for real fossils. Some pseudofossils, such as geological dendrite crystals, are formed by naturally occurring fissures in the rock that get filled up by percolating minerals. Other types of pseudofossils are kidney ore (round shapes in iron ore) and moss agates, which look like moss or plant leaves. Concretions, spherical or ovoid-shaped nodules found in some sedimentary strata, were once thought to be dinosaur eggs, and are often mistaken for fossils as well.",
"title": "Pseudofossils"
},
{
"paragraph_id": 43,
"text": "Gathering fossils dates at least to the beginning of recorded history. The fossils themselves are referred to as the fossil record. The fossil record was one of the early sources of data underlying the study of evolution and continues to be relevant to the history of life on Earth. Paleontologists examine the fossil record to understand the process of evolution and the way particular species have evolved.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 44,
"text": "Fossils have been visible and common throughout most of natural history, and so documented human interaction with them goes back as far as recorded history, or earlier.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 45,
"text": "There are many examples of paleolithic stone knives in Europe, with fossil echinoderms set precisely at the hand grip, going all the way back to Homo heidelbergensis and Neanderthals. These ancient peoples also drilled holes through the center of those round fossil shells, apparently using them as beads for necklaces.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 46,
"text": "The ancient Egyptians gathered fossils of species that resembled the bones of modern species they worshipped. The god Set was associated with the hippopotamus, therefore fossilized bones of hippo-like species were kept in that deity's temples. Five-rayed fossil sea urchin shells were associated with the deity Sopdu, the Morning Star, equivalent of Venus in Roman mythology.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 47,
"text": "Fossils appear to have directly contributed to the mythology of many civilizations, including the ancient Greeks. Classical Greek historian Herodotos wrote of an area near Hyperborea where gryphons protected golden treasure. There was indeed gold mining in that approximate region, where beaked Protoceratops skulls were common as fossils.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 48,
"text": "A later Greek scholar, Aristotle, eventually realized that fossil seashells from rocks were similar to those found on the beach, indicating the fossils were once living animals. He had previously explained them in terms of vaporous exhalations, which Persian polymath Avicenna modified into the theory of petrifying fluids (succus lapidificatus). Recognition of fossil seashells as originating in the sea was built upon in the 14th century by Albert of Saxony, and accepted in some form by most naturalists by the 16th century.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 49,
"text": "Roman naturalist Pliny the Elder wrote of \"tongue stones\", which he called glossopetra. These were fossil shark teeth, thought by some classical cultures to look like the tongues of people or snakes. He also wrote about the horns of Ammon, which are fossil ammonites, whence the group of shelled octopus-cousins ultimately draws its modern name. Pliny also makes one of the earlier known references to toadstones, thought until the 18th century to be a magical cure for poison originating in the heads of toads, but which are fossil teeth from Lepidotes, a Cretaceous ray-finned fish.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 50,
"text": "The Plains tribes of North America are thought to have similarly associated fossils, such as the many intact pterosaur fossils naturally exposed in the region, with their own mythology of the thunderbird.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 51,
"text": "There is no such direct mythological connection known from prehistoric Africa, but there is considerable evidence of tribes there excavating and moving fossils to ceremonial sites, apparently treating them with some reverence.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 52,
"text": "In Japan, fossil shark teeth were associated with the mythical tengu, thought to be the razor-sharp claws of the creature, documented some time after the 8th century AD.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 53,
"text": "In medieval China, the fossil bones of ancient mammals including Homo erectus were often mistaken for \"dragon bones\" and used as medicine and aphrodisiacs. In addition, some of these fossil bones are collected as \"art\" by scholars, who left scripts on various artifacts, indicating the time they were added to a collection. One good example is the famous scholar Huang Tingjian of the Song Dynasty during the 11th century, who kept a specific seashell fossil with his own poem engraved on it. In his Dream Pool Essays published in 1088, Song dynasty Chinese scholar-official Shen Kuo hypothesized that marine fossils found in a geological stratum of mountains located hundreds of miles from the Pacific Ocean was evidence that a prehistoric seashore had once existed there and shifted over centuries of time. His observation of petrified bamboos in the dry northern climate zone of what is now Yan'an, Shaanxi province, China, led him to advance early ideas of gradual climate change due to bamboo naturally growing in wetter climate areas.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 54,
"text": "In medieval Christendom, fossilized sea creatures on mountainsides were seen as proof of the biblical deluge of Noah's Ark. After observing the existence of seashells in mountains, the ancient Greek philosopher Xenophanes (c. 570 – 478 BC) speculated that the world was once inundated in a great flood that buried living creatures in drying mud.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 55,
"text": "In 1027, the Persian Avicenna explained fossils' stoniness in The Book of Healing:",
"title": "History of the study of fossils"
},
{
"paragraph_id": 56,
"text": "If what is said concerning the petrifaction of animals and plants is true, the cause of this (phenomenon) is a powerful mineralizing and petrifying virtue which arises in certain stony spots, or emanates suddenly from the earth during earthquake and subsidences, and petrifies whatever comes into contact with it. As a matter of fact, the petrifaction of the bodies of plants and animals is not more extraordinary than the transformation of waters.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 57,
"text": "From the 13th century to the present day, scholars pointed out that the fossil skulls of Deinotherium giganteum, found in Crete and Greece, might have been interpreted as being the skulls of the Cyclopes of Greek mythology, and are possibly the origin of that Greek myth. Their skulls appear to have a single eye-hole in the front, just like their modern elephant cousins, though in fact it's actually the opening for their trunk.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 58,
"text": "In Norse mythology, echinoderm shells (the round five-part button left over from a sea urchin) were associated with the god Thor, not only being incorporated in thunderstones, representations of Thor's hammer and subsequent hammer-shaped crosses as Christianity was adopted, but also kept in houses to garner Thor's protection.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 59,
"text": "These grew into the shepherd's crowns of English folklore, used for decoration and as good luck charms, placed by the doorway of homes and churches. In Suffolk, a different species was used as a good-luck charm by bakers, who referred to them as fairy loaves, associating them with the similarly shaped loaves of bread they baked.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 60,
"text": "More scientific views of fossils emerged during the Renaissance. Leonardo da Vinci concurred with Aristotle's view that fossils were the remains of ancient life. For example, Leonardo noticed discrepancies with the biblical flood narrative as an explanation for fossil origins:",
"title": "History of the study of fossils"
},
{
"paragraph_id": 61,
"text": "If the Deluge had carried the shells for distances of three and four hundred miles from the sea it would have carried them mixed with various other natural objects all heaped up together; but even at such distances from the sea we see the oysters all together and also the shellfish and the cuttlefish and all the other shells which congregate together, found all together dead; and the solitary shells are found apart from one another as we see them every day on the sea-shores.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 62,
"text": "And we find oysters together in very large families, among which some may be seen with their shells still joined together, indicating that they were left there by the sea and that they were still living when the strait of Gibraltar was cut through. In the mountains of Parma and Piacenza multitudes of shells and corals with holes may be seen still sticking to the rocks....",
"title": "History of the study of fossils"
},
{
"paragraph_id": 63,
"text": "In 1666, Nicholas Steno examined a shark, and made the association of its teeth with the \"tongue stones\" of ancient Greco-Roman mythology, concluding that those were not in fact the tongues of venomous snakes, but the teeth of some long-extinct species of shark.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 64,
"text": "Robert Hooke (1635–1703) included micrographs of fossils in his Micrographia and was among the first to observe fossil forams. His observations on fossils, which he stated to be the petrified remains of creatures some of which no longer existed, were published posthumously in 1705.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 65,
"text": "William Smith (1769–1839), an English canal engineer, observed that rocks of different ages (based on the law of superposition) preserved different assemblages of fossils, and that these assemblages succeeded one another in a regular and determinable order. He observed that rocks from distant locations could be correlated based on the fossils they contained. He termed this the principle of faunal succession. This principle became one of Darwin's chief pieces of evidence that biological evolution was real.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 66,
"text": "Georges Cuvier came to believe that most if not all the animal fossils he examined were remains of extinct species. This led Cuvier to become an active proponent of the geological school of thought called catastrophism. Near the end of his 1796 paper on living and fossil elephants he said:",
"title": "History of the study of fossils"
},
{
"paragraph_id": 67,
"text": "All of these facts, consistent among themselves, and not opposed by any report, seem to me to prove the existence of a world previous to ours, destroyed by some kind of catastrophe.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 68,
"text": "Interest in fossils, and geology more generally, expanded during the early nineteenth century. In Britain, Mary Anning's discoveries of fossils, including the first complete ichthyosaur and a complete plesiosaurus skeleton, sparked both public and scholarly interest.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 69,
"text": "Early naturalists well understood the similarities and differences of living species leading Linnaeus to develop a hierarchical classification system still in use today. Darwin and his contemporaries first linked the hierarchical structure of the tree of life with the then very sparse fossil record. Darwin eloquently described a process of descent with modification, or evolution, whereby organisms either adapt to natural and changing environmental pressures, or they perish.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 70,
"text": "When Darwin wrote On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life, the oldest animal fossils were those from the Cambrian Period, now known to be about 540 million years old. He worried about the absence of older fossils because of the implications on the validity of his theories, but he expressed hope that such fossils would be found, noting that: \"only a small portion of the world is known with accuracy.\" Darwin also pondered the sudden appearance of many groups (i.e. phyla) in the oldest known Cambrian fossiliferous strata.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 71,
"text": "Since Darwin's time, the fossil record has been extended to between 2.3 and 3.5 billion years. Most of these Precambrian fossils are microscopic bacteria or microfossils. However, macroscopic fossils are now known from the late Proterozoic. The Ediacara biota (also called Vendian biota) dating from 575 million years ago collectively constitutes a richly diverse assembly of early multicellular eukaryotes.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 72,
"text": "The fossil record and faunal succession form the basis of the science of biostratigraphy or determining the age of rocks based on embedded fossils. For the first 150 years of geology, biostratigraphy and superposition were the only means for determining the relative age of rocks. The geologic time scale was developed based on the relative ages of rock strata as determined by the early paleontologists and stratigraphers.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 73,
"text": "Since the early years of the twentieth century, absolute dating methods, such as radiometric dating (including potassium/argon, argon/argon, uranium series, and, for very recent fossils, radiocarbon dating) have been used to verify the relative ages obtained by fossils and to provide absolute ages for many fossils. Radiometric dating has shown that the earliest known stromatolites are over 3.4 billion years old.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 74,
"text": "The fossil record is life's evolutionary epic that unfolded over four billion years as environmental conditions and genetic potential interacted in accordance with natural selection.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 75,
"text": "The Virtual Fossil Museum",
"title": "History of the study of fossils"
},
{
"paragraph_id": 76,
"text": "Paleontology has joined with evolutionary biology to share the interdisciplinary task of outlining the tree of life, which inevitably leads backwards in time to Precambrian microscopic life when cell structure and functions evolved. Earth's deep time in the Proterozoic and deeper still in the Archean is only \"recounted by microscopic fossils and subtle chemical signals.\" Molecular biologists, using phylogenetics, can compare protein amino acid or nucleotide sequence homology (i.e., similarity) to evaluate taxonomy and evolutionary distances among organisms, with limited statistical confidence. The study of fossils, on the other hand, can more specifically pinpoint when and in what organism a mutation first appeared. Phylogenetics and paleontology work together in the clarification of science's still dim view of the appearance of life and its evolution.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 77,
"text": "Niles Eldredge's study of the Phacops trilobite genus supported the hypothesis that modifications to the arrangement of the trilobite's eye lenses proceeded by fits and starts over millions of years during the Devonian. Eldredge's interpretation of the Phacops fossil record was that the aftermaths of the lens changes, but not the rapidly occurring evolutionary process, were fossilized. This and other data led Stephen Jay Gould and Niles Eldredge to publish their seminal paper on punctuated equilibrium in 1971.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 78,
"text": "Synchrotron X-ray tomographic analysis of early Cambrian bilaterian embryonic microfossils yielded new insights of metazoan evolution at its earliest stages. The tomography technique provides previously unattainable three-dimensional resolution at the limits of fossilization. Fossils of two enigmatic bilaterians, the worm-like Markuelia and a putative, primitive protostome, Pseudooides, provide a peek at germ layer embryonic development. These 543-million-year-old embryos support the emergence of some aspects of arthropod development earlier than previously thought in the late Proterozoic. The preserved embryos from China and Siberia underwent rapid diagenetic phosphatization resulting in exquisite preservation, including cell structures. This research is a notable example of how knowledge encoded by the fossil record continues to contribute otherwise unattainable information on the emergence and development of life on Earth. For example, the research suggests Markuelia has closest affinity to priapulid worms, and is adjacent to the evolutionary branching of Priapulida, Nematoda and Arthropoda.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 79,
"text": "Despite significant advances in uncovering and identifying paleontological specimens, it is generally accepted that the fossil record is vastly incomplete. Approaches for measuring the completeness of the fossil record have been developed for numerous subsets of species, including those grouped taxonomically, temporally, environmentally/geographically, or in sum. This encompasses the subfield of taphonomy and the study of biases in the paleontological record.",
"title": "History of the study of fossils"
},
{
"paragraph_id": 80,
"text": "According to one hypothesis, a Corinthian vase from the 6th century BCE is the oldest artistic record of a vertebrate fossil, perhaps a Miocene giraffe combined with elements from other species. However, a subsequent study using artificial intelligence and expert evaluations reject this idea, because mammals do not have the eye bones shown in the painted monster. Morphologically, the vase painting correspond to a carnivorous reptile of the Varanidae family that still lives in regions occupied by the ancient Greek.",
"title": "Art"
},
{
"paragraph_id": 81,
"text": "Fossil trading is the practice of buying and selling fossils. This is many times done illegally with artifacts stolen from research sites, costing many important scientific specimens each year. The problem is quite pronounced in China, where many specimens have been stolen.",
"title": "Trading and collecting"
},
{
"paragraph_id": 82,
"text": "Fossil collecting (sometimes, in a non-scientific sense, fossil hunting) is the collection of fossils for scientific study, hobby, or profit. Fossil collecting, as practiced by amateurs, is the predecessor of modern paleontology and many still collect fossils and study fossils as amateurs. Professionals and amateurs alike collect fossils for their scientific value.",
"title": "Trading and collecting"
},
{
"paragraph_id": 83,
"text": "The use of fossils to address health issues is rooted in traditional medicine and include the use of fossils as talismans. The specific fossil to use to alleviate or cure an illness is often based on its resemblance to the symptoms or affected organ. The usefulness of fossils as medicine is almost entirely a placebo effect, though fossil material might conceivably have some antacid activity or supply some essential minerals. The use of dinosaur bones as \"dragon bones\" has persisted in Traditional Chinese medicine into modern times, with mid-Cretaceous dinosaur bones being used for the purpose in Ruyang County during the early 21st century.",
"title": "As medicine"
}
]
| A fossil is any preserved remains, impression, or trace of any once-living thing from a past geological age. Examples include bones, shells, exoskeletons, stone imprints of animals or microbes, objects preserved in amber, hair, petrified wood and DNA remnants. The totality of fossils is known as the fossil record. Paleontology is the study of fossils: their age, method of formation, and evolutionary significance. Specimens are usually considered to be fossils if they are over 10,000 years old. The oldest fossils are around 3.48 billion years old to 4.1 billion years old. The observation in the 19th century that certain fossils were associated with certain rock strata led to the recognition of a geological timescale and the relative ages of different fossils. The development of radiometric dating techniques in the early 20th century allowed scientists to quantitatively measure the absolute ages of rocks and the fossils they host. There are many processes that lead to fossilization, including permineralization, casts and molds, authigenic mineralization, replacement and recrystallization, adpression, carbonization, and bioimmuration. Fossils vary in size from one-micrometre (1 µm) bacteria to dinosaurs and trees, many meters long and weighing many tons. A fossil normally preserves only a portion of the deceased organism, usually that portion that was partially mineralized during life, such as the bones and teeth of vertebrates, or the chitinous or calcareous exoskeletons of invertebrates. Fossils may also consist of the marks left behind by the organism while it was alive, such as animal tracks or feces (coprolites). These types of fossil are called trace fossils or ichnofossils, as opposed to body fossils. Some fossils are biochemical and are called chemofossils or biosignatures. | 2001-08-08T07:05:45Z | 2023-12-19T18:21:03Z | [
"Template:Wiktionary",
"Template:In Our Time",
"Template:Curlie",
"Template:Toclimit",
"Template:Anchor",
"Template:Main",
"Template:Cite NIE",
"Template:Div col",
"Template:Page needed",
"Template:Cite encyclopedia",
"Template:Redirect",
"Template:Clear-right",
"Template:Commons category",
"Template:Cite news",
"Template:Paleontology",
"Template:Portal",
"Template:Cite magazine",
"Template:Quote box",
"Template:Citation needed",
"Template:Succession box",
"Template:Rp",
"Template:Death",
"Template:Reflist",
"Template:Sprotect2",
"Template:Pp-move",
"Template:Lit",
"Template:Div col end",
"Template:Short description",
"Template:Lang",
"Template:Wikiquote",
"Template:Cite journal",
"Template:Pp",
"Template:Sc",
"Template:Sfn",
"Template:Technical inline",
"Template:S-start",
"Template:Wikibooks",
"Template:Authority control",
"Template:ISBN",
"Template:Citation",
"Template:Cite Americana",
"Template:Use dmy dates",
"Template:Clear",
"Template:Blockquote",
"Template:S-end",
"Template:Cite web",
"Template:Webarchive",
"Template:Further",
"Template:See also",
"Template:Multiple image",
"Template:Cite book",
"Template:Wikt-lang",
"Template:Clear-left",
"Template:Annotated link"
]
| https://en.wikipedia.org/wiki/Fossil |
10,960 | Family Educational Rights and Privacy Act | The Family Educational Rights and Privacy Act of 1974 (FERPA or the Buckley Amendment) is a United States federal law that governs access to educational information and records by public entities such as potential employers, publicly funded educational institutions, and foreign governments. The act is also referred to as the Buckley Amendment, after one of its proponents, Senator James L. Buckley of New York.
FERPA is a U.S. federal law that regulates access to and disclosure of student education records. It grants parents access to their child's records, allows amendments, and controls disclosure. After a student turns 18, their consent is generally required for disclosure. The law applies to institutions receiving U.S. Department of Education funds and provides privacy rights to students 18 years or older, or those in post-secondary institutions. Disclosure is permitted to parents of dependent students, and medical records are usually protected under FERPA rather than HIPAA. The law has faced criticism for being used to conceal non-educational public records.
FERPA gives parents access to their child's education records, an opportunity to seek to have the records amended, and some control over the disclosure of information from the records. With several exceptions, schools must have a student's consent prior to the disclosure of education records after that student is 18 years old. The law applies only to educational agencies and institutions that receive funds under a program administered by the U.S. Department of Education.
Other regulations under this Act, effective starting January 3, 2012, allow for greater disclosures of personal and directory student identifying information and regulate disclosure of student IDs and e-mail addresses. For example, schools may provide external companies with a student's personally identifiable information without the student's consent. Conversely, tying student directory information to other information may result in a violation, as the combination creates an education record.
Examples of situations affected by FERPA include school employees divulging information to anyone other than the student about the student's grades or behavior, and school work posted on a bulletin board with a grade. Generally, schools must have written permission from the parent or eligible student in order to release any information from a student's education record.
This privacy policy also governs how state agencies transmit testing data to federal agencies, such as the Education Data Exchange Network.
This U.S. federal law also gives students 18 years of age or older, or students of any age enrolled in a post-secondary educational institution, the right of privacy regarding grades, enrollment, and even billing information unless the school has the student's permission to share that specific type of information.
FERPA also permits a school to disclose personally identifiable information from education records of an "eligible student" (a student age 18 or older or enrolled in a postsecondary institution at any age) to his or her parents if the student is a dependent "student" as that term is defined in Section 152 of the Internal Revenue Code. Generally, if either parent has claimed the student as a dependent on the parent's most recent U.S. Federal income tax return, the school may non-consensually disclose the student's education records to both parents.
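As a highly simplified illustration of the rule just described (not legal guidance; the function and its inputs are invented for this sketch and ignore FERPA's many other exceptions), the eligibility and dependency conditions can be expressed as a small decision check:

    def may_disclose_to_parents_without_consent(age, enrolled_postsecondary,
                                                claimed_as_tax_dependent):
        """Simplified sketch of the dependent-student disclosure rule described
        above; real FERPA analysis involves many additional exceptions."""
        eligible_student = age >= 18 or enrolled_postsecondary
        if not eligible_student:
            return True  # rights have not yet transferred to the student
        return claimed_as_tax_dependent

    print(may_disclose_to_parents_without_consent(19, True, True))   # True
    print(may_disclose_to_parents_without_consent(19, True, False))  # False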
The law gave students who apply to an educational institution, such as a graduate school, the right to view recommendations submitted by others as part of the application. On standard application forms, students are given the option to waive this right.
FERPA specifically excludes employees of an educational institution if they are not students.
FERPA now serves as a guide for communicating higher-education and privacy issues, including sexual assault and campus safety. It provides a framework for addressing the needs of certain populations in higher education.
The citing of FERPA to conceal public records that are not "educational" in nature has been widely criticized, including criticism by the Act's primary Senate sponsor. For example, in the Owasso Independent School District v. Falvo case, an important part of the debate was determining the relationship between peer-grading and "education records" as defined in FERPA. The plaintiffs argued "that allowing students to score each other's tests [...] as the teachers explain the correct answers to the entire class [...] embarrassed [...] children", but they lost in a summary judgment by the district court. The Court of Appeals ruled that students placing grades on the work of other students made such work into an "education record." Thus, peer-grading was determined to be a violation of FERPA privacy policies because students had access to other students' academic performance without full consent. However, on appeal to the Supreme Court, it was unanimously ruled that peer-grading was not a violation of FERPA. This is because a grade written on a student's work does not become an "education record" until the teacher writes the final grade into a grade book.
Legal experts have debated the issue of whether student medical records (e.g. records of therapy sessions with a therapist at an on-campus counseling center) might be released to the school administration under certain triggering events, such as when a student sued his college or university.
Usually, student medical treatment records will remain under the protection of FERPA, not the Health Insurance Portability and Accountability Act (HIPAA). This is due to the "FERPA Exception" written within HIPAA. | [
{
"paragraph_id": 0,
"text": "The Family Educational Rights and Privacy Act of 1974 (FERPA or the Buckley Amendment) is a United States federal law that governs the access to educational information and records by public entities such as potential employers, publicly funded educational institutions, and foreign governments. The act is also referred to as the Buckley Amendment, for one of its proponents, Senator James L. Buckley of New York.",
"title": ""
},
{
"paragraph_id": 1,
"text": "FERPA is a U.S. federal law that regulates access and disclosure of student education records. It grants parents access to their child's records, allows amendments, and controls disclosure. After a student turns 18, their consent is generally required for disclosure. The law applies to institutions receiving U.S. Department of Education funds and provides privacy rights to students 18 years or older, or those in post-secondary institutions. Disclosure is permitted to parents of dependent students, and medical records are usually protected under FERPA rather than HIPAA. The law has faced criticism for concealing non-educational public records.",
"title": ""
},
{
"paragraph_id": 2,
"text": "FERPA gives parents access to their child's education records, an opportunity to seek to have the records amended, and some control over the disclosure of information from the records. With several exceptions, schools must have a student's consent prior to the disclosure of education records after that student is 18 years old. The law applies only to educational agencies and institutions that receive funds under a program administered by the U.S. Department of Education.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "Other regulations under this Act, effective starting January 3, 2012, allow for greater disclosures of personal and directory student identifying information and regulate disclosure of student IDs and e-mail addresses. For example, schools may provide external companies with a student's personally identifiable information without the student's consent. Conversely, tying student directory information to other information may result in a violation, as the combination creates an education record.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "Examples of situations affected by FERPA include school employees divulging information to anyone other than the student about the student's grades or behavior, and school work posted on a bulletin board with a grade. Generally, schools must have written permission from the parent or eligible student in order to release any information from a student's education record.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "This privacy policy also governs how state agencies transmit testing data to federal agencies, such as the Education Data Exchange Network.",
"title": "Overview"
},
{
"paragraph_id": 6,
"text": "This U.S. federal law also gave students 18 years of age or older, or students of any age if enrolled in any post-secondary educational institution, the right of privacy regarding grades, enrollment, and even billing information unless the school has specific permission from the student to share that specific type of information.",
"title": "Overview"
},
{
"paragraph_id": 7,
"text": "FERPA also permits a school to disclose personally identifiable information from education records of an \"eligible student\" (a student age 18 or older or enrolled in a postsecondary institution at any age) to his or her parents if the student is a dependent \"student\" as that term is defined in Section 152 of the Internal Revenue Code. Generally, if either parent has claimed the student as a dependent on the parent's most recent U.S. Federal income tax return, the school may non-consensually disclose the student's education records to both parents.",
"title": "Overview"
},
{
"paragraph_id": 8,
"text": "The law allowed students who apply to an educational institution such as graduate school permission to view recommendations submitted by others as part of the application. On standard application forms, students are given the option to waive this right.",
"title": "Overview"
},
{
"paragraph_id": 9,
"text": "FERPA specifically excludes employees of an educational institution if they are not students.",
"title": "Overview"
},
{
"paragraph_id": 10,
"text": "FERPA is now a guide to communicating higher education issues and privacy issues that include sexual assault and campus safety. It provides a framework on addressing needs of certain populations in higher education.",
"title": "Overview"
},
{
"paragraph_id": 11,
"text": "The citing of FERPA to conceal public records that are not \"educational\" in nature has been widely criticized, including criticism by the Act's primary Senate sponsor. For example, in the Owasso Independent School District v. Falvo case, an important part of the debate was determining the relationship between peer-grading and \"education records\" as defined in FERPA. The plaintiffs argued \"that allowing students to score each other's tests [...] as the teachers explain the correct answers to the entire class [...] embarrassed [...] children\", but they lost in a summary judgment by the district court. The Court of Appeals, ruled that students placing grades on the work of other students made such work into an \"education record.\" Thus, peer-grading was determined as a violation of FERPA privacy policies because students had access to other students' academic performance without full consent. However, on appeal to the Supreme Court, it was unanimously ruled that peer-grading was not a violation of FERPA. This is because a grade written on a student's work does not become an \"education record\" until the teacher writes the final grade into a grade book.",
"title": "Access to public records"
},
{
"paragraph_id": 12,
"text": "Legal experts have debated the issue of whether student medical records (e.g. records of therapy sessions with a therapist at an on-campus counseling center) might be released to the school administration under certain triggering events, such as when a student sued his college or university.",
"title": "Student medical records"
},
{
"paragraph_id": 13,
"text": "Usually, student medical treatment records will remain under the protection of FERPA, not the Health Insurance Portability and Accountability Act (HIPAA). This is due to the \"FERPA Exception\" written within HIPAA.",
"title": "Student medical records"
}
]
| The Family Educational Rights and Privacy Act of 1974 is a United States federal law that governs the access to educational information and records by public entities such as potential employers, publicly funded educational institutions, and foreign governments. The act is also referred to as the Buckley Amendment, for one of its proponents, Senator James L. Buckley of New York. FERPA is a U.S. federal law that regulates access and disclosure of student education records. It grants parents access to their child's records, allows amendments, and controls disclosure. After a student turns 18, their consent is generally required for disclosure. The law applies to institutions receiving U.S. Department of Education funds and provides privacy rights to students 18 years or older, or those in post-secondary institutions. Disclosure is permitted to parents of dependent students, and medical records are usually protected under FERPA rather than HIPAA. The law has faced criticism for concealing non-educational public records. | 2001-08-09T07:08:47Z | 2023-09-27T15:29:37Z | [
"Template:Short description",
"Template:Reflist",
"Template:UnitedStatesCode",
"Template:Cite web",
"Template:More citations needed",
"Template:Infobox U.S. legislation",
"Template:Cite news",
"Template:Cite journal",
"Template:Patriot Act",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Family_Educational_Rights_and_Privacy_Act |
10,963 | Forgetting | Forgetting or disremembering is the apparent loss or modification of information already encoded and stored in an individual's short or long-term memory. It is a spontaneous or gradual process in which old memories are unable to be recalled from memory storage. Problems with remembering, learning and retaining new information are a few of the most common complaints of older adults. Studies show that retention improves with increased rehearsal. This improvement occurs because rehearsal helps to transfer information into long-term memory.
Forgetting curves (amount remembered as a function of time since an event was first experienced) have been extensively analyzed. The most recent evidence suggests that a power function provides the closest mathematical fit to the forgetting function.
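To make the shape of this fit concrete, one commonly cited parameterization (offered here only as an illustrative sketch, since the exact functional form and parameter values vary between studies) expresses the proportion retained R as a power function of the time t elapsed since learning:

\[ R(t) = a\,(1 + b\,t)^{-c}, \qquad a, b, c > 0 \]

where a reflects the initial degree of learning and b and c control how steeply retention falls off. A curve of this shape drops rapidly at first and then levels off, which is why a power function tends to fit observed forgetting data more closely than a simple exponential decay.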
Failing to retrieve an event does not mean that this specific event has been forever forgotten. Research has shown that there are a few health behaviors that can, to some extent, prevent forgetting from happening so often. One of the simplest ways to keep the brain healthy and prevent forgetting is to stay active and exercise. Staying active is important because it keeps the body healthy overall. When the body is healthy, the brain is healthy and less inflamed as well. Older adults who were more active were found to have had fewer episodes of forgetting than older adults who were less active. A healthy diet can also contribute to a healthier brain and aging process, which in turn results in less frequent forgetting.
One of the first to study the mechanisms of forgetting was the German psychologist Hermann Ebbinghaus (1885). Using himself as the sole subject in his experiment, he memorized lists of three letter nonsense syllable words—two consonants and one vowel in the middle. He then measured his own capacity to relearn a given list of words after a variety of given time periods. He found that forgetting occurs in a systematic manner, beginning rapidly and then leveling off. Although his methods were primitive, his basic premises have held true today and have been reaffirmed by more methodologically sound methods. The Ebbinghaus forgetting curve is the name given to his plotted results, from which he drew two conclusions: first, that much of what we forget is lost soon after it is originally learned; and second, that the amount of forgetting eventually levels off.
Around the same time Ebbinghaus developed the forgetting curve, psychologist Sigmund Freud theorized that people intentionally forgot things in order to push bad thoughts and feelings deep into their unconscious, a process he called "repression". There is debate as to whether (or how often) memory repression really occurs and mainstream psychology holds that true memory repression occurs only very rarely.
One process model for memory was proposed by Richard Atkinson and Richard Shiffrin in the 1960s as a way to explain the operation of memory. This modal model of memory, also known as the Atkinson-Shiffrin model of memory, suggests there are three types of memory: sensory memory, short-term memory, and long-term memory. Each type of memory is separate in its capacity and duration. In the modal model, how quickly information is forgotten is related to the type of memory where that information is stored. Information in the first stage, sensory memory, is forgotten after only a few seconds. In the second stage, short-term memory, information is forgotten after about 20 seconds. While information in long-term memory can be remembered for minutes or even decades, it may be forgotten when the retrieval processes for that information fail.
Concerning unwanted memories, modern terminology divides motivated forgetting into unconscious repression (which is disputed) and conscious thought suppression.
Forgetting can be measured in different ways all of which are based on recall:
For this type of measurement, a participant has to identify material that was previously learned. The participant is asked to remember a list of material. Later on they are shown the same list of material with additional information and they are asked to identify the material that was on the original list. The more they recognize, the less information is forgotten.
Free recall is a basic paradigm used to study human memory. In a free recall task, a subject is presented a list of to-be-remembered items, one at a time. For example, an experimenter might read a list of 20 words aloud, presenting a new word to the subject every 4 seconds. At the end of the presentation of the list, the subject is asked to recall the items (e.g., by writing down as many items from the list as possible). It is called a free recall task because the subject is free to recall the items in any order that he or she desires.
Prompted recall is a slight variation of free recall that consists of presenting hints or prompts to increase the likelihood that the behavior will be produced. Usually these prompts are stimuli that were not there during the training period. Thus in order to measure the degree of forgetting, one can see how many prompts the subject misses or the number of prompts required to produce the behavior.
This method measures forgetting by the amount of training required to reach the previous level of performance. German psychologist Hermann Ebbinghaus (1885) used this method on himself. He memorized lists of nonsensical syllables until he could repeat the list two times without error. After a certain interval, he relearned the list and saw how long it would take him to do this task. If it took fewer times, then there had been less forgetting. His experiment was one of the first to study forgetting.
Participants are given a list of words that they have to remember. Then they are shown the same list of material with additional information and are asked to identify the material that was on the original list. The more they recognize, the less information is forgotten.
The four main theories of forgetting apparent in the study of psychology are as follows:
Cue-dependent forgetting (also, context-dependent forgetting) or retrieval failure, is the failure to recall a memory due to missing stimuli or cues that were present at the time the memory was encoded. Encoding is the first step in creating and remembering a memory. How well something has been encoded in the memory can be measured by completing specific tests of retrieval. Examples of these tests would be explicit ones like cued recall or implicit tests like word fragment completion. Cue-dependent forgetting is one of five cognitive psychology theories of forgetting. This theory states that a memory is sometimes temporarily forgotten purely because it cannot be retrieved, but the proper cue can bring it to mind. A good metaphor for this is searching for a book in a library without the reference number, title, author or even subject. The information still exists, but without these cues retrieval is unlikely. Furthermore, a good retrieval cue must be consistent with the original encoding of the information. If the sound of the word is emphasized during the encoding process, the cue that should be used should also put emphasis on the phonetic quality of the word. Information is available however, just not readily available without these cues. Depending on the age of a person, retrieval cues and skills may not work as well. This is usually common in older adults but that is not always the case. When information is encoded into the memory and retrieved with a technique called spaced retrieval, this helps older adults retrieve the events stored in the memory better. There is also evidence from different studies that show age related changes in memory. These specific studies have shown that episodic memory performance does in fact decline with age and have made known that older adults produce vivid rates of forgetting when two items are combined and not encoded.
Forgetting that occurs through physiological damage or dilapidation to the brain are referred to as organic causes of forgetting. These theories encompass the loss of information already retained in long-term memory or the inability to encode new information again. Examples include Alzheimer's, amnesia, dementia, consolidation theory and the gradual slowing down of the central nervous system due to aging.
Interference theory refers to the idea that the learning of something new causes forgetting of older material on the basis of competition between the two. This essentially states that memory's information may become confused or combined with other information during encoding, resulting in the distortion or disruption of memories. In nature, the interfering items are said to originate from an overstimulating environment. Interference theory exists in three branches: proactive, retroactive and output. Retroactive and proactive inhibition are each defined in contrast to the other. Retroactive interference is when new information (memories) interferes with older information. On the other hand, proactive interference is when old information interferes with the retrieval of new information. This is sometimes thought to occur especially when memories are similar. Output interference occurs when the initial act of recalling specific information interferes with the retrieval of the original information. Another reason why retrieval failure occurs is encoding failure: the information never made it to long-term memory storage. According to the level of processing theory, how well information is encoded depends on the level of processing a piece of information receives. Certain parts of information are better encoded than others; for example, information with visual imagery or survival value is more easily transferred to long-term memory storage. This theory shows a contradiction: an extremely intelligent individual is expected to forget more hastily than one who has a slow mentality, because an intelligent individual has stored up more memories, which cause interference and impair the ability to recall specific information. Based on current research, testing of interference has only been carried out by recalling from a list of words rather than using situations from daily life, so it is hard to generalize the findings of this theory. It has been found that interference-related tasks decreased memory performance by up to 20%, with negative effects at all interference time points and large variability between participants concerning both the time point and the size of maximal interference. Furthermore, fast learners seem to be more affected by interference than slow learners. People are also less likely to recall items when intervening stimuli are presented within the first ten minutes after learning. Recall performance is better without interference. Peripheral processes such as encoding time, recognition memory and motor execution decline with age, whereas proactive interference remains similar, suggesting, contrary to earlier reports, that the inhibitory processes observed with this paradigm remain intact in older adults.
Decay theory states that when something new is learned, a neurochemical, physical "memory trace" is formed in the brain and over time this trace tends to disintegrate, unless it is occasionally used. Decay theory holds that we eventually forget something or an event because the memory of it fades with time. If we do not attempt to look back at an event, then the greater the interval between the time the event happened and the time we try to remember it, the more the memory will have faded. On this view, time is the factor with the greatest impact on remembering an event.
Trace decay theory explains memories that are stored in both the short-term and long-term memory systems, and assumes that the memories leave a trace in the brain. According to this theory, short-term memory (STM) can only retain information for a limited amount of time, around 15 to 30 seconds, unless it is rehearsed. If it is not rehearsed, the information will start to gradually fade away and decay. Donald Hebb proposed that incoming information causes a series of neurons to create a neurological memory trace in the brain, which would result in morphological and/or chemical changes in the brain and would fade with time. Repeated firing causes a structural change in the synapses. Rehearsal of repeated firing maintains the memory in STM until a structural change is made. Therefore, forgetting happens as a result of automatic decay of the memory trace in the brain. This theory states that the events between learning and recall have no effect on recall; the important factor is the duration for which the information has been retained. Hence, as more time passes, more traces are subject to decay and, as a result, the information is forgotten.
One major problem with this theory is that in real-life situations, the time between encoding a piece of information and recalling it is going to be filled with all different kinds of events that might happen to the individual. Therefore, it is difficult to conclude that forgetting is a result of only the time duration. It is also important to consider the effectiveness of this theory. Although it seems very plausible, it is nearly impossible to test, as it is difficult to create a situation where there is a blank period of time between presenting the material and recalling it later.
This theory is supposedly contradicted by the fact that one is able to ride a bike even after not having done so for decades. "Flashbulb memories" are another piece of seemingly contradicting evidence. It is believed that certain memories "trace decay" while others do not. Sleep is believed to play a key role in halting trace decay, although the exact mechanism of this is unknown.
Physical and chemical changes in our brain lead to a memory trace, and this is based on the idea of the trace theory of memory. Information that gets into our short-term memory lasts a few seconds (15–20 seconds), and it fades away if it is not rehearsed or practiced, as the neurochemical memory trace disappears rapidly. According to the trace decay theory of forgetting, what occurs between the creation of new memories and the recall of these memories does not influence the recall. However, the time between these events (memory formation and recalling) decides whether the information can be kept or forgotten. There is an inverse correlation: if the time is short, more information can be recalled; if the time is long, less information can be recalled and more information will be forgotten. This theory can be criticized for not explaining how some memories can stay while others fade, even though a long time passed between their formation and recall. Novelty plays a crucial role in this situation; for instance, people are more likely to recall their very first day abroad than all of the intervening days between it and living there. Emotions also play a crucial role.
Forgetting can have very different causes than simply removal of stored content. Forgetting can mean access problems, availability problems, or can have other reasons such as amnesia caused by an accident.
An inability to forget can cause distress, as with post-traumatic stress disorder and hyperthymesia (in which people have an extremely detailed autobiographical memory).
Psychologists have called attention to "social aspects of forgetting". Though often loosely defined, social amnesia is generally considered to be the opposite of collective memory. "Social amnesia" was first discussed by Russell Jacoby, yet his use of the term was restricted to a narrow approach, which was limited to what he perceived to be a relative neglect of psychoanalytical theory in psychology. The cultural historian Peter Burke suggested that "it may be worth investigating the social organization of forgetting, the rules of exclusion, suppression or repression, and the question of who wants whom to forget what". In an in-depth historical study spanning two centuries, Guy Beiner proposed the term "social forgetting", which he distinguished from crude notions of "collective amnesia" and "total oblivion", arguing that "social forgetting is to be found in the interface of public silence and more private remembrance". The philosopher Walter Benjamin sees social forgetting closely linked to the question of present-day interests, arguing that "every image of the past that is not recognized by the present as one of its own concerns threatens to disappear irretrievably". Building on this, the sociologist David Leupold argued in the context of competing national narratives that what is suppressed and forgotten in one national narrative "might appear at the core of past narrations by the other" - thus often leading to diametrically opposed, mutually exclusive accounts on the past. | [
{
"paragraph_id": 0,
"text": "Forgetting or disremembering is the apparent loss or modification of information already encoded and stored in an individual's short or long-term memory. It is a spontaneous or gradual process in which old memories are unable to be recalled from memory storage. Problems with remembering, learning and retaining new information are a few of the most common complaints of older adults. Studies show that retention improves with increased rehearsal. This improvement occurs because rehearsal helps to transfer information into long-term memory.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Forgetting curves (amount remembered as a function of time since an event was first experienced) have been extensively analyzed. The most recent evidence suggests that a power function provides the closest mathematical fit to the forgetting function.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Failing to retrieve an event does not mean that this specific event has been forever forgotten. Research has shown that there are a few health behaviors that to some extent can prevent forgetting from happening so often. One of the simplest ways to keep the brain healthy and prevent forgetting is to stay active and exercise. Staying active is important because overall it keeps the body healthy. When the body is healthy the brain is healthy and less inflamed as well. Older adults who were more active were found to have had less episodes of forgetting compared to those older adults who were less active. A healthy diet can also contribute to a healthier brain and aging process which in turn results in less frequent forgetting.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "One of the first to study the mechanisms of forgetting was the German psychologist Hermann Ebbinghaus (1885). Using himself as the sole subject in his experiment, he memorized lists of three letter nonsense syllable words—two consonants and one vowel in the middle. He then measured his own capacity to relearn a given list of words after a variety of given time period. He found that forgetting occurs in a systematic manner, beginning rapidly and then leveling off. Although his methods were primitive, his basic premises have held true today and have been reaffirmed by more methodologically sound methods. The Ebbinghaus forgetting curve is the name of his results which he plotted out and made 2 conclusions. The first being that much of what we forget is lost soon after it is originally learned. The second being that the amount of forgetting eventually levels off.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Around the same time Ebbinghaus developed the forgetting curve, psychologist Sigmund Freud theorized that people intentionally forgot things in order to push bad thoughts and feelings deep into their unconscious, a process he called \"repression\". There is debate as to whether (or how often) memory repression really occurs and mainstream psychology holds that true memory repression occurs only very rarely.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "One process model for memory was proposed by Richard Atkinson and Richard Shiffrin in the 1960s as a way to explain the operation of memory. This modal model of memory, also known as the Atkinson-Shiffrin model of memory, suggests there are three types of memory: sensory memory, short-term memory, and long-term memory. Each type of memory is separate in its capacity and duration. In the modal model, how quickly information is forgotten is related to the type of memory where that information is stored. Information in the first stage, sensory memory, is forgotten after only a few seconds. In the second stage, short-term memory, information is forgotten after about 20 years. While information in long-term memory can be remembered for minutes or even decades, it may be forgotten when the retrieval processes for that information fail.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Concerning unwanted memories, modern terminology divides motivated forgetting into unconscious repression (which is disputed) and conscious thought suppression.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Forgetting can be measured in different ways all of which are based on recall:",
"title": "Measurements"
},
{
"paragraph_id": 8,
"text": "For this type of measurement, a participant has to identify material that was previously learned. The participant is asked to remember a list of material. Later on they are shown the same list of material with additional information and they are asked to identify the material that was on the original list. The more they recognize, the less information is forgotten.",
"title": "Measurements"
},
{
"paragraph_id": 9,
"text": "Free recall is a basic paradigm used to study human memory. In a free recall task, a subject is presented a list of to-be-remembered items, one at a time. For example, an experimenter might read a list of 20 words aloud, presenting a new word to the subject every 4 seconds. At the end of the presentation of the list, the subject is asked to recall the items (e.g., by writing down as many items from the list as possible). It is called a free recall task because the subject is free to recall the items in any order that he or she desires.",
"title": "Measurements"
},
{
"paragraph_id": 10,
"text": "Prompted recall is a slight variation of free recall that consists of presenting hints or prompts to increase the likelihood that the behavior will be produced. Usually these prompts are stimuli that were not there during the training period. Thus in order to measure the degree of forgetting, one can see how many prompts the subject misses or the number of prompts required to produce the behavior.",
"title": "Measurements"
},
{
"paragraph_id": 11,
"text": "This method measures forgetting by the amount of training required to reach the previous level of performance. German psychologist Hermann Ebbinghaus (1885) used this method on himself. He memorized lists of nonsensical syllables until he could repeat the list two times without error. After a certain interval, he relearned the list and saw how long it would take him to do this task. If it took fewer times, then there had been less forgetting. His experiment was one of the first to study forgetting.",
"title": "Measurements"
},
{
"paragraph_id": 12,
"text": "Participants are given a list of words and that they have to remember. Then they are shown the same list of material with additional information and they are asked to identify the material that was on the original list. The more they recognize, the less information is forgotten.",
"title": "Measurements"
},
{
"paragraph_id": 13,
"text": "The four main theories of forgetting apparent in the study of psychology are as follows:",
"title": "Theories"
},
{
"paragraph_id": 14,
"text": "Cue-dependent forgetting (also, context-dependent forgetting) or retrieval failure, is the failure to recall a memory due to missing stimuli or cues that were present at the time the memory was encoded. Encoding is the first step in creating and remembering a memory. How well something has been encoded in the memory can be measured by completing specific tests of retrieval. Examples of these tests would be explicit ones like cued recall or implicit tests like word fragment completion. Cue-dependent forgetting is one of five cognitive psychology theories of forgetting. This theory states that a memory is sometimes temporarily forgotten purely because it cannot be retrieved, but the proper cue can bring it to mind. A good metaphor for this is searching for a book in a library without the reference number, title, author or even subject. The information still exists, but without these cues retrieval is unlikely. Furthermore, a good retrieval cue must be consistent with the original encoding of the information. If the sound of the word is emphasized during the encoding process, the cue that should be used should also put emphasis on the phonetic quality of the word. Information is available however, just not readily available without these cues. Depending on the age of a person, retrieval cues and skills may not work as well. This is usually common in older adults but that is not always the case. When information is encoded into the memory and retrieved with a technique called spaced retrieval, this helps older adults retrieve the events stored in the memory better. There is also evidence from different studies that show age related changes in memory. These specific studies have shown that episodic memory performance does in fact decline with age and have made known that older adults produce vivid rates of forgetting when two items are combined and not encoded.",
"title": "Theories"
},
{
"paragraph_id": 15,
"text": "Forgetting that occurs through physiological damage or dilapidation to the brain are referred to as organic causes of forgetting. These theories encompass the loss of information already retained in long-term memory or the inability to encode new information again. Examples include Alzheimer's, amnesia, dementia, consolidation theory and the gradual slowing down of the central nervous system due to aging.",
"title": "Theories"
},
{
"paragraph_id": 16,
"text": "Interference theory refers to the idea that when the learning of something new causes forgetting of older material on the basis of competition between the two. This essentially states that memory's information may become confused or combined with other information during encoding, resulting in the distortion or disruption of memories. In nature, the interfering items are said to originate from an overstimulating environment. Interference theory exists in three branches: Proactive, Retroactive and Output. Retroactive and Proactive inhibition each referring in contrast to the other. Retroactive interference is when new information (memories) interferes with older information. On the other hand, proactive interference is when old information interferes with the retrieval of new information. This is sometimes thought to occur especially when memories are similar. Output Interference occurs when the initial act of recalling specific information interferes with the retrieval of the original information. Another reason why retrieval failure occurs is due to encoding failure. The information never made it to long-term memory storage. According to the level of processing theory, how well information is encoded depends on the level of processing a piece of information receives. Certain parts of information are better encoded than others; for example, information this visual imagery or that has a survival value is more easily transferred to the long-term memory storage. This theory shows a contradiction: an extremely intelligent individual is expected to forget more hastily than one who has a slow mentality. For this reason, an intelligent individual has stored up more memory in his mind which will cause interferences and impair their ability to recall specific information. Based on current research, testing interference has only been carried out by recalling from a list of words rather than using situation from daily lives, thus it's hard to generalize the findings for this theory. It has been found that interference related tasks decreased memory performance by up to 20%, with negative effects at all interference time points and large variability between participants concerning both the time point and the size of maximal interference. Furthermore, fast learners seem to be more affected by interference than slow learners. People are also less likely to recall items when intervening stimuli are presented within the first ten minutes after learning. Recall performance is better without interference. Peripheral processes such as encoding time, recognition memory and motor execution decline with age. However proactive interference is similar. Suggesting contrary to earlier reports that the inhibitory processes observed with this paradigm remain intact in older adults.",
"title": "Theories"
},
{
"paragraph_id": 17,
"text": "Decay theory states that when something new is learned, a neurochemical, physical \"memory trace\" is formed in the brain and over time this trace tends to disintegrate, unless it is occasionally used. Decay theory states the reason we eventually forget something or an event is because the memory of it fades with time. If we do not attempt to look back at an event, the greater the interval time between the time when the event from happening and the time when we try to remember, the memory will start to fade. Time is the greatest impact in remembering an event.",
"title": "Theories"
},
{
"paragraph_id": 18,
"text": "Trace decay theory explains memories that are stored in both short-term and long-term memory system, and assumes that the memories leave a trace in the brain. According to this theory, short-term memory (STM) can only retain information for a limited amount of time, around 15 to 30 seconds unless it is rehearsed. If it is not rehearsed, the information will start to gradually fade away and decay. Donald Hebb proposed that incoming information causes a series of neurons to create a neurological memory trace in the brain which would result in change in the morphological and/or chemical changes in the brain and would fade with time. Repeated firing causes a structural change in the synapses. Rehearsal of repeated firing maintains the memory in STM until a structural change is made. Therefore, forgetting happens as a result of automatic decay of the memory trace in brain. This theory states that the events between learning and recall have no effects on recall; the important factor that affects is the duration that the information has been retained. Hence, as longer time passes more of traces are subject to decay and as a result the information is forgotten.",
"title": "Theories"
},
{
"paragraph_id": 19,
"text": "One major problem about this theory is that in real-life situation, the time between encoding a piece of information and recalling it, is going to be filled with all different kinds of events that might happen to the individual. Therefore, it is difficult to conclude that forgetting is a result of only the time duration. It is also important to consider the effectiveness of this theory. Although it seems very plausible, it is about impossible to test. It is difficult to create a situation where there is a blank period of time between presenting the material and recalling it later.",
"title": "Theories"
},
{
"paragraph_id": 20,
"text": "This theory is supposedly contradicted by the fact that one is able to ride a bike even after not having done so for decades. \"Flashbulb memories\" are another piece of seemingly contradicting evidence. It is believed that certain memories \"trace decay\" while others do not. Sleep is believed to play a key role in halting trace decay, although the exact mechanism of this is unknown.",
"title": "Theories"
},
{
"paragraph_id": 21,
"text": "Physical and chemical changes in our brain lead to a memory trace, and this is based on the idea of the trace theory of memory. Information that gets into our short-term memory lasts a few seconds (15–20 seconds), and it fades away if it is not rehearsed or practiced as the neurochemical memory trace disappears rapidly. According to the trace decay theory of forgetting, what occurs between the creation of new memories and the recall of these memories is not influenced by the recall. However, the time between these events (memory formation and recalling) decides whether the information can be kept or forgotten. As there is an inverse correlation that if the time is short, more information can be recalled. On the other hand, if the time is long less information can be recalled or more information will be forgotten. This theory can be criticized for not sharing ideas on how some memories can stay and others can fade, though there was a long time between the formation and recall. Newness to something plays a crucial role in this situation. For instance, people are more likely to recall their very first day abroad than all of the intervening days between it and living there. Emotions also play a crucial role in this situation.",
"title": "Theories"
},
{
"paragraph_id": 22,
"text": "Forgetting can have very different causes than simply removal of stored content. Forgetting can mean access problems, availability problems, or can have other reasons such as amnesia caused by an accident.",
"title": "Impairments and lack of forgetting"
},
{
"paragraph_id": 23,
"text": "An inability to forget can cause distress, as with post-traumatic stress disorder and hyperthymesia (in which people have an extremely detailed autobiographical memory).",
"title": "Impairments and lack of forgetting"
},
{
"paragraph_id": 24,
"text": "Psychologists have called attention to \"social aspects of forgetting\". Though often loosely defined, social amnesia is generally considered to be the opposite of collective memory. \"Social amnesia\" was first discussed by Russell Jacoby, yet his use of the term was restricted to a narrow approach, which was limited to what he perceived to be a relative neglect of psychoanalytical theory in psychology. The cultural historian Peter Burke suggested that \"it may be worth investigating the social organization of forgetting, the rules of exclusion, suppression or repression, and the question of who wants whom to forget what\". In an in-depth historical study spanning two centuries, Guy Beiner proposed the term \"social forgetting\", which he distinguished from crude notions of \"collective amnesia\" and \"total oblivion\", arguing that \"social forgetting is to be found in the interface of public silence and more private remembrance\". The philosopher Walter Benjamin sees social forgetting closely linked to the question of present-day interests, arguing that \"every image of the past that is not recognized by the present as one of its own concerns threatens to disappear irretrievably\". Building on this, the sociologist David Leupold argued in the context of competing national narratives that what is suppressed and forgotten in one national narrative \"might appear at the core of past narrations by the other\" - thus often leading to diametrically opposed, mutually exclusive accounts on the past.",
"title": "Social forgetting"
}
]
| Forgetting or disremembering is the apparent loss or modification of information already encoded and stored in an individual's short or long-term memory. It is a spontaneous or gradual process in which old memories are unable to be recalled from memory storage. Problems with remembering, learning and retaining new information are a few of the most common complaints of older adults. Studies show that retention improves with increased rehearsal. This improvement occurs because rehearsal helps to transfer information into long-term memory. Forgetting curves have been extensively analyzed. The most recent evidence suggests that a power function provides the closest mathematical fit to the forgetting function. | 2002-02-25T15:43:11Z | 2023-11-30T14:59:42Z | [
"Template:Infobox medical condition",
"Template:See also",
"Template:Citation needed",
"Template:Main",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite web",
"Template:Col div",
"Template:Div col end",
"Template:Cite book",
"Template:Cite news",
"Template:Wiktionary",
"Template:Memory",
"Template:Authority control",
"Template:Cn",
"Template:Wikiquote",
"Template:Short description",
"Template:Citation"
]
| https://en.wikipedia.org/wiki/Forgetting |
10,965 | Fay Wray | Vina Fay Wray (September 15, 1907 – August 8, 2004) was a Canadian-American actress best known for starring as Ann Darrow in the 1933 film King Kong. Through an acting career that spanned nearly six decades, Wray attained international recognition as an actress in horror films. She has been dubbed one of the early "scream queens".
After appearing in minor film roles, Wray gained media attention after being selected as one of the "WAMPAS Baby Stars" in 1926. This led to her being contracted to Paramount Pictures as a teenager, where she made more than a dozen feature films. After leaving Paramount, she signed deals with various film companies, being cast in her first horror film roles, in addition to many other types of roles, including in The Bowery (1933) and Viva Villa! (1934), both of which starred Wallace Beery. For RKO Radio Pictures, Inc., Wray starred in the film she is most identified with, King Kong (1933). After the success of King Kong, she made numerous appearances in both film and television, retiring in 1980.
Wray was born on a ranch near Cardston, Alberta, to parents who were members of the Church of Jesus Christ of Latter-day Saints, Elvina Marguerite Jones, who was from Salt Lake City, Utah, and Joseph Heber Wray, who was from Kingston upon Hull, England. She was one of six children and was a granddaughter of LDS pioneer Daniel Webster Jones. Her ancestors came from England, Scotland, Ireland and Wales. Wray was never baptized a member of The Church of Jesus Christ of Latter-day Saints.
Her family returned to the United States a few years after she was born; they moved to Salt Lake City in 1912 and moved to Lark, Utah, in 1914. In 1919, the Wray family returned to Salt Lake City, and then relocated to Hollywood, where Fay attended Hollywood High School.
In 1923, Wray appeared in her first film at the age of 16, when she landed a role in a short historical film sponsored by a local newspaper. In the 1920s, Wray landed a major role in the silent film The Coast Patrol (1925), as well as uncredited bit parts at the Hal Roach Studios.
In 1926, the Western Association of Motion Picture Advertisers selected Wray as one of the "WAMPAS Baby Stars", a group of women whom they believed to be on the threshold of movie stardom. She was at the time under contract to Universal Studios, mostly co-starring in low-budget Westerns opposite Buck Jones.
The following year, Wray was signed to a contract with Paramount Pictures. In 1926, director Erich von Stroheim cast her as the main female lead in his film The Wedding March, released by Paramount two years later. While the film was noted for its high budget and production values, it was a financial failure. It also gave Wray her first lead role. Wray stayed with Paramount to make more than a dozen films and made the transition from silent films to "talkies".
After leaving Paramount, Wray signed with other film studios. Under these deals, Wray was cast in several horror films, including Doctor X (1932) and Mystery of the Wax Museum (1933). However, her best known films were produced under her deal with RKO Radio Pictures. Her first film with RKO was The Most Dangerous Game (1932), co-starring Joel McCrea. The production was filmed at night on the same jungle sets that were being used for King Kong during the day, and with Wray and Robert Armstrong starring in both movies.
The Most Dangerous Game was followed by the release of Wray's best remembered film, King Kong. According to Wray, Jean Harlow had been RKO's original choice, but because MGM put Harlow under exclusive contract during the pre-production phase of the film, she became unavailable. Wray was approached by director Merian C. Cooper to play the blonde captive of King Kong, the role of Ann Darrow, for which she was paid $10,000 ($200,000 in 2022 dollars). The film was a commercial success, and Wray was reportedly proud that the film saved RKO from bankruptcy.
Wray continued to star in films, including The Richest Girl in the World, but by the early 1940s, her appearances became less frequent. She retired in 1942 after her second marriage but due to financial exigencies she soon resumed her acting career, and over the next three decades, Wray appeared in several films and appeared frequently on television. Wray portrayed Catherine Morrison in the 1953–54 sitcom The Pride of the Family with Natalie Wood playing her daughter. Wray appeared in Queen Bee, released in 1955.
Wray appeared in three episodes of Perry Mason: "The Case of the Prodigal Parent" (1958); "The Case of the Watery Witness" (1959), as murder victim Lorna Thomas; and "The Case of the Fatal Fetish" (1965), as voodoo practitioner Mignon Germaine. In 1959, Wray was cast as Tula Marsh in the episode "The Second Happiest Day" of Playhouse 90. Other roles around this time were in the episodes "Dip in the Pool" (1958) and "The Morning After" of CBS's Alfred Hitchcock Presents. In 1960, she appeared as Clara in an episode of 77 Sunset Strip, "Who Killed Cock Robin?" Another 1960 role was that of Mrs. Staunton, with Gigi Perreau as her daughter, in the episode "Flight from Terror" of The Islanders.
Wray appeared in a 1961 episode of The Real McCoys titled "Theatre in the Barn". In 1963, she played Mrs. Brubaker in The Eleventh Hour episode "You're So Smart, Why Can't You Be Good?". She ended her acting career with the 1980 made-for-television film Gideon's Trumpet.
In 1988, she published her autobiography On the Other Hand. In her later years, Wray continued to make public appearances. In 1991, she was crowned Queen of the Beaux Arts Ball, presiding with King Herbert Huncke.
She was approached by James Cameron to play the part of Rose Dawson Calvert for his blockbuster Titanic (1997), with Kate Winslet set to play her younger self, but she turned down the role, which was subsequently portrayed by Gloria Stuart in an Oscar-nominated performance. She was a special guest at the 70th Academy Awards, where the show's host Billy Crystal introduced her as the "Beauty who charmed the Beast." She was the only 1920s Hollywood actress in attendance that evening. On October 3, 1998, she appeared at the Pine Bluff Film Festival, which showed The Wedding March with live orchestral accompaniment.
In January 2003, the 95-year-old Wray appeared at the 2003 Palm Beach International Film Festival to celebrate the Rick McKay documentary film Broadway: The Golden Age, by the Legends Who Were There, where she was honored with a "Legend in Film" award. In her later years, she visited the Empire State Building frequently; in 1991, she was a guest of honor at the building's 60th anniversary, and in May 2004, she made one of her last public appearances at the ESB. Her final public appearance was at the premiere of the documentary film Broadway: The Golden Age, by the Legends Who Were There in June 2004.
Wray married three times – to writers John Monk Saunders and Robert Riskin and the neurosurgeon Sanford Rothenberg (January 28, 1919 – January 4, 1991). She had three children: Susan Saunders, Victoria Riskin, and Robert Riskin Jr.
After finishing The Clairvoyant and returning to the US, she became a naturalized citizen of the United States in May 1935.
Wray died in her sleep of natural causes on the night of August 8, 2004, in her apartment on Fifth Avenue in Manhattan. She is interred at the Hollywood Forever Cemetery in Hollywood, California.
Two days after her death, the lights of the Empire State Building were lowered for 15 minutes in her memory.
In 1989, Wray was awarded the Women in Film Crystal Award. Wray was honored with a Legend in Film award at the 2003 Palm Beach International Film Festival. For her contribution to the motion picture industry, Wray was honored with a star on the Hollywood Walk of Fame at 6349 Hollywood Blvd. She received a star posthumously on Canada's Walk of Fame in Toronto on June 5, 2005. A small park near Lee's Creek on Main Street in Cardston, Alberta, her birthplace, was named Fay Wray Park in her honor. The small sign at the edge of the park on Main Street has a silhouette of King Kong on it, remembering her role in King Kong. A large oil portrait of Wray by Alberta artist Neil Boyle is on display in the Empress Theatre in Fort Macleod, Alberta. In May 2006, Wray became one of the first four entertainers to be honored by Canada Post by being featured on a postage stamp. | [
{
"paragraph_id": 0,
"text": "Vina Fay Wray (September 15, 1907 – August 8, 2004) was a Canadian-American actress best known for starring as Ann Darrow in the 1933 film King Kong. Through an acting career that spanned nearly six decades, Wray attained international recognition as an actress in horror films. She has been dubbed one of the early \"scream queens\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "After appearing in minor film roles, Wray gained media attention after being selected as one of the \"WAMPAS Baby Stars\" in 1926. This led to her being contracted to Paramount Pictures as a teenager, where she made more than a dozen feature films. After leaving Paramount, she signed deals with various film companies, being cast in her first horror film roles, in addition to many other types of roles, including in The Bowery (1933) and Viva Villa! (1934), both of which starred Wallace Beery. For RKO Radio Pictures, Inc., Wray starred in the film she is most identified with, King Kong (1933). After the success of King Kong, she made numerous appearances in both film and television, retiring in 1980.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Wray was born on a ranch near Cardston, Alberta, to parents who were members of the Church of Jesus Christ of Latter-day Saints, Elvina Marguerite Jones, who was from Salt Lake City, Utah, and Joseph Heber Wray, who was from Kingston upon Hull, England. She was one of six children and was a granddaughter of LDS pioneer Daniel Webster Jones. Her ancestors came from England, Scotland, Ireland and Wales. Wray was never baptized a member of The Church of Jesus Christ of Latter-day Saints.",
"title": "Life and career"
},
{
"paragraph_id": 3,
"text": "Her family returned to the United States a few years after she was born; they moved to Salt Lake City in 1912 and moved to Lark, Utah, in 1914. In 1919, the Wray family returned to Salt Lake City, and then relocated to Hollywood, where Fay attended Hollywood High School.",
"title": "Life and career"
},
{
"paragraph_id": 4,
"text": "In 1923, Wray appeared in her first film at the age of 16, when she landed a role in a short historical film sponsored by a local newspaper. In the 1920s, Wray landed a major role in the silent film The Coast Patrol (1925), as well as uncredited bit parts at the Hal Roach Studios.",
"title": "Life and career"
},
{
"paragraph_id": 5,
"text": "In 1926, the Western Association of Motion Picture Advertisers selected Wray as one of the \"WAMPAS Baby Stars\", a group of women whom they believed to be on the threshold of movie stardom. She was at the time under contract to Universal Studios, mostly co-starring in low-budget Westerns opposite Buck Jones.",
"title": "Life and career"
},
{
"paragraph_id": 6,
"text": "The following year, Wray was signed to a contract with Paramount Pictures. In 1926, director Erich von Stroheim cast her as the main female lead in his film The Wedding March, released by Paramount two years later. While the film was noted for its high budget and production values, it was a financial failure. It also gave Wray her first lead role. Wray stayed with Paramount to make more than a dozen films and made the transition from silent films to \"talkies\".",
"title": "Life and career"
},
{
"paragraph_id": 7,
"text": "After leaving Paramount, Wray signed with other film studios. Under these deals, Wray was cast in several horror films, including Doctor X (1932) and Mystery of the Wax Museum (1933). However, her best known films were produced under her deal with RKO Radio Pictures. Her first film with RKO was The Most Dangerous Game (1932), co-starring Joel McCrea. The production was filmed at night on the same jungle sets that were being used for King Kong during the day, and with Wray and Robert Armstrong starring in both movies.",
"title": "Life and career"
},
{
"paragraph_id": 8,
"text": "The Most Dangerous Game was followed by the release of Wray's best remembered film, King Kong. According to Wray, Jean Harlow had been RKO's original choice, but because MGM put Harlow under exclusive contract during the pre-production phase of the film, she became unavailable. Wray was approached by director Merian C. Cooper to play the blonde captive of King Kong; the role of Ann Darrow for which she was paid $10,000 ($200,000 in 2022 dollars) to portray. The film was a commercial success and Wray was reportedly proud that the film saved RKO from bankruptcy.",
"title": "Life and career"
},
{
"paragraph_id": 9,
"text": "Wray continued to star in films, including The Richest Girl in the World, but by the early 1940s, her appearances became less frequent. She retired in 1942 after her second marriage but due to financial exigencies she soon resumed her acting career, and over the next three decades, Wray appeared in several films and appeared frequently on television. Wray portrayed Catherine Morrison in the 1953–54 sitcom The Pride of the Family with Natalie Wood playing her daughter. Wray appeared in Queen Bee, released in 1955.",
"title": "Life and career"
},
{
"paragraph_id": 10,
"text": "Wray appeared in three episodes of Perry Mason: \"The Case of the Prodigal Parent\" (1958); \"The Case of the Watery Witness\" (1959), as murder victim Lorna Thomas; and \"The Case of the Fatal Fetish\" (1965), as voodoo practitioner Mignon Germaine. In 1959, Wray was cast as Tula Marsh in the episode \"The Second Happiest Day\" of Playhouse 90. Other roles around this time were in the episodes \"Dip in the Pool\" (1958) and \"The Morning After\" of CBS's Alfred Hitchcock Presents. In 1960, she appeared as Clara in an episode of 77 Sunset Strip, \"Who Killed Cock Robin?\" Another 1960 role was that of Mrs. Staunton, with Gigi Perreau as her daughter, in the episode \"Flight from Terror\" of The Islanders.",
"title": "Life and career"
},
{
"paragraph_id": 11,
"text": "Wray appeared in a 1961 episode of The Real McCoys titled \"Theatre in the Barn\". In 1963, she played Mrs. Brubaker in The Eleventh Hour episode \"You're So Smart, Why Can't You Be Good?\". She ended her acting career with the 1980 made-for-television film Gideon's Trumpet.",
"title": "Life and career"
},
{
"paragraph_id": 12,
"text": "In 1988, she published her autobiography On the Other Hand. In her later years, Wray continued to make public appearances. In 1991, she was crowned Queen of the Beaux Arts Ball, presiding with King Herbert Huncke.",
"title": "Life and career"
},
{
"paragraph_id": 13,
"text": "She was approached by James Cameron to play the part of Rose Dawson Calvert for his blockbuster Titanic (1997) with Kate Winslet to play her younger self, but she turned down the role, which was subsequently portrayed by Gloria Stuart in an Oscar-nominated performance. She was a special guest at the 70th Academy Awards, where the show's host Billy Crystal introduced her as the \"Beauty who charmed the Beast.\" She was the only 1920s Hollywood actress in attendance that evening. On October 3, 1998, she appeared at the Pine Bluff Film Festival, which showed The Wedding March with live orchestral accompaniment.",
"title": "Life and career"
},
{
"paragraph_id": 14,
"text": "In January 2003, the 95-year-old Wray appeared at the 2003 Palm Beach International Film Festival to celebrate the Rick McKay documentary film Broadway: The Golden Age, by the Legends Who Were There, where she was honored with a \"Legend in Film\" award. In her later years, she visited the Empire State Building frequently; in 1991, she was a guest of honor at the building's 60th anniversary, and in May 2004, she made one of her last public appearances at the ESB. Her final public appearance was at the premiere of the documentary film Broadway: The Golden Age, by the Legends Who Were There in June 2004.",
"title": "Life and career"
},
{
"paragraph_id": 15,
"text": "Wray married three times – to writers John Monk Saunders and Robert Riskin and the neurosurgeon Sanford Rothenberg (January 28, 1919 – January 4, 1991). She had three children: Susan Saunders, Victoria Riskin, and Robert Riskin Jr.",
"title": "Personal life"
},
{
"paragraph_id": 16,
"text": "After returning to the US after finishing The Clairvoyant she became a naturalized citizen of the United States in May 1935.",
"title": "Personal life"
},
{
"paragraph_id": 17,
"text": "Wray died in her sleep of natural causes in the night of August 8, 2004, in her apartment on Fifth Avenue Manhattan. She is interred at the Hollywood Forever Cemetery in Hollywood, California.",
"title": "Personal life"
},
{
"paragraph_id": 18,
"text": "Two days after her death, the lights of the Empire State Building were lowered for 15 minutes in her memory.",
"title": "Personal life"
},
{
"paragraph_id": 19,
"text": "In 1989, Wray was awarded the Women in Film Crystal Award. Wray was honored with a Legend in Film award at the 2003 Palm Beach International Film Festival. For her contribution to the motion picture industry, Wray was honored with a star on the Hollywood Walk of Fame at 6349 Hollywood Blvd. She received a star posthumously on Canada's Walk of Fame in Toronto on June 5, 2005. A small park near Lee's Creek on Main Street in Cardston, Alberta, her birthplace, was named Fay Wray Park in her honor. The small sign at the edge of the park on Main Street has a silhouette of King Kong on it, remembering her role in King Kong. A large oil portrait of Wray by Alberta artist Neil Boyle is on display in the Empress Theatre in Fort Macleod, Alberta. In May 2006, Wray became one of the first four entertainers to be honored by Canada Post by being featured on a postage stamp.",
"title": "Honors"
}
]
| Vina Fay Wray was a Canadian-American actress best known for starring as Ann Darrow in the 1933 film King Kong. Through an acting career that spanned nearly six decades, Wray attained international recognition as an actress in horror films. She has been dubbed one of the early "scream queens". After appearing in minor film roles, Wray gained media attention after being selected as one of the "WAMPAS Baby Stars" in 1926. This led to her being contracted to Paramount Pictures as a teenager, where she made more than a dozen feature films. After leaving Paramount, she signed deals with various film companies, being cast in her first horror film roles, in addition to many other types of roles, including in The Bowery (1933) and Viva Villa! (1934), both of which starred Wallace Beery. For RKO Radio Pictures, Inc., Wray starred in the film she is most identified with, King Kong (1933). After the success of King Kong, she made numerous appearances in both film and television, retiring in 1980. | 2001-08-10T05:47:43Z | 2023-12-10T06:36:42Z | [
"Template:More citations needed section",
"Template:Cite book",
"Template:Cite news",
"Template:Short description",
"Template:Infobox person",
"Template:Cite web",
"Template:Commons category",
"Template:Amg name",
"Template:Distinguish",
"Template:Use Canadian English",
"Template:Use mdy dates",
"Template:Div col end",
"Template:The George Pal Memorial Award",
"Template:Authority control",
"Template:Inflation-year",
"Template:Div col",
"Template:Portal",
"Template:Reflist",
"Template:Wikiquote",
"Template:IMDb name",
"Template:IBDB name"
]
| https://en.wikipedia.org/wiki/Fay_Wray |
10,967 | Forgetting curve | The forgetting curve hypothesizes the decline of memory retention over time. This curve shows how information is lost over time when there is no attempt to retain it. A related concept is the strength of memory, which refers to the durability of memory traces in the brain. The stronger the memory, the longer a person is able to recall it. A typical graph of the forgetting curve purports to show that humans tend to halve their memory of newly learned knowledge in a matter of days or weeks unless they consciously review the learned material.
The forgetting curve supports one of the seven kinds of memory failures: transience, which is the process of forgetting that occurs with the passage of time.
From 1880 to 1885, Hermann Ebbinghaus ran a limited, incomplete study on himself and published his hypothesis in 1885 as Über das Gedächtnis (later translated into English as Memory: A Contribution to Experimental Psychology). Ebbinghaus studied the memorisation of nonsense syllables, such as "WID" and "ZOF" (CVCs or Consonant–Vowel–Consonant) by repeatedly testing himself after various time periods and recording the results. He plotted these results on a graph creating what is now known as the "forgetting curve". Ebbinghaus investigated the rate of forgetting, but not the effect of spaced repetition on the increase in retrievability of memories.
Ebbinghaus's publication also included an equation to approximate his forgetting curve, commonly given as b = 100k / ((log10 t)^c + k).
Here, b {\displaystyle b} represents 'Savings' expressed as a percentage, and t {\displaystyle t} represents time in minutes, counting from one minute before the end of learning. The constants c and k are 1.25 and 1.84, respectively. Savings is defined as the relative amount of time saved on the second learning trial as a result of having had the first. A savings of 100% would indicate that all items were still known from the first trial. A 75% savings would mean that relearning missed items required 25% as long as the original learning session (to learn all items). 'Savings' is thus analogous to the retention rate.
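A minimal numerical sketch of how the savings equation behaves, assuming the commonly cited form with a base-10 logarithm and the constants given above; the sample time points are illustrative:

    import math

    def savings(t_minutes, c=1.25, k=1.84):
        """Percentage of learning effort saved when relearning after t minutes (t >= 1)."""
        return 100.0 * k / (math.log10(t_minutes) ** c + k)

    # Savings is near 100% immediately after learning and falls off sharply with time.
    for t in (1, 20, 60, 24 * 60, 6 * 24 * 60, 31 * 24 * 60):
        print(f"{t:>7} min -> {savings(t):5.1f}% savings")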
In 2015, an attempt to replicate the forgetting curve with one study subject showed experimental results similar to Ebbinghaus' original data.
Ebbinghaus' experiment has significantly contributed to experimental psychology. He was the first to carry out a series of well-designed experiments on the subject of forgetting, and he was one of the first to choose artificial stimuli in experimental psychology research. Since his introduction of nonsense syllables, a large number of experiments in experimental psychology have been based on highly controlled artificial stimuli.
Hermann Ebbinghaus hypothesized that the speed of forgetting depends on a number of factors, such as the difficulty of the learned material (e.g. how meaningful it is), its representation, and physiological factors such as stress and sleep. He further hypothesized that the basal forgetting rate differs little between individuals. He concluded that the difference in performance can be explained by mnemonic representation skills.
He went on to hypothesize that basic training in mnemonic techniques can help overcome those differences in part. He asserted that the best methods for increasing the strength of memory are:
His premise was that each repetition in learning increases the optimum interval before the next repetition is needed (for near-perfect retention, initial repetitions may need to be made within days, but later they can be made after years). He discovered that information is easier to recall when it is built upon things the learner already knows, and that the forgetting curve was flattened by every repetition. It appeared that by applying frequent training in learning, the information was solidified by repeated recall.
Later research also suggested that, other than the two factors Ebbinghaus proposed, higher original learning would also produce slower forgetting. The more information was originally learned, the slower the forgetting rate would be.
Spending time each day to remember information greatly decreases the effects of the forgetting curve. Some learning consultants claim that reviewing material in the first 24 hours after learning it is the optimum time to actively recall the content and reset the forgetting curve. Evidence suggests that waiting 10–20% of the interval until the information will be needed is the optimum timing for a single review.
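A minimal sketch of the 10–20% rule of thumb described above; the 30-day example is illustrative:

    def review_window(days_until_needed):
        """Suggested (earliest, latest) day for a single review under the 10-20% rule."""
        return (0.10 * days_until_needed, 0.20 * days_until_needed)

    # Material needed for an exam in 30 days -> review it roughly between day 3 and day 6.
    print(review_window(30))  # (3.0, 6.0)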
Some memories remain free from the detrimental effects of interference and do not necessarily follow the typical forgetting curve, as various noise and outside factors influence what information is remembered. There is debate among supporters of the hypothesis about the shape of the curve for events and facts that are more significant to the subject. Some supporters, for example, suggest that memories of shocking events such as the Kennedy Assassination or 9/11 are vividly imprinted in memory (flashbulb memory). Others have compared contemporaneous written recollections with recollections recorded years later, and found considerable variations as the subject's memory incorporates after-acquired information. There is considerable research in this area as it relates to eyewitness identification testimony, and eyewitness accounts have been found to be demonstrably unreliable.
Many equations have since been proposed to approximate forgetting, perhaps the simplest being an exponential curve described by the equation R = e^(-t/S),
where R {\displaystyle R} is retrievability (a measure of how easy it is to retrieve a piece of information from memory), S {\displaystyle S} is stability of memory (determines how fast R {\displaystyle R} falls over time in the absence of training, testing or other recall), and t {\displaystyle t} is time.
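A minimal sketch of this exponential form, with illustrative stability values and time measured in days:

    import math

    def retrievability(t, stability):
        """R = exp(-t/S): how easily an item can still be recalled after time t."""
        return math.exp(-t / stability)

    # A larger stability S slows the decay of retrievability R.
    for s in (1.0, 5.0, 25.0):
        row = [round(retrievability(t, s), 2) for t in (0, 1, 7, 30)]
        print(f"S = {s:>4}: R at t = 0, 1, 7, 30 days -> {row}")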
Simple equations such as this one were not found to provide a good fit to the available data. | [
{
"paragraph_id": 0,
"text": "The forgetting curve hypothesizes the decline of memory retention in time. This curve shows how information is lost over time when there is no attempt to retain it. A related concept is the strength of memory that refers to the durability that memory traces in the brain. The stronger the memory, the longer period of time that a person is able to recall it. A typical graph of the forgetting curve purports to show that humans tend to halve their memory of newly learned knowledge in a matter of days or weeks unless they consciously review the learned material.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The forgetting curve supports one of the seven kinds of memory failures: transience, which is the process of forgetting that occurs with the passage of time.",
"title": ""
},
{
"paragraph_id": 2,
"text": "From 1880 to 1885, Hermann Ebbinghaus ran a limited, incomplete study on himself and published his hypothesis in 1885 as Über das Gedächtnis (later translated into English as Memory: A Contribution to Experimental Psychology). Ebbinghaus studied the memorisation of nonsense syllables, such as \"WID\" and \"ZOF\" (CVCs or Consonant–Vowel–Consonant) by repeatedly testing himself after various time periods and recording the results. He plotted these results on a graph creating what is now known as the \"forgetting curve\". Ebbinghaus investigated the rate of forgetting, but not the effect of spaced repetition on the increase in retrievability of memories.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Ebbinghaus's publication also included an equation to approximate his forgetting curve:",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Here, b {\\displaystyle b} represents 'Savings' expressed as a percentage, and t {\\displaystyle t} represents time in minutes, counting from one minute before end of learning. The constants c and k are 1.25 and 1.84 respectively. Savings is defined as the relative amount of time saved on the second learning trial as a result of having had the first. A savings of 100% would indicate that all items were still known from the first trial. A 75% savings would mean that relearning missed items required 25% as long as the original learning session (to learn all items). 'Savings' is thus, analogous to retention rate.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In 2015, an attempt to replicate the forgetting curve with one study subject has shown the experimental results similar to Ebbinghaus' original data.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Ebbinghaus' experiment has significantly contributed to experimental psychology. He was the first to carry out a series of well-designed experiments on the subject of forgetting, and he was one of the first to choose artificial stimuli in the research of experimental psychology. Since his introduction of nonsense syllables, a large number of experiments in experimental psychology has been based on highly controlled artificial stimuli.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Hermann Ebbinghaus hypothesized that the speed of forgetting depends on a number of factors such as the difficulty of the learned material (e.g. how meaningful it is), its representation and other physiological factors such as stress and sleep. He further hypothesized that the basal forgetting rate differs little between individuals. He concluded that the difference in performance can be explained by mnemonic representation skills.",
"title": "Increasing rate of learning"
},
{
"paragraph_id": 8,
"text": "He went on to hypothesize that basic training in mnemonic techniques can help overcome those differences in part. He asserted that the best methods for increasing the strength of memory are:",
"title": "Increasing rate of learning"
},
{
"paragraph_id": 9,
"text": "His premise was that each repetition in learning increases the optimum interval before the next repetition is needed (for near-perfect retention, initial repetitions may need to be made within days, but later they can be made after years). He discovered that information is easier to recall when it's built upon things you already know, and the forgetting curve was flattened by every repetition. It appeared that by applying frequent training in learning, the information was solidified by repeated recalling.",
"title": "Increasing rate of learning"
},
{
"paragraph_id": 10,
"text": "Later research also suggested that, other than the two factors Ebbinghaus proposed, higher original learning would also produce slower forgetting. The more information was originally learned, the slower the forgetting rate would be.",
"title": "Increasing rate of learning"
},
{
"paragraph_id": 11,
"text": "Spending time each day to remember information will greatly decrease the effects of the forgetting curve. Some learning consultants claim reviewing material in the first 24 hours after learning information is the optimum time to actively recall the content and reset the forgetting curve. Evidence suggests waiting 10–20% of the time towards when the information will be needed is the optimum time for a single review.",
"title": "Increasing rate of learning"
},
{
"paragraph_id": 12,
"text": "Some memories remain free from the detrimental effects of interference and do not necessarily follow the typical forgetting curve as various noise and outside factors influence what information would be remembered. There is debate among supporters of the hypothesis about the shape of the curve for events and facts that are more significant to the subject. Some supporters, for example, suggest that memories of shocking events such as the Kennedy Assassination or 9/11 are vividly imprinted in memory (flashbulb memory). Others have compared contemporaneous written recollections with recollections recorded years later, and found considerable variations as the subject's memory incorporates after-acquired information. There is considerable research in this area as it relates to eyewitness identification testimony, and eyewitness accounts are found demonstrably unreliable.",
"title": "Increasing rate of learning"
},
{
"paragraph_id": 13,
"text": "Many equations have since been proposed to approximate forgetting, perhaps the simplest being an exponential curve described by the equation",
"title": "Equations"
},
{
"paragraph_id": 14,
"text": "where R {\\displaystyle R} is retrievability (a measure of how easy it is to retrieve a piece of information from memory), S {\\displaystyle S} is stability of memory (determines how fast R {\\displaystyle R} falls over time in the absence of training, testing or other recall), and t {\\displaystyle t} is time.",
"title": "Equations"
},
{
"paragraph_id": 15,
"text": "Simple equations such as this one were not found to provide a good fit to the available data.",
"title": "Equations"
}
]
| The forgetting curve hypothesizes the decline of memory retention in time. This curve shows how information is lost over time when there is no attempt to retain it. A related concept is the strength of memory that refers to the durability that memory traces in the brain. The stronger the memory, the longer period of time that a person is able to recall it. A typical graph of the forgetting curve purports to show that humans tend to halve their memory of newly learned knowledge in a matter of days or weeks unless they consciously review the learned material. The forgetting curve supports one of the seven kinds of memory failures: transience, which is the process of forgetting that occurs with the passage of time. | 2001-08-10T21:38:50Z | 2023-12-30T10:56:37Z | [
"Template:Short description",
"Template:More citations needed",
"Template:Cite book",
"Template:Cite journal",
"Template:Memory",
"Template:Spaced repetition",
"Template:Lang",
"Template:Annotated link",
"Template:Reflist",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Forgetting_curve |
10,969 | Field-programmable gate array | A field-programmable gate array (FPGA) is a type of integrated circuit that can be programmed or reprogrammed after manufacturing. It consists of an array of programmable logic blocks and interconnects that can be configured to perform various digital functions. FPGAs are commonly used in applications where flexibility, speed, and parallel processing capabilities are required, such as in telecommunications, automotive, aerospace, and industrial sectors.
FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration.
The logic blocks of an FPGA can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.
FPGAs also have a role in embedded system development due to their capability to start system software development simultaneously with hardware, enable system performance simulations at a very early phase of the development, and allow various system trials and design iterations before finalizing the system architecture.
FPGAs are also commonly used during the development of ASICs to speed up the simulation process.
The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable).
Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.
Xilinx produced the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 had 64 configurable logic blocks (CLBs), with two three-input lookup tables (LUTs).
In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992.
Altera and Xilinx continued unchallenged and grew quickly from 1985 to the mid-1990s, when competitors emerged, eroding a significant portion of their market share. By 1993, Actel (now Microsemi) was serving about 18 percent of the market.
The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.
By 2013, Altera (31 percent), Actel (10 percent) and Xilinx (36 percent) together represented approximately 77 percent of the FPGA market.
Companies like Microsoft have started to use FPGAs to accelerate high-performance, computationally intensive systems (like the data centers that operate their Bing search engine), due to the performance per watt advantage FPGAs deliver. Microsoft began using FPGAs to accelerate Bing in 2014, and in 2018 began deploying FPGAs across other data center workloads for their Azure cloud computing platform.
The following timelines indicate progress in different aspects of FPGA design.
A design start is a new custom design for implementation on an FPGA.
Contemporary FPGAs have ample logic gates and RAM blocks to implement complex digital computations. FPGAs can be used to implement any logical function that an ASIC can perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost), offer advantages for many applications.
As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time. Floor planning helps resource allocation within FPGAs to meet these timing constraints.
Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillator driver circuitry, on-chip resistance-capacitance oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management as well as for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few mixed signal FPGAs have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks allowing them to operate as a system-on-a-chip (SoC). Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.
The most common FPGA architecture consists of an array of logic blocks called configurable logic blocks (CLBs), or logic array blocks (LABs), depending on vendor, I/O pads, and routing channels. Generally, all the routing channels have the same width (number of signals). Multiple I/O pads may fit into the height of one row or the width of one column in the array.
"An application circuit must be mapped into an FPGA with adequate resources. While the number of logic blocks and I/Os required is easily determined from the design, the number of routing channels needed may vary considerably even among designs with the same amount of logic. For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing channels increase the cost (and decrease the performance) of the FPGA without providing any benefit, FPGA manufacturers try to provide just enough channels so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs."
In general, a logic block consists of a few logical cells. A typical cell consists of a 4-input LUT, a full adder (FA) and a D-type flip-flop. The LUT might be split into two 3-input LUTs. In normal mode those are combined into a 4-input LUT through the first multiplexer (mux). In arithmetic mode, their outputs are fed to the adder. The selection of mode is programmed into the second mux. The output can be either synchronous or asynchronous, depending on the programming of the third mux. In practice, all or part of the adder is stored as a function in the LUTs in order to save space.
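A minimal behavioral sketch of the cell just described, with two 3-input LUTs, a full adder, a D flip-flop, and the three mode multiplexers; the structure and names are illustrative rather than any vendor's actual cell:

    from dataclasses import dataclass

    @dataclass
    class LogicCell:
        lut_lo: list                      # 8-entry truth table indexed by inputs (a, b, c)
        lut_hi: list                      # 8-entry truth table indexed by inputs (a, b, c)
        arithmetic_mode: bool = False     # second mux: normal vs. arithmetic mode
        synchronous_output: bool = False  # third mux: registered vs. combinational output
        ff_state: int = 0                 # D flip-flop contents

        def evaluate(self, a, b, c, d, carry_in=0):
            idx = a | (b << 1) | (c << 2)
            lo, hi = self.lut_lo[idx], self.lut_hi[idx]
            if self.arithmetic_mode:
                total = lo + hi + carry_in              # full adder on the two LUT outputs
                comb, carry_out = total & 1, total >> 1
            else:
                comb, carry_out = (hi if d else lo), 0  # first mux forms the 4-input LUT
            return comb, carry_out

        def clock(self, comb):
            self.ff_state = comb                        # capture on the clock edge

        def output(self, comb):
            return self.ff_state if self.synchronous_output else comb

    # Example: program the LUT pair as a 4-input XOR (parity of a, b, c, d).
    parity3 = [bin(i).count("1") & 1 for i in range(8)]
    cell = LogicCell(lut_lo=parity3, lut_hi=[1 - p for p in parity3])
    print(cell.evaluate(1, 0, 1, 1))  # (1, 0): three inputs high -> odd parity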
Modern FPGA families expand upon the above capabilities to include higher-level functionality fixed in silicon. Having these common functions embedded in the circuit reduces the area required and gives those functions increased performance compared to building them from logical primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high-speed I/O logic and embedded memories.
Higher-end FPGAs can contain high-speed multi-gigabit transceivers and hard IP cores such as processor cores, Ethernet medium access control units, PCI or PCI Express controllers, and external memory controllers. These cores exist alongside the programmable fabric, but they are built out of transistors instead of LUTs so they have ASIC-level performance and power consumption without consuming a significant amount of fabric resources, leaving more of the fabric free for the application-specific logic. The multi-gigabit transceivers also contain high-performance signal conditioning circuitry along with high-speed serializers and deserializers, components that cannot be built out of LUTs. Higher-level physical layer (PHY) functionality such as line coding may or may not be implemented alongside the serializers and deserializers in hard logic, depending on the FPGA.
An alternate approach to using hard macro processors is to make use of soft processor IP cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are examples of popular softcore processors. Many modern FPGAs are programmed at run time, which has led to the idea of reconfigurable computing or reconfigurable systems – CPUs that reconfigure themselves to suit the task at hand. Additionally, new non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip.
In 2012, the coarse-grained architectural approach was taken a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete system on a programmable chip. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 all Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric, or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as multi-channel analog-to-digital converters and digital-to-analog converters into their flash memory-based FPGA fabric.
Most of the logic inside of an FPGA is synchronous circuitry that requires a clock signal. FPGAs contain dedicated global and regional routing networks for clock and reset, typically implemented as an H tree, so they can be delivered with minimal skew. FPGAs may contain analog phase-locked loop or delay-locked loop components to synthesize new clock frequencies and manage jitter. Complex designs can use multiple clocks with different frequency and phase relationships, each forming separate clock domains. These clock signals can be generated locally by an oscillator or they can be recovered from a data stream. Care must be taken when building clock domain crossing circuitry to avoid metastability. Some FPGAs contain dual port RAM blocks that are capable of working with different clocks, aiding in the construction of FIFOs and dual-port buffers that bridge clock domains.
To shrink the size and power consumption of FPGAs, vendors such as Tabula and Xilinx have introduced 3D or stacked architectures. Following the introduction of its 28 nm 7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA product lines will be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies.
Xilinx's approach stacks several (three or four) active FPGA dies side by side on a silicon interposer – a single piece of silicon that carries passive interconnect. The multi-die construction also allows different parts of the FPGA to be created with different process technologies, as the process requirements are different between the FPGA fabric itself and the very high speed 28 Gbit/s serial transceivers. An FPGA built in this way is called a heterogeneous FPGA.
Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting other dies and technologies to the FPGA using Intel's embedded multi-die interconnect bridge (EMIB) technology.
To define the behavior of the FPGA, the user provides a design in a hardware description language (HDL) or as a schematic design. The HDL form is better suited to working with large structures because it is possible to specify high-level functional behavior rather than drawing every piece by hand. However, schematic entry can allow for easier visualization of a design and its component modules.
Using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fit to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA company's proprietary place-and-route software. The user will validate the map, place and route results via timing analysis, simulation, and other verification and validation methodologies. Once the design and validation process is complete, the binary file generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the FPGA. This file is transferred to the FPGA/CPLD via a serial interface (JTAG) or to an external memory device like an EEPROM.
The most common HDLs are VHDL and Verilog as well as extensions such as SystemVerilog. However, in an attempt to reduce the complexity of designing in HDLs, which have been compared to the equivalent of assembly languages, there are moves to raise the abstraction level through the introduction of alternative languages. National Instruments' LabVIEW graphical programming language (sometimes referred to as G) has an FPGA add-in module available to target and program FPGA hardware. Verilog was created to simplify the process, making HDL more robust and flexible, and it is currently the most popular HDL. Verilog provides a level of abstraction that hides the details of the implementation, and it has a C-like syntax, unlike VHDL.
To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called intellectual property (IP) cores, and are available from FPGA vendors and third-party IP suppliers. They are rarely free, and typically released under proprietary licenses. Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license), and other sources. Such designs are known as open-source hardware.
In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to simulate the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description where simulation is repeated to confirm the synthesis proceeded without errors. Finally the design is laid out in the FPGA at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.
More recently, OpenCL (Open Computing Language) is being used by programmers to take advantage of the performance and power efficiencies that FPGAs provide. OpenCL allows programmers to develop code in the C programming language and target FPGA functions as OpenCL kernels using OpenCL constructs. For further information, see high-level synthesis and C to HDL.
Most FPGAs rely on an SRAM-based approach to be programmed. These FPGAs are in-system programmable and re-programmable, but require external boot devices. For example, flash memory or EEPROM devices may often load contents into internal SRAM that controls routing and logic. The SRAM approach is based on CMOS.
Rarer alternatives to the SRAM approach include:
In 2016, long-time industry rivals Xilinx (now part of AMD) and Altera (now an Intel subsidiary) were the FPGA market leaders. At that time, they controlled nearly 90 percent of the market.
Both Xilinx (now AMD) and Altera (now Intel) provide proprietary electronic design automation software for Windows and Linux (ISE/Vivado and Quartus) which enables engineers to design, analyze, simulate, and synthesize (compile) their designs.
In March 2010, Tabula announced their FPGA technology, which uses time-multiplexed logic and interconnect and claims potential cost savings for high-density applications. On March 24, 2015, Tabula officially shut down.
On June 1, 2015, Intel announced it would acquire Altera for approximately $16.7 billion and completed the acquisition on December 30, 2015.
On October 27, 2020, AMD announced it would acquire Xilinx and completed the acquisition valued at about $50 billion in February 2022.
Other manufacturers include:
An FPGA can be used to solve any problem which is computable. This is trivially proven by the fact that FPGAs can be used to implement a soft microprocessor, such as the Xilinx MicroBlaze or Altera Nios II. Their advantage lies in the fact that they are significantly faster for some applications because of their parallel nature and the optimal number of gates used for certain processes.
FPGAs originally began as competitors to CPLDs to implement glue logic for printed circuit boards. As their size, capabilities, and speed increased, FPGAs took over additional functions to the point where some are now marketed as full systems on chips (SoCs). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications which had traditionally been the sole reserve of digital signal processor hardware (DSPs) began to incorporate FPGAs instead.
The evolution of FPGAs has motivated an increase in the use of these devices, whose architecture allows the development of hardware solutions optimized for complex tasks, such as 3D MRI image segmentation, 3D discrete wavelet transform, tomographic image reconstruction, or PET/MRI systems. The developed solutions can perform intensive computation tasks with parallel processing, are dynamically reprogrammable, and have a low cost, all while meeting the hard real-time requirements associated with medical imaging.
Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share part of the computation between the FPGA and a generic processor. The search engine Bing is noted for adopting FPGA acceleration for its search algorithm in 2014. As of 2018, FPGAs are seeing increased use as AI accelerators including Microsoft's so-termed "Project Catapult" and for accelerating artificial neural networks for machine learning applications.
Traditionally, FPGAs have been reserved for specific vertical applications where the volume of production is small. For these low-volume applications, the premium that companies pay in hardware cost per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC. As of 2017, new cost and performance dynamics have broadened the range of viable applications.
Where personal computer peripherals exist in niche markets or are struggling to make inroads into a mass market (sometimes despite heavy promotion), it can be more cost-effective to utilise FPGAs for small production runs (e.g. 1,000 units). Examples include exotic products such as ArVid, a VHS tape archiver (only some versions of which were FPGA-based), and Gigabyte Technology's i-RAM budget pseudo-SSD drive, which used a Xilinx FPGA. Often a custom-made chip would be cheaper if made in larger quantities, but FPGAs may be chosen to bring a product to market quickly. Again, as lower-cost FPGAs become increasingly available, it can be justifiable to include them even in larger production runs.
Other uses for FPGAs include:
FPGAs have both advantages and disadvantages as compared to ASICs or secure microprocessors, concerning hardware security. FPGAs' flexibility makes malicious modifications during fabrication a lower risk. Previously, for many FPGAs, the design bitstream was exposed while the FPGA loaded it from external memory (typically on every power-on). All major FPGA vendors now offer a spectrum of security solutions to designers, such as bitstream encryption and authentication. For example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in an external flash memory. Physical unclonable functions (PUFs) are integrated circuits that have their own unique signatures due to process variation, and can also be used to secure FPGAs while taking up very little hardware space.
FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi's ProAsic 3 or Lattice's XP2 programmable devices, do not expose the bitstream and do not need encryption. In addition, flash memory for a lookup table provides single event upset protection for space applications. Customers wanting a higher guarantee of tamper resistance can use write-once, antifuse FPGAs from vendors such as Microsemi.
With its Stratix 10 FPGAs and SoCs, Altera introduced a Secure Device Manager and physical unclonable functions to provide high levels of protection against physical attacks.
In 2012, researchers Sergei Skorobogatov and Christopher Woods demonstrated that some FPGAs can be vulnerable to hostile intent. They discovered that a critical backdoor vulnerability had been manufactured in silicon as part of the Actel/Microsemi ProAsic 3, making it vulnerable on many levels, such as reprogramming crypto and access keys, accessing the unencrypted bitstream, modifying low-level silicon features, and extracting configuration data.
In 2020, a critical vulnerability (named "Starbleed") was discovered in all Xilinx 7-series FPGAs that rendered bitstream encryption useless. There is no workaround. Xilinx did not produce a hardware revision. Ultrascale and later devices, already on the market at the time, were not affected.
Historically, FPGAs have been slower, less energy efficient and generally achieved less functionality than their fixed ASIC counterparts. A study from 2006 showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one third the speed of corresponding ASIC implementations. More recently, FPGAs such as the Xilinx Virtex-7 or the Altera Stratix 5 have come to rival corresponding ASIC and ASSP ("Application-specific standard part", such as a standalone USB interface chip) solutions by providing significantly reduced power usage, increased speed, lower materials cost, minimal implementation real-estate, and increased possibilities for re-configuration 'on-the-fly'. A design that included 6 to 10 ASICs can now be achieved using only one FPGA.
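A small illustration of the 2006 comparison figures cited above, applied to a made-up ASIC baseline; the baseline numbers are purely illustrative:

    # Relative to an ASIC implementation, the 2006 study found FPGA versions need
    # roughly 40x the area, 12x the dynamic power, and run at about a third the speed.
    asic = {"area_mm2": 2.0, "dynamic_power_w": 0.5, "clock_mhz": 900.0}

    fpga_estimate = {
        "area_mm2": asic["area_mm2"] * 40,
        "dynamic_power_w": asic["dynamic_power_w"] * 12,
        "clock_mhz": asic["clock_mhz"] / 3,
    }
    print(fpga_estimate)  # {'area_mm2': 80.0, 'dynamic_power_w': 6.0, 'clock_mhz': 300.0}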
Advantages of FPGAs include the ability to re-program when already deployed (i.e. "in the field") to fix bugs, and often include shorter time to market and lower non-recurring engineering costs. Vendors can also take a middle road via FPGA prototyping: developing their prototype hardware on FPGAs, but manufacture their final version as an ASIC so that it can no longer be modified after the design has been committed. This is often also the case with new processor designs. Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running.
The primary differences between complex programmable logic devices (CPLDs) and FPGAs are architectural. A CPLD has a comparatively restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers. As a result, CPLDs are less flexible but have the advantage of more predictable timing delays and a higher logic-to-interconnect ratio. FPGA architectures, on the other hand, are dominated by interconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation on them) but also far more complex to design for, or at least requiring more complex electronic design automation (EDA) software. In practice, the distinction between FPGAs and CPLDs is often one of size as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complex embedded functions such as adders, multipliers, memory, and serializer/deserializers. Another common distinction is that CPLDs contain embedded flash memory to store their configuration while FPGAs usually require external non-volatile memory (but not always). When a design requires simple instant-on (logic is already configured at power-up) CPLDs are generally preferred. For most other applications FPGAs are generally preferred. Sometimes both CPLDs and FPGAs are used in a single system design. In those designs, CPLDs generally perform glue logic functions and are responsible for "booting" the FPGA as well as controlling reset and boot sequence of the complete circuit board. Therefore, depending on the application it may be judicious to use both FPGAs and CPLDs in a single design. | [
{
"paragraph_id": 0,
"text": "A field-programmable gate array (FPGA) is a type of integrated circuit that can be programmed or reprogrammed after manufacturing. It consists of an array of programmable logic blocks and interconnects that can be configured to perform various digital functions. FPGAs are commonly used in applications where flexibility, speed, and parallel processing capabilities are required, such as in telecommunications, automotive, aerospace, and industrial sectors.",
"title": ""
},
{
"paragraph_id": 1,
"text": "FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The logic blocks of an FPGA can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.",
"title": ""
},
{
"paragraph_id": 3,
"text": "FPGAs also have a role in embedded system development due to their capability to start system software development simultaneously with hardware, enable system performance simulations at a very early phase of the development, and allow various system trials and design iterations before finalizing the system architecture.",
"title": ""
},
{
"paragraph_id": 4,
"text": "FPGAs are also commonly used during the development of ASICs to speed up the simulation process.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable).",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Xilinx produced the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 had 64 configurable logic blocks (CLBs), with two three-input lookup tables (LUTs).",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s when competitors sprouted up, eroding a significant portion of their market share. By 1993, Actel (now Microsemi) was serving about 18 percent of the market.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "By 2013, Altera (31 percent), Actel (10 percent) and Xilinx (36 percent) together represented approximately 77 percent of the FPGA market.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Companies like Microsoft have started to use FPGAs to accelerate high-performance, computationally intensive systems (like the data centers that operate their Bing search engine), due to the performance per watt advantage FPGAs deliver. Microsoft began using FPGAs to accelerate Bing in 2014, and in 2018 began deploying FPGAs across other data center workloads for their Azure cloud computing platform.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The following timelines indicate progress in different aspects of FPGA design.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "A design start is a new custom design for implementation on an FPGA.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Contemporary FPGAs have ample logic gates and RAM blocks to implement complex digital computations. FPGAs can be used to implement any logical function that an ASIC can perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost), offer advantages for many applications.",
"title": "Design"
},
{
"paragraph_id": 16,
"text": "As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time. Floor planning helps resource allocation within FPGAs to meet these timing constraints.",
"title": "Design"
},
{
"paragraph_id": 17,
"text": "Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillator driver circuitry, on-chip resistance-capacitance oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management as well as for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few mixed signal FPGAs have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks allowing them to operate as a system-on-a-chip (SoC). Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.",
"title": "Design"
},
{
"paragraph_id": 18,
"text": "The most common FPGA architecture consists of an array of logic blocks called configurable logic blocks (CLBs), or logic array blocks (LABs), depending on vendor, I/O pads, and routing channels. Generally, all the routing channels have the same width (number of signals). Multiple I/O pads may fit into the height of one row or the width of one column in the array.",
"title": "Design"
},
{
"paragraph_id": 19,
"text": "\"An application circuit must be mapped into an FPGA with adequate resources. While the number of logic blocks and I/Os required is easily determined from the design, the number of routing channels needed may vary considerably even among designs with the same amount of logic. For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing channels increase the cost (and decrease the performance) of the FPGA without providing any benefit, FPGA manufacturers try to provide just enough channels so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs.\"",
"title": "Design"
},
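Rent's rule, cited in the quoted passage above as one way of estimating routing demand, is usually written as a simple power law relating a sub-circuit's external terminal count to the amount of logic it contains. The form below is the generic textbook statement, with exponent values given only as a rough guide rather than figures for any particular FPGA family.

```latex
% Rent's rule (general form)
%   T : number of external terminals (signals) of a sub-circuit
%   t : average number of terminals per logic block
%   g : number of logic blocks in the sub-circuit
%   p : Rent exponent, empirically about 0.5 to 0.8 for typical logic (p = 1 is the worst case)
T = t \cdot g^{p}
```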
{
"paragraph_id": 20,
"text": "In general, a logic block consists of a few logical cells. A typical cell consists of a 4-input LUT, a full adder (FA) and a D-type flip-flop. The LUT might be split into two 3-input LUTs. In normal mode those are combined into a 4-input LUT through the first multiplexer (mux). In arithmetic mode, their outputs are fed to the adder. The selection of mode is programmed into the second mux. The output can be either synchronous or asynchronous, depending on the programming of the third mux. In practice, entire or parts of the adder are stored as functions into the LUTs in order to save space.",
"title": "Design"
},
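As a rough sketch of the logic cell just described (a 4-input LUT feeding an optional D-type flip-flop through an output-select mux), the following Verilog model may help; it is illustrative only, the module, port, and parameter names are invented, and real vendor cells add carry chains and further mode multiplexers.

```verilog
// Hypothetical model of a single FPGA logic cell (illustrative, not a vendor primitive).
module logic_cell #(
    parameter [15:0] INIT       = 16'h0000,  // truth-table contents of the 4-input LUT
    parameter        REGISTERED = 1          // 1: synchronous output, 0: combinational output
) (
    input  wire       clk,
    input  wire [3:0] in,    // the four LUT inputs
    output wire       out
);
    wire [15:0] truth_table = INIT;            // LUT storage as a 16-bit vector
    wire        lut_out     = truth_table[in]; // look up one bit using the inputs as the address
    reg         q;
    always @(posedge clk)
        q <= lut_out;                           // registered path through the D flip-flop
    assign out = REGISTERED ? q : lut_out;      // mimic the output-select mux
endmodule
```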
{
"paragraph_id": 21,
"text": "Modern FPGA families expand upon the above capabilities to include higher-level functionality fixed in silicon. Having these common functions embedded in the circuit reduces the area required and gives those functions increased performance compared to building them from logical primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high-speed I/O logic and embedded memories.",
"title": "Design"
},
{
"paragraph_id": 22,
"text": "Higher-end FPGAs can contain high-speed multi-gigabit transceivers and hard IP cores such as processor cores, Ethernet medium access control units, PCI or PCI Express controllers, and external memory controllers. These cores exist alongside the programmable fabric, but they are built out of transistors instead of LUTs so they have ASIC-level performance and power consumption without consuming a significant amount of fabric resources, leaving more of the fabric free for the application-specific logic. The multi-gigabit transceivers also contain high-performance signal conditioning circuitry along with high-speed serializers and deserializers, components that cannot be built out of LUTs. Higher-level physical layer (PHY) functionality such as line coding may or may not be implemented alongside the serializers and deserializers in hard logic, depending on the FPGA.",
"title": "Design"
},
{
"paragraph_id": 23,
"text": "An alternate approach to using hard macro processors is to make use of soft processor IP cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are examples of popular softcore processors. Many modern FPGAs are programmed at run time, which has led to the idea of reconfigurable computing or reconfigurable systems – CPUs that reconfigure themselves to suit the task at hand. Additionally, new non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip.",
"title": "Design"
},
{
"paragraph_id": 24,
"text": "In 2012 the coarse-grained architectural approach was taken a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete system on a programmable chip. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 all Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as a multi-channel analog-to-digital converters and digital-to-analog converters to their flash memory-based FPGA fabric.",
"title": "Design"
},
{
"paragraph_id": 25,
"text": "Most of the logic inside of an FPGA is synchronous circuitry that requires a clock signal. FPGAs contain dedicated global and regional routing networks for clock and reset, typically implemented as an H tree, so they can be delivered with minimal skew. FPGAs may contain analog phase-locked loop or delay-locked loop components to synthesize new clock frequencies and manage jitter. Complex designs can use multiple clocks with different frequency and phase relationships, each forming separate clock domains. These clock signals can be generated locally by an oscillator or they can be recovered from a data stream. Care must be taken when building clock domain crossing circuitry to avoid metastability. Some FPGAs contain dual port RAM blocks that are capable of working with different clocks, aiding in the construction of building FIFOs and dual port buffers that bridge clock domains.",
"title": "Design"
},
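A common precaution for the single-bit clock-domain crossings mentioned above is a two-flip-flop synchronizer, sketched below with assumed signal names; multi-bit buses generally need dual-clock FIFOs or handshaking instead, as the paragraph notes.

```verilog
// Generic two-flip-flop synchronizer for a single bit crossing into clk_dst's domain.
module sync_2ff (
    input  wire clk_dst,   // destination-domain clock
    input  wire d_async,   // signal arriving from another clock domain
    output wire q_sync     // synchronized version, safe to use in the clk_dst domain
);
    reg meta, stable;
    always @(posedge clk_dst) begin
        meta   <= d_async;  // first stage may go metastable
        stable <= meta;     // second stage gives it a full cycle to resolve
    end
    assign q_sync = stable;
endmodule
```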
{
"paragraph_id": 26,
"text": "To shrink the size and power consumption of FPGAs, vendors such as Tabula and Xilinx have introduced 3D or stacked architectures. Following the introduction of its 28 nm 7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA product lines will be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies.",
"title": "Design"
},
{
"paragraph_id": 27,
"text": "Xilinx's approach stacks several (three or four) active FPGA dies side by side on a silicon interposer – a single piece of silicon that carries passive interconnect. The multi-die construction also allows different parts of the FPGA to be created with different process technologies, as the process requirements are different between the FPGA fabric itself and the very high speed 28 Gbit/s serial transceivers. An FPGA built in this way is called a heterogeneous FPGA.",
"title": "Design"
},
{
"paragraph_id": 28,
"text": "Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting other die and technologies to the FPGA using Intel's embedded multi_die interconnect bridge (EMIB) technology.",
"title": "Design"
},
{
"paragraph_id": 29,
"text": "To define the behavior of the FPGA, the user provides a design in a hardware description language (HDL) or as a schematic design. The HDL form is more suited to work with large structures because it's possible to specify high-level functional behavior rather than drawing every piece by hand. However, schematic entry can allow for easier visualization of a design and its component modules.",
"title": "Programming"
},
{
"paragraph_id": 30,
"text": "Using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fit to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA company's proprietary place-and-route software. The user will validate the map, place and route results via timing analysis, simulation, and other verification and validation methodologies. Once the design and validation process is complete, the binary file generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the FPGA. This file is transferred to the FPGA/CPLD via a serial interface (JTAG) or to an external memory device like an EEPROM.",
"title": "Programming"
},
{
"paragraph_id": 31,
"text": "The most common HDLs are VHDL and Verilog as well as extensions such as SystemVerilog. However, in an attempt to reduce the complexity of designing in HDLs, which have been compared to the equivalent of assembly languages, there are moves to raise the abstraction level through the introduction of alternative languages. National Instruments' LabVIEW graphical programming language (sometimes referred to as G) has an FPGA add-in module available to target and program FPGA hardware. Verilog was created to simplify the process making HDL more robust and flexible. Verilog is currently the most popular. Verilog creates a level of abstraction to hide away the details of its implementation. Verilog has a C-like syntax, unlike VHDL.",
"title": "Programming"
},
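To give a concrete sense of what Verilog design entry looks like, here is a minimal, self-contained example: a free-running counter whose top bit drives an LED. The module name, reset polarity, and counter width are arbitrary choices for illustration and are not tied to any particular device or board.

```verilog
// Minimal RTL example: divide the input clock down and blink an LED.
module blinker (
    input  wire clk,    // board clock (frequency assumed, e.g. tens of MHz)
    input  wire rst_n,  // active-low asynchronous reset
    output wire led
);
    reg [25:0] count;
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            count <= 26'd0;
        else
            count <= count + 26'd1;
    end
    assign led = count[25];  // toggles at roughly clk / 2^26
endmodule
```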
{
"paragraph_id": 32,
"text": "To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called intellectual property (IP) cores, and are available from FPGA vendors and third-party IP suppliers. They are rarely free, and typically released under proprietary licenses. Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license), and other sources. Such designs are known as open-source hardware.",
"title": "Programming"
},
{
"paragraph_id": 33,
"text": "In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to simulate the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description where simulation is repeated to confirm the synthesis proceeded without errors. Finally the design is laid out in the FPGA at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.",
"title": "Programming"
},
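A minimal RTL test bench of the kind described above might look like the following sketch, written against the hypothetical blinker module shown earlier; the timescale, clock period, and run length are arbitrary simulation choices.

```verilog
// Simple self-contained test bench: generate a clock, release reset, run, and observe.
`timescale 1ns/1ps
module blinker_tb;
    reg  clk   = 1'b0;
    reg  rst_n = 1'b0;
    wire led;

    blinker dut (.clk(clk), .rst_n(rst_n), .led(led));  // device under test

    always #5 clk = ~clk;                 // 100 MHz clock (10 ns period)

    initial begin
        #20 rst_n = 1'b1;                 // release reset after 20 ns
        #1000;                            // let the design run for a while
        $display("led = %b at %0t", led, $time);
        $finish;
    end
endmodule
```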
{
"paragraph_id": 34,
"text": "More recently, OpenCL (Open Computing Language) is being used by programmers to take advantage of the performance and power efficiencies that FPGAs provide. OpenCL allows programmers to develop code in the C programming language and target FPGA functions as OpenCL kernels using OpenCL constructs. For further information, see high-level synthesis and C to HDL.",
"title": "Programming"
},
{
"paragraph_id": 35,
"text": "Most FPGAs rely on an SRAM-based approach to be programmed. These FPGAs are in-system programmable and re-programmable, but require external boot devices. For example, flash memory or EEPROM devices may often load contents into internal SRAM that controls routing and logic. The SRAM approach is based on CMOS.",
"title": "Programming"
},
{
"paragraph_id": 36,
"text": "Rarer alternatives to the SRAM approach include:",
"title": "Programming"
},
{
"paragraph_id": 37,
"text": "In 2016, long-time industry rivals Xilinx (now part of AMD) and Altera (now an Intel subsidiary) were the FPGA market leaders. At that time, they controlled nearly 90 percent of the market.",
"title": "Manufacturers"
},
{
"paragraph_id": 38,
"text": "Both Xilinx (now AMD) and Altera (now Intel) provide proprietary electronic design automation software for Windows and Linux (ISE/Vivado and Quartus) which enables engineers to design, analyze, simulate, and synthesize (compile) their designs.",
"title": "Manufacturers"
},
{
"paragraph_id": 39,
"text": "In March 2010, Tabula announced their FPGA technology that uses time-multiplexed logic and interconnect that claims potential cost savings for high-density applications. On March 24, 2015, Tabula officially shut down.",
"title": "Manufacturers"
},
{
"paragraph_id": 40,
"text": "On June 1, 2015, Intel announced it would acquire Altera for approximately $16.7 billion and completed the acquisition on December 30, 2015.",
"title": "Manufacturers"
},
{
"paragraph_id": 41,
"text": "On October 27, 2020, AMD announced it would acquire Xilinx and completed the acquisition valued at about $50 billion in February 2022.",
"title": "Manufacturers"
},
{
"paragraph_id": 42,
"text": "Other manufacturers include:",
"title": "Manufacturers"
},
{
"paragraph_id": 43,
"text": "An FPGA can be used to solve any problem which is computable. This is trivially proven by the fact that FPGAs can be used to implement a soft microprocessor, such as the Xilinx MicroBlaze or Altera Nios II. Their advantage lies in that they are significantly faster for some applications because of their parallel nature and optimality in terms of the number of gates used for certain processes.",
"title": "Applications"
},
{
"paragraph_id": 44,
"text": "FPGAs originally began as competitors to CPLDs to implement glue logic for printed circuit boards. As their size, capabilities, and speed increased, FPGAs took over additional functions to the point where some are now marketed as full systems on chips (SoCs). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications which had traditionally been the sole reserve of digital signal processor hardware (DSPs) began to incorporate FPGAs instead.",
"title": "Applications"
},
{
"paragraph_id": 45,
"text": "The evolution of FPGAs has motivated an increase in the use of these devices, whose architecture allows the development of hardware solutions optimized for complex tasks, such as 3D MRI image segmentation, 3D discrete wavelet transform, tomographic image reconstruction, or PET/MRI systems. The developed solutions can perform intensive computation tasks with parallel processing, are dynamically reprogrammable, and have a low cost, all while meeting the hard real-time requirements associated with medical imaging.",
"title": "Applications"
},
{
"paragraph_id": 46,
"text": "Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share part of the computation between the FPGA and a generic processor. The search engine Bing is noted for adopting FPGA acceleration for its search algorithm in 2014. As of 2018, FPGAs are seeing increased use as AI accelerators including Microsoft's so-termed \"Project Catapult\" and for accelerating artificial neural networks for machine learning applications.",
"title": "Applications"
},
{
"paragraph_id": 47,
"text": "Traditionally, FPGAs have been reserved for specific vertical applications where the volume of production is small. For these low-volume applications, the premium that companies pay in hardware cost per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC. As of 2017, new cost and performance dynamics have broadened the range of viable applications.",
"title": "Applications"
},
{
"paragraph_id": 48,
"text": "Where personal computer peripherals exist in niche markets or are struggling to make inroads into a mass market (sometimes despite heavy promotion), it can be more cost-effective to utilise FPGAs for small production runs (e.g. 1,000 units). Examples include exotic products such as e.g. ArVid, a VHS tape archiver (only some versions of which were FPGA-based) and Gigabyte Technology's i-RAM budget pseudo-SSD drive, which used a Xilinx FPGA. Often a custom-made chip would be cheaper if made in larger quantities, but FPGAs may be chosen to quickly bring a product to market. Again, to the extent the availability of lower-cost FPGAs is increasing, it can become justifiable to include them even in larger production runs.",
"title": "Applications"
},
{
"paragraph_id": 49,
"text": "Other uses for FPGAs include:",
"title": "Applications"
},
{
"paragraph_id": 50,
"text": "FPGAs have both advantages and disadvantages as compared to ASICs or secure microprocessors, concerning hardware security. FPGAs' flexibility makes malicious modifications during fabrication a lower risk. Previously, for many FPGAs, the design bitstream was exposed while the FPGA loads it from external memory (typically on every power-on). All major FPGA vendors now offer a spectrum of security solutions to designers such as bitstream encryption and authentication. For example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in an external flash memory. Physical unclonable functions (PUFs) are integrated circuits that have their own unique signatures, due to processing, and can also be used to secure FPGAs while taking up very little hardware space.",
"title": "Security"
},
{
"paragraph_id": 51,
"text": "FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi's ProAsic 3 or Lattice's XP2 programmable devices, do not expose the bitstream and do not need encryption. In addition, flash memory for a lookup table provides single event upset protection for space applications. Customers wanting a higher guarantee of tamper resistance can use write-once, antifuse FPGAs from vendors such as Microsemi.",
"title": "Security"
},
{
"paragraph_id": 52,
"text": "With its Stratix 10 FPGAs and SoCs, Altera introduced a Secure Device Manager and physical unclonable functions to provide high levels of protection against physical attacks.",
"title": "Security"
},
{
"paragraph_id": 53,
"text": "In 2012 researchers Sergei Skorobogatov and Christopher Woods demonstrated that some FPGAs can be vulnerable to hostile intent. They discovered a critical backdoor vulnerability had been manufactured in silicon as part of the Actel/Microsemi ProAsic 3 making it vulnerable on many levels such as reprogramming crypto and access keys, accessing unencrypted bitstream, modifying low-level silicon features, and extracting configuration data.",
"title": "Security"
},
{
"paragraph_id": 54,
"text": "In 2020 a critical vulnerability (named \"Starbleed\") was discovered in all Xilinx 7series FPGAs that rendered bitstream encryption useless. There is no workaround. Xilinx did not produce a hardware revision. Ultrascale and later devices, already on the market at the time, were not affected.",
"title": "Security"
},
{
"paragraph_id": 55,
"text": "Historically, FPGAs have been slower, less energy efficient and generally achieved less functionality than their fixed ASIC counterparts. A study from 2006 showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one third the speed of corresponding ASIC implementations. More recently, FPGAs such as the Xilinx Virtex-7 or the Altera Stratix 5 have come to rival corresponding ASIC and ASSP (\"Application-specific standard part\", such as a standalone USB interface chip) solutions by providing significantly reduced power usage, increased speed, lower materials cost, minimal implementation real-estate, and increased possibilities for re-configuration 'on-the-fly'. A design that included 6 to 10 ASICs can now be achieved using only one FPGA.",
"title": "Similar technologies"
},
{
"paragraph_id": 56,
"text": "Advantages of FPGAs include the ability to re-program when already deployed (i.e. \"in the field\") to fix bugs, and often include shorter time to market and lower non-recurring engineering costs. Vendors can also take a middle road via FPGA prototyping: developing their prototype hardware on FPGAs, but manufacture their final version as an ASIC so that it can no longer be modified after the design has been committed. This is often also the case with new processor designs. Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running.",
"title": "Similar technologies"
},
{
"paragraph_id": 57,
"text": "The primary differences between complex programmable logic devices (CPLDs) and FPGAs are architectural. A CPLD has a comparatively restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers. As a result, CPLDs are less flexible but have the advantage of more predictable timing delays and a higher logic-to-interconnect ratio. FPGA architectures, on the other hand, are dominated by interconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation on them) but also far more complex to design for, or at least requiring more complex electronic design automation (EDA) software. In practice, the distinction between FPGAs and CPLDs is often one of size as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complex embedded functions such as adders, multipliers, memory, and serializer/deserializers. Another common distinction is that CPLDs contain embedded flash memory to store their configuration while FPGAs usually require external non-volatile memory (but not always). When a design requires simple instant-on (logic is already configured at power-up) CPLDs are generally preferred. For most other applications FPGAs are generally preferred. Sometimes both CPLDs and FPGAs are used in a single system design. In those designs, CPLDs generally perform glue logic functions and are responsible for \"booting\" the FPGA as well as controlling reset and boot sequence of the complete circuit board. Therefore, depending on the application it may be judicious to use both FPGAs and CPLDs in a single design.",
"title": "Similar technologies"
}
]
| A field-programmable gate array (FPGA) is a type of integrated circuit that can be programmed or reprogrammed after manufacturing. It consists of an array of programmable logic blocks and interconnects that can be configured to perform various digital functions. FPGAs are commonly used in applications where flexibility, speed, and parallel processing capabilities are required, such as in telecommunications, automotive, aerospace, and industrial sectors. FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration. The logic blocks of an FPGA can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software. FPGAs also have a role in embedded system development due to their capability to start system software development simultaneously with hardware, enable system performance simulations at a very early phase of the development, and allow various system trials and design iterations before finalizing the system architecture. FPGAs are also commonly used during the development of ASICs to speed up the simulation process. | 2001-08-10T21:44:52Z | 2023-12-29T07:57:24Z | [
"Template:Main",
"Template:When",
"Template:Citation needed span",
"Template:YouTube",
"Template:As of",
"Template:Authority control",
"Template:Portal",
"Template:Use American English",
"Template:Failed verification",
"Template:See",
"Template:Clarify",
"Template:Cite magazine",
"Template:Cite conference",
"Template:Redirect-distinguish",
"Template:Hardware acceleration",
"Template:Short description",
"Template:Reflist",
"Template:Cite news",
"Template:Citation needed",
"Template:By whom",
"Template:See also",
"Template:Cite book",
"Template:Cite journal",
"Template:Electronic components",
"Template:Digital systems",
"Template:Programmable Logic",
"Template:Cn",
"Template:Circa",
"Template:Cite web",
"Template:Webarchive",
"Template:Dead link",
"Template:Semiconductor packages",
"Template:Snd"
]
| https://en.wikipedia.org/wiki/Field-programmable_gate_array |
10,971 | Free-running sleep | Free-running sleep is a rare sleep pattern whereby the sleep schedule of a person shifts later every day. It occurs as the sleep disorder non-24-hour sleep–wake disorder or artificially as part of experiments used in the study of circadian and other rhythms in biology. Study subjects are shielded from all time cues, often by a constant light protocol, by a constant dark protocol or by the use of light/dark conditions to which the organism cannot entrain such as the ultrashort protocol of one hour dark and two hours light. Also, limited amounts of food may be made available at short intervals so as to avoid entrainment to mealtimes. Subjects are thus forced to live by their internal circadian "clocks".
The individual's or animal's circadian phase can be known only by the monitoring of some kind of output of the circadian system, the internal "body clock". The researcher can precisely determine, for example, the daily cycles of gene activity, body temperature, blood pressure, hormone secretion and/or sleep and activity/alertness. Alertness in humans can be determined by many kinds of verbal and non-verbal tests, whereas alertness in animals can usually be assessed by observing physical activity (for example, wheel-running in rodents).
When animals or people free-run, experiments can be done to see what sort of signals, known as zeitgebers, are effective in entrainment. Also, much work has been done to see how long or short a circadian cycle can be entrained to various organisms. For example, some animals can be entrained to a 22-hour day, but they can not be entrained to a 20-hour day. In recent studies funded by the U.S. space industry, it has been shown that most humans can be entrained to a 23.5-hour day and to a 24.65-hour day.
The effect of unintended time cues is called masking and can totally confound experimental results. Examples of masking are morning rush traffic audible to the subjects, or researchers or maintenance staff visiting subjects on a regular schedule.
Non-24-hour sleep–wake disorder, also referred to as free-running disorder (FRD) or Non-24, is one of the circadian rhythm sleep disorders in humans. It affects more than half of people who are totally blind and a smaller number of sighted individuals.
Among blind people, the cause is the inability to register, and therefore to entrain to, light cues. The many blind people who do entrain to the 24-hour light/dark cycle have eyes with functioning retinas including operative non-visual light-sensitive cells, ipRGCs. These ganglion cells, which contain melanopsin, convey their signals to the "circadian clock" via the retinohypothalamic tract (branching off from the optic nerve), linking the retina to the pineal gland.
Among sighted individuals, Non-24 usually first appears in the teens or early twenties. As with delayed sleep phase disorder (DSPS or DSPD), in the absence of neurological damage due to trauma or stroke, cases almost never appear after the age of 30. Non-24 affects more sighted males than sighted females. A quarter of sighted individuals with Non-24 also have an associated psychiatric condition, and a quarter of them have previously shown symptoms of DSPS. | [
{
"paragraph_id": 0,
"text": "Free-running sleep is a rare sleep pattern whereby the sleep schedule of a person shifts later every day. It occurs as the sleep disorder non-24-hour sleep–wake disorder or artificially as part of experiments used in the study of circadian and other rhythms in biology. Study subjects are shielded from all time cues, often by a constant light protocol, by a constant dark protocol or by the use of light/dark conditions to which the organism cannot entrain such as the ultrashort protocol of one hour dark and two hours light. Also, limited amounts of food may be made available at short intervals so as to avoid entrainment to mealtimes. Subjects are thus forced to live by their internal circadian \"clocks\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "The individual's or animal's circadian phase can be known only by the monitoring of some kind of output of the circadian system, the internal \"body clock\". The researcher can precisely determine, for example, the daily cycles of gene activity, body temperature, blood pressure, hormone secretion and/or sleep and activity/alertness. Alertness in humans can be determined by many kinds of verbal and non-verbal tests, whereas alertness in animals can usually be assessed by observing physical activity (for example, of wheel-running in rodents).",
"title": "Background"
},
{
"paragraph_id": 2,
"text": "When animals or people free-run, experiments can be done to see what sort of signals, known as zeitgebers, are effective in entrainment. Also, much work has been done to see how long or short a circadian cycle can be entrained to various organisms. For example, some animals can be entrained to a 22-hour day, but they can not be entrained to a 20-hour day. In recent studies funded by the U.S. space industry, it has been shown that most humans can be entrained to a 23.5-hour day and to a 24.65-hour day.",
"title": "Background"
},
{
"paragraph_id": 3,
"text": "The effect of unintended time cues is called masking and can totally confound experimental results. Examples of masking are morning rush traffic audible to the subjects, or researchers or maintenance staff visiting subjects on a regular schedule.",
"title": "Background"
},
{
"paragraph_id": 4,
"text": "Non-24-hour sleep–wake disorder, also referred to as free-running disorder (FRD) or Non-24, is one of the circadian rhythm sleep disorders in humans. It affects more than half of people who are totally blind and a smaller number of sighted individuals.",
"title": "In humans"
},
{
"paragraph_id": 5,
"text": "Among blind people, the cause is the inability to register, and therefore to entrain to, light cues. The many blind people who do entrain to the 24-hour light/dark cycle have eyes with functioning retinas including operative non-visual light-sensitive cells, ipRGCs. These ganglion cells, which contain melanopsin, convey their signals to the \"circadian clock\" via the retinohypothalamic tract (branching off from the optic nerve), linking the retina to the pineal gland.",
"title": "In humans"
},
{
"paragraph_id": 6,
"text": "Among sighted individuals, Non-24 usually first appears in the teens or early twenties. As with delayed sleep phase disorder (DSPS or DSPD), in the absence of neurological damage due to trauma or stroke, cases almost never appear after the age of 30. Non-24 affects more sighted males than sighted females. A quarter of sighted individuals with Non-24 also have an associated psychiatric condition, and a quarter of them have previously shown symptoms of DSPS.",
"title": "In humans"
}
]
| Free-running sleep is a rare sleep pattern whereby the sleep schedule of a person shifts later every day. It occurs as the sleep disorder non-24-hour sleep–wake disorder or artificially as part of experiments used in the study of circadian and other rhythms in biology. Study subjects are shielded from all time cues, often by a constant light protocol, by a constant dark protocol or by the use of light/dark conditions to which the organism cannot entrain such as the ultrashort protocol of one hour dark and two hours light. Also, limited amounts of food may be made available at short intervals so as to avoid entrainment to mealtimes. Subjects are thus forced to live by their internal circadian "clocks". | 2001-08-10T23:14:52Z | 2023-11-01T04:48:29Z | [
"Template:Reflist",
"Template:Bare URL PDF",
"Template:Cite journal",
"Template:Cite book",
"Template:Cite web",
"Template:Light Ethology",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Free-running_sleep |
10,972 | Fenrir | Fenrir (Old Norse 'fen-dweller') or Fenrisúlfr (Old Norse "Fenrir's wolf", often translated "Fenris-wolf"), also referred to as Hróðvitnir (Old Norse "fame-wolf") and Vánagandr (Old Norse 'monster of the [River] Ván'), is a wolf in Norse mythology. Fenrir, along with Hel and the World Serpent, is a child of Loki and giantess Angrboða. He is attested in the Poetic Edda, compiled in the 13th century from earlier traditional sources, and the Prose Edda and Heimskringla, written in the 13th century by Snorri Sturluson. In both the Poetic Edda and Prose Edda, Fenrir is the father of the wolves Sköll and Hati Hróðvitnisson, is a son of Loki and is foretold to kill the god Odin during the events of Ragnarök, but will in turn be killed by Odin's son Víðarr.
In the Prose Edda, additional information is given about Fenrir, including that, due to the gods' knowledge of prophecies foretelling great trouble from Fenrir and his rapid growth, the gods bound him and as a result Fenrir bit off the right hand of the god Týr. Depictions of Fenrir have been identified on various objects and scholarly theories have been proposed regarding Fenrir's relation to other canine beings in Norse mythology. Fenrir has been the subject of artistic depictions and he appears in literature.
Fenrir is mentioned in three stanzas of the poem Völuspá and in two stanzas of the poem Vafþrúðnismál. In stanza 40 of the poem Völuspá, a völva divulges to Odin that, in the east, an old woman sat in the forest Járnviðr "and bred there the broods of Fenrir. There will come from them all one of that number to be a moon-snatcher in troll's skin." Further into the poem the völva foretells that Odin will be consumed by Fenrir at Ragnarök:
Then is fulfilled Hlín's second sorrow, when Óðinn goes to fight with the wolf, and Beli's slayer, bright, against Surtr. Then shall Frigg's sweet friend fall.
In the stanza that follows the völva describes that Odin's "tall child of Triumph's Sire" (Odin's son Víðarr) will then come to "strike at the beast of slaughter" and with his hands he will drive a sword into the heart of "Hveðrungr's son", avenging the death of his father.
In the first of two stanzas mentioning Fenrir in Vafþrúðnismál Odin poses a question to the wise jötunn Vafþrúðnir:
Much I have travelled, much have I tried out, much have I tested the Powers; from where will a sun come into the smooth heaven when Fenrir has assailed this one?
In the stanza that follows Vafþrúðnir responds that Sól (here referred to as Álfröðull) will bear a daughter before Fenrir attacks her, and that this daughter shall continue the paths of her deceased mother through the heavens.
In the Prose Edda, Fenrir is mentioned in three books: Gylfaginning, Skáldskaparmál and Háttatal.
In chapter 13 of the Prose Edda book Gylfaginning, Fenrir is first mentioned in a stanza quoted from Völuspá. Fenrir is first mentioned in prose in chapter 25, where the enthroned figure of High tells Gangleri (described as King Gylfi in disguise) about the god Týr. High says that one example of Týr's bravery is that when the Æsir were luring Fenrir (referred to here as Fenrisúlfr) to place the fetter Gleipnir on the wolf, Týr placed his hand within the wolf's mouth as a pledge. This was done at Fenrir's own request because he did not trust that the Æsir would let him go. As a result, when the Æsir refused to release him, he bit off Týr's hand at a location "now called the wolf-joint" (the wrist), causing Týr to be one-handed and "not considered to be a promoter of settlements between people."
In chapter 34, High describes Loki, and says that Loki had three children with a woman named Angrboða located in the land of Jötunheimr; Fenrisúlfr, the serpent Jörmungandr, and the female being Hel. High continues that, once the gods found that these three children were being brought up in the land of Jötunheimr, and when the gods "traced prophecies that from these siblings great mischief and disaster would arise for them" the gods expected a lot of trouble from the three children, partially due to the nature of the mother of the children, yet worse so due to the nature of their father.
High says that Odin sent the gods to gather the children and bring them to him. Upon their arrival, Odin threw Jörmungandr into "that deep sea that lies round all lands", and then threw Hel into Niflheim, and bestowed upon her authority over nine worlds. However, the Æsir brought up the wolf "at home", and only Týr had the courage to approach Fenrir, and give Fenrir food. The gods noticed that Fenrir was growing rapidly every day, and since all prophecies foretold that Fenrir was destined to cause them harm, the gods formed a plan. The gods prepared three fetters: The first, greatly strong, was called Leyding. They brought Leyding to Fenrir and suggested that the wolf try his strength with it. Fenrir judged that it was not beyond his strength, and so let the gods do what they wanted with it. At Fenrir's first kick the bind snapped, and Fenrir loosened himself from Leyding. The gods made a second fetter, twice as strong, and named it Dromi. The gods asked Fenrir to try the new fetter, and that should he break this feat of engineering, Fenrir would achieve great fame for his strength. Fenrir considered that, while the fetter was very strong, his strength had grown since he broke Leyding; and also that he would have to take some risks if he were to become famous. Fenrir allowed them to place the fetter.
When the Æsir exclaimed that they were ready, Fenrir shook himself, knocked the fetter to the ground, strained hard, and kicking with his feet, snapped the fetter – breaking it into pieces that flew far into the distance. High says that, as a result, to "loose from Leyding" or to "strike out of Dromi" have become sayings for when something is achieved with great effort. The Æsir started to fear that they would not be able to bind Fenrir, and so Odin sent Freyr's messenger Skírnir down into the land of Svartálfaheimr to "some dwarfs" and had them make a fetter called Gleipnir. The dwarves constructed Gleipnir from six mythical ingredients. After an exchange between Gangleri and High, High continues that the fetter was smooth and soft as a silken ribbon, yet strong and firm. The messenger brought the ribbon to the Æsir, and they thanked him heartily for completing the task.
The Æsir went out on to the lake Amsvartnir, sent for Fenrir to accompany them, and continued to the island Lyngvi (Old Norse "a place overgrown with heather"). The gods showed Fenrir the silken fetter Gleipnir, told him to tear it, stated that it was much stronger than it appeared, passed it among themselves, used their hands to pull it, and yet it did not tear. However, they said that Fenrir would be able to tear it, to which Fenrir replied:
It looks to me that with this ribbon as though I will gain no fame from it if I do tear apart such a slender band, but if it is made with art and trickery, then even if it does look thin, this band is not going on my legs.
The Æsir said Fenrir would quickly tear apart a thin silken strip, noting that Fenrir earlier broke great iron binds, and added that if Fenrir wasn't able to break slender Gleipnir then Fenrir is nothing for the gods to fear, and as a result would be freed. Fenrir responded:
If you bind me so that I am unable to release myself, then you will be standing by in such a way that I should have to wait a long time before I got any help from you. I am reluctant to have this band put on me. But rather than that you question my courage, let someone put his hand in my mouth as a pledge that this is done in good faith.
With this statement, all of the Æsir look to one another, finding themselves in a dilemma. Everyone refused to place their hand in Fenrir's mouth until Týr put out his right hand and placed it into the wolf's jaws. When Fenrir kicked, Gleipnir caught tightly, and the more Fenrir struggled, the stronger the band grew. At this, everyone laughed, except Týr, who there lost his right hand. When the gods knew that Fenrir was fully bound, they took a cord called Gelgja (Old Norse "fetter") hanging from Gleipnir, inserted the cord through a large stone slab called Gjöll (Old Norse "scream"), and the gods fastened the stone slab deep into the ground. After, the gods took a great rock called Thviti (Old Norse "hitter, batterer"), and thrust it even further into the ground as an anchoring peg. Fenrir reacted violently; he opened his jaws very wide, and tried to bite the gods. Then the gods thrust a sword into his mouth. Its hilt touched the lower jaw and its point the upper one; by means of it the jaws of the wolf were spread apart and the wolf gagged. Fenrir "howled horribly", saliva ran from his mouth, and this saliva formed the river Ván (Old Norse "hope"). There Fenrir will lie until Ragnarök. Gangleri comments that Loki created a "pretty terrible family" though important, and asks why the Æsir did not just kill Fenrir there since they expected great malice from him. High replies that "so greatly did the gods respect their holy places and places of sanctuary that they did not want to defile them with the wolf's blood even though the prophecies say that he will be the death of Odin."
In chapter 38, High says that there are many men in Valhalla, and many more who will arrive, yet they will "seem too few when the wolf comes". In chapter 51, High foretells that as part of the events of Ragnarök, after Fenrir's son Sköll has swallowed the sun and his other son Hati Hróðvitnisson has swallowed the moon, the stars will disappear from the sky. The earth will shake violently, trees will be uprooted, mountains will fall, and all binds will snap – Fenrisúlfr will be free. Fenrisúlfr will go forth with his mouth opened wide, his upper jaw touching the sky and his lower jaw the earth, and flames will burn from his eyes and nostrils. Later, Fenrisúlfr will arrive at the field Vígríðr with his sibling Jörmungandr. With the forces assembled there, an immense battle will take place. During this, Odin will ride to fight Fenrisúlfr. During the battle, Fenrisúlfr will eventually swallow Odin, killing him, and Odin's son Víðarr will move forward and kick one foot into the lower jaw of the wolf. This foot will bear a legendary shoe "for which the material has been collected throughout all time". With one hand, Víðarr will take hold of the wolf's upper jaw and tear apart his mouth, killing Fenrisúlfr. High follows this prose description by citing various quotes from Völuspá in support, some of which mention Fenrir.
In the Epilogue section of the Prose Edda book Skáldskaparmál, a euhemerized monologue equates Fenrisúlfr to Pyrrhus, attempting to rationalize that "it killed Odin, and Pyrrhus could be said to be a wolf according to their religion, for he paid no respect to places of sanctuary when he killed the king in the temple in front of Thor's altar." In chapter 2, "wolf's enemy" is cited as a kenning for Odin as used by the 10th century skald Egill Skallagrímsson. In chapter 9, "feeder of the wolf" is given as a kenning for Týr and, in chapter 11, "slayer of Fenrisúlfr" is presented as a kenning for Víðarr. In chapter 50, a section of Ragnarsdrápa by the 9th century skald Bragi Boddason is quoted that refers to Hel, the being, as "the monstrous wolf's sister". In chapter 75, names for wargs and wolves are listed, including both "Hróðvitnir" and "Fenrir". "Fenrir" appears twice in verse as a common noun for a "wolf" or "warg" in chapter 58 of Skáldskaparmál, and in chapter 56 of the book Háttatal. Additionally, the name "Fenrir" can be found among a list of jötnar in chapter 75 of Skáldskaparmál.
At the end of the Heimskringla saga Hákonar saga góða, the poem Hákonarmál by the 10th century skald Eyvindr skáldaspillir is presented. The poem is about the fall of King Haakon I of Norway; although he is Christian, he is taken by two valkyries to Valhalla, and is there received as one of the Einherjar. Towards the end of the poem, a stanza relates sooner will the bonds of Fenrir snap than as good a king as Haakon shall stand in his place:
Unfettered will fare the Fenris Wolf and ravaged the realm of men, ere that cometh a kingly prince as good, to stand in his stead.
Thorwald's Cross, a partially surviving runestone erected at Kirk Andreas on the Isle of Man, depicts a bearded human holding a spear downward at a wolf, his right foot in its mouth, while a large bird sits at his shoulder. Rundata dates it to 940, while Pluskowski dates it to the 11th century. This depiction has been interpreted as Odin, with a raven or eagle at his shoulder, being consumed by Fenrir at Ragnarök. On the reverse of the stone is another image parallel to it that has been described as Christ triumphing over Satan. These combined elements have led to the cross as being described as "syncretic art"; a mixture of pagan and Christian beliefs.
The mid-11th century Gosforth Cross, located in Cumbria, England, has been described as depicting a combination of scenes from the Christian Judgement Day and the pagan Ragnarök. The cross features various figures depicted in Borre style, including a man with a spear facing a monstrous head, one of whose feet is thrust into the beast's forked tongue and on its lower jaw, while a hand is placed against its upper jaw, a scene interpreted as Víðarr fighting Fenrir. This depiction has been theorized as a metaphor for Christ's defeat of Satan.
The 11th century Ledberg stone in Sweden, similarly to Thorwald's Cross, features a figure with his foot at the mouth of a four-legged beast, and this may also be a depiction of Odin being devoured by Fenrir at Ragnarök. Below the beast and the man is a depiction of a legless, helmeted man, with his arms in a prostrate position. The Younger Futhark inscription on the stone bears a commonly seen memorial dedication, but is followed by an encoded runic sequence that has been described as "mysterious", and "an interesting magic formula which is known from all over the ancient Norse world".
If the images on the Tullstorp Runestone are correctly identified as depicting Ragnarök, then Fenrir is shown above the ship Naglfar.
Meyer Schapiro theorizes a connection between the "Hell Mouth" that appears in medieval Christian iconography and Fenrir. According to Schapiro, "the Anglo-Saxon taste for the Hell Mouth was perhaps influenced by the northern pagan myth of the Crack of Doom and the battle with the wolf, who devoured Odin."
Scholars propose that a variety of objects from the archaeological record depict Týr. For example, a Migration Period gold bracteate from Trollhättan, Sweden, features a person receiving a bite on the hand from a beast, which may depict Týr and Fenrir. A Viking Age hogback in Sockburn, County Durham, North East England may depict Týr and Fenrir.
In reference to Fenrir's presentation in the Prose Edda, Andy Orchard theorizes that "the hound (or wolf)" Garmr, Sköll, and Hati Hróðvitnisson were originally simply all Fenrir, stating that "Snorri, characteristically, is careful to make distinctions, naming the wolves who devour the sun and moon as Sköll and Hati, and describing an encounter between Garm and Týr (who, one would have thought, might like to get his hand on Fenrir) at Ragnarök."
John Lindow says that it is unclear why the gods decide to raise Fenrir as opposed to his siblings Hel and Jörmungandr in Gylfaginning chapter 35, theorizing that it may be "because Odin had a connection with wolves? Because Loki was Odin's blood brother?" Referring to the same chapter, Lindow comments that neither of the phrases that Fenrir's binding result in have left any other traces. Lindow compares Fenrir's role to his father Loki and Fenrir's sibling Jörmungandr, in that they all spend time with the gods, are bound or cast out by them, return "at the end of the current mythic order to destroy them, only to be destroyed himself as a younger generation of gods, one of them his slayer, survives into the new world order." He also points to Fenrir's binding as part of a recurring theme of the bound monster, where an enemy of the gods is bound, but destined to break free at Ragnarok.
Indo-European parallels have been proposed between myths of Fenrir and the Persian demon Ahriman. The Yashts refer to a story where Taxma Urupi rode Angra Mainyu as a horse for thirty years. An elaboration of this allusion is found only in a late Parsi commentary. The ruler Taxmoruw (Taxma Urupi) managed to lasso Ahriman (Angra Mainyu) and keep him tied up while taking him for a ride three times a day. After thirty years, Ahriman outwitted and swallowed Taxmoruw. In a sexual encounter with Ahriman, Jamshid, Taxmoruw's brother, inserted his hand into Ahriman's anus and pulled out his brother's corpse. His hand withered from contact with the diabolic innards. The suggested parallels with Fenrir myths are the binding of an evil being by a ruler figure and the subsequent swallowing of the ruler figure by the evil being (Odin and Fenrir), trickery involving the thrusting of a hand into a monster's orifice and the affliction of the inserted limb (Týr and Fenrir).
Ethologist Valerius Geist wrote that Fenrir's maiming and ultimate killing of Odin, who had previously nurtured him, was likely based on true experiences of wolf-behaviour, seeing as wolves are genetically encoded to rise up in the pack hierarchy and have, on occasion, been recorded to rebel against, and kill, their parents. Geist states that "apparently, even the ancients knew that wolves may turn on their parents and siblings and kill them."
Fenrir appears in modern literature in the poem "Om Fenrisulven og Tyr" (1819) by Adam Gottlob Oehlenschläger (collected in Nordens Guder), the novel Der Fenriswolf by K. H. Strobl, and Til kamp mod dødbideriet (1974) by E. K. Reich and E. Larsen.
Fenrir has been depicted in the artwork Odin and Fenris (1909) and The Binding of Fenris (around 1900) by Dorothy Hardy, Odin und Fenriswolf and Fesselung des Fenriswolfe (1901) by Emil Doepler, and is the subject of the metal sculpture Fenrir by Arne Vinje Gunnerud located on the island of Askøy, Norway.
Fenrir is a highly durable mech option in Pixonic's game War Robots (released as "Walking War Robots" in 2014).
Fenrir appears as an antagonist in the 2020 videogame Assassin's Creed Valhalla, with a story adapted from the events found in Prose Edda.
Fenrir appears in the 2022 game God of War Ragnarök. | [
{
"paragraph_id": 0,
"text": "Fenrir (Old Norse 'fen-dweller') or Fenrisúlfr (Old Norse \"Fenrir's wolf\", often translated \"Fenris-wolf\"), also referred to as Hróðvitnir (Old Norse \"fame-wolf\") and Vánagandr (Old Norse 'monster of the [River] Ván'), is a wolf in Norse mythology. Fenrir, along with Hel and the World Serpent, is a child of Loki and giantess Angrboða. He is attested in the Poetic Edda, compiled in the 13th century from earlier traditional sources, and the Prose Edda and Heimskringla, written in the 13th century by Snorri Sturluson. In both the Poetic Edda and Prose Edda, Fenrir is the father of the wolves Sköll and Hati Hróðvitnisson, is a son of Loki and is foretold to kill the god Odin during the events of Ragnarök, but will in turn be killed by Odin's son Víðarr.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In the Prose Edda, additional information is given about Fenrir, including that, due to the gods' knowledge of prophecies foretelling great trouble from Fenrir and his rapid growth, the gods bound him and as a result Fenrir bit off the right hand of the god Týr. Depictions of Fenrir have been identified on various objects and scholarly theories have been proposed regarding Fenrir's relation to other canine beings in Norse mythology. Fenrir has been the subject of artistic depictions and he appears in literature.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Fenrir is mentioned in three stanzas of the poem Völuspá and in two stanzas of the poem Vafþrúðnismál. In stanza 40 of the poem Völuspá, a völva divulges to Odin that, in the east, an old woman sat in the forest Járnviðr \"and bred there the broods of Fenrir. There will come from them all one of that number to be a moon-snatcher in troll's skin.\" Further into the poem the völva foretells that Odin will be consumed by Fenrir at Ragnarök:",
"title": "Attestations"
},
{
"paragraph_id": 3,
"text": "Then is fulfilled Hlín's second sorrow, when Óðinn goes to fight with the wolf, and Beli's slayer, bright, against Surtr. Then shall Frigg's sweet friend fall.",
"title": "Attestations"
},
{
"paragraph_id": 4,
"text": "In the stanza that follows the völva describes that Odin's \"tall child of Triumph's Sire\" (Odin's son Víðarr) will then come to \"strike at the beast of slaughter\" and with his hands he will drive a sword into the heart of \"Hveðrungr's son\", avenging the death of his father.",
"title": "Attestations"
},
{
"paragraph_id": 5,
"text": "In the first of two stanzas mentioning Fenrir in Vafþrúðnismál Odin poses a question to the wise jötunn Vafþrúðnir:",
"title": "Attestations"
},
{
"paragraph_id": 6,
"text": "Much I have travelled, much have I tried out, much have I tested the Powers; from where will a sun come into the smooth heaven when Fenrir has assailed this one?",
"title": "Attestations"
},
{
"paragraph_id": 7,
"text": "In the stanza that follows Vafþrúðnir responds that Sól (here referred to as Álfröðull) will bear a daughter before Fenrir attacks her, and that this daughter shall continue the paths of her deceased mother through the heavens.",
"title": "Attestations"
},
{
"paragraph_id": 8,
"text": "In the Prose Edda, Fenrir is mentioned in three books: Gylfaginning, Skáldskaparmál and Háttatal.",
"title": "Attestations"
},
{
"paragraph_id": 9,
"text": "In chapter 13 of the Prose Edda book Gylfaginning, Fenrir is first mentioned in a stanza quoted from Völuspá. Fenrir is first mentioned in prose in chapter 25, where the enthroned figure of High tells Gangleri (described as King Gylfi in disguise) about the god Týr. High says that one example of Týr's bravery is that when the Æsir were luring Fenrir (referred to here as Fenrisúlfr) to place the fetter Gleipnir on the wolf, Týr placed his hand within the wolf's mouth as a pledge. This was done at Fenrir's own request because he did not trust that the Æsir would let him go. As a result, when the Æsir refused to release him, he bit off Týr's hand at a location \"now called the wolf-joint\" (the wrist), causing Týr to be one-handed and \"not considered to be a promoter of settlements between people.\"",
"title": "Attestations"
},
{
"paragraph_id": 10,
"text": "In chapter 34, High describes Loki, and says that Loki had three children with a woman named Angrboða located in the land of Jötunheimr; Fenrisúlfr, the serpent Jörmungandr, and the female being Hel. High continues that, once the gods found that these three children were being brought up in the land of Jötunheimr, and when the gods \"traced prophecies that from these siblings great mischief and disaster would arise for them\" the gods expected a lot of trouble from the three children, partially due to the nature of the mother of the children, yet worse so due to the nature of their father.",
"title": "Attestations"
},
{
"paragraph_id": 11,
"text": "High says that Odin sent the gods to gather the children and bring them to him. Upon their arrival, Odin threw Jörmungandr into \"that deep sea that lies round all lands\", and then threw Hel into Niflheim, and bestowed upon her authority over nine worlds. However, the Æsir brought up the wolf \"at home\", and only Týr had the courage to approach Fenrir, and give Fenrir food. The gods noticed that Fenrir was growing rapidly every day, and since all prophecies foretold that Fenrir was destined to cause them harm, the gods formed a plan. The gods prepared three fetters: The first, greatly strong, was called Leyding. They brought Leyding to Fenrir and suggested that the wolf try his strength with it. Fenrir judged that it was not beyond his strength, and so let the gods do what they wanted with it. At Fenrir's first kick the bind snapped, and Fenrir loosened himself from Leyding. The gods made a second fetter, twice as strong, and named it Dromi. The gods asked Fenrir to try the new fetter, and that should he break this feat of engineering, Fenrir would achieve great fame for his strength. Fenrir considered that, while the fetter was very strong, his strength had grown since he broke Leyding; and also that he would have to take some risks if he were to become famous. Fenrir allowed them to place the fetter.",
"title": "Attestations"
},
{
"paragraph_id": 12,
"text": "When the Æsir exclaimed that they were ready, Fenrir shook himself, knocked the fetter to the ground, strained hard, and kicking with his feet, snapped the fetter – breaking it into pieces that flew far into the distance. High says that, as a result, to \"loose from Leyding\" or to \"strike out of Dromi\" have become sayings for when something is achieved with great effort. The Æsir started to fear that they would not be able to bind Fenrir, and so Odin sent Freyr's messenger Skírnir down into the land of Svartálfaheimr to \"some dwarfs\" and had them make a fetter called Gleipnir. The dwarves constructed Gleipnir from six mythical ingredients. After an exchange between Gangleri and High, High continues that the fetter was smooth and soft as a silken ribbon, yet strong and firm. The messenger brought the ribbon to the Æsir, and they thanked him heartily for completing the task.",
"title": "Attestations"
},
{
"paragraph_id": 13,
"text": "The Æsir went out on to the lake Amsvartnir sent for Fenrir to accompany them, and continued to the island Lyngvi (Old Norse \"a place overgrown with heather\"). The gods showed Fenrir the silken fetter Gleipnir, told him to tear it, stated that it was much stronger than it appeared, passed it among themselves, used their hands to pull it, and yet it did not tear. However, they said that Fenrir would be able to tear it, to which Fenrir replied:",
"title": "Attestations"
},
{
"paragraph_id": 14,
"text": "It looks to me that with this ribbon as though I will gain no fame from it if I do tear apart such a slender band, but if it is made with art and trickery, then even if it does look thin, this band is not going on my legs.",
"title": "Attestations"
},
{
"paragraph_id": 15,
"text": "The Æsir said Fenrir would quickly tear apart a thin silken strip, noting that Fenrir earlier broke great iron binds, and added that if Fenrir wasn't able to break slender Gleipnir then Fenrir is nothing for the gods to fear, and as a result would be freed. Fenrir responded:",
"title": "Attestations"
},
{
"paragraph_id": 16,
"text": "If you bind me so that I am unable to release myself, then you will be standing by in such a way that I should have to wait a long time before I got any help from you. I am reluctant to have this band put on me. But rather than that you question my courage, let someone put his hand in my mouth as a pledge that this is done in good faith.",
"title": "Attestations"
},
{
"paragraph_id": 17,
"text": "With this statement, all of the Æsir look to one another, finding themselves in a dilemma. Everyone refused to place their hand in Fenrir's mouth until Týr put out his right hand and placed it into the wolf's jaws. When Fenrir kicked, Gleipnir caught tightly, and the more Fenrir struggled, the stronger the band grew. At this, everyone laughed, except Týr, who there lost his right hand. When the gods knew that Fenrir was fully bound, they took a cord called Gelgja (Old Norse \"fetter\") hanging from Gleipnir, inserted the cord through a large stone slab called Gjöll (Old Norse \"scream\"), and the gods fastened the stone slab deep into the ground. After, the gods took a great rock called Thviti (Old Norse \"hitter, batterer\"), and thrust it even further into the ground as an anchoring peg. Fenrir reacted violently; he opened his jaws very wide, and tried to bite the gods. Then the gods thrust a sword into his mouth. Its hilt touched the lower jaw and its point the upper one; by means of it the jaws of the wolf were spread apart and the wolf gagged. Fenrir \"howled horribly\", saliva ran from his mouth, and this saliva formed the river Ván (Old Norse \"hope\"). There Fenrir will lie until Ragnarök. Gangleri comments that Loki created a \"pretty terrible family\" though important, and asks why the Æsir did not just kill Fenrir there since they expected great malice from him. High replies that \"so greatly did the gods respect their holy places and places of sanctuary that they did not want to defile them with the wolf's blood even though the prophecies say that he will be the death of Odin.\"",
"title": "Attestations"
},
{
"paragraph_id": 18,
"text": "In chapter 38, High says that there are many men in Valhalla, and many more who will arrive, yet they will \"seem too few when the wolf comes\". In chapter 51, High foretells that as part of the events of Ragnarök, after Fenrir's son Sköll has swallowed the sun and his other son Hati Hróðvitnisson has swallowed the moon, the stars will disappear from the sky. The earth will shake violently, trees will be uprooted, mountains will fall, and all binds will snap – Fenrisúlfr will be free. Fenrisúlfr will go forth with his mouth opened wide, his upper jaw touching the sky and his lower jaw the earth, and flames will burn from his eyes and nostrils. Later, Fenrisúlfr will arrive at the field Vígríðr with his sibling Jörmungandr. With the forces assembled there, an immense battle will take place. During this, Odin will ride to fight Fenrisúlfr. During the battle, Fenrisúlfr will eventually swallow Odin, killing him, and Odin's son Víðarr will move forward and kick one foot into the lower jaw of the wolf. This foot will bear a legendary shoe \"for which the material has been collected throughout all time\". With one hand, Víðarr will take hold of the wolf's upper jaw and tear apart his mouth, killing Fenrisúlfr. High follows this prose description by citing various quotes from Völuspá in support, some of which mention Fenrir.",
"title": "Attestations"
},
{
"paragraph_id": 19,
"text": "In the Epilogue section of the Prose Edda book Skáldskaparmál, a euhemerized monologue equates Fenrisúlfr to Pyrrhus, attempting to rationalize that \"it killed Odin, and Pyrrhus could be said to be a wolf according to their religion, for he paid no respect to places of sanctuary when he killed the king in the temple in front of Thor's altar.\" In chapter 2, \"wolf's enemy\" is cited as a kenning for Odin as used by the 10th century skald Egill Skallagrímsson. In chapter 9, \"feeder of the wolf\" is given as a kenning for Týr and, in chapter 11, \"slayer of Fenrisúlfr\" is presented as a kenning for Víðarr. In chapter 50, a section of Ragnarsdrápa by the 9th century skald Bragi Boddason is quoted that refers to Hel, the being, as \"the monstrous wolf's sister\". In chapter 75, names for wargs and wolves are listed, including both \"Hróðvitnir\" and \"Fenrir\". \"Fenrir\" appears twice in verse as a common noun for a \"wolf\" or \"warg\" in chapter 58 of Skáldskaparmál, and in chapter 56 of the book Háttatal. Additionally, the name \"Fenrir\" can be found among a list of jötnar in chapter 75 of Skáldskaparmál.",
"title": "Attestations"
},
{
"paragraph_id": 20,
"text": "At the end of the Heimskringla saga Hákonar saga góða, the poem Hákonarmál by the 10th century skald Eyvindr skáldaspillir is presented. The poem is about the fall of King Haakon I of Norway; although he is Christian, he is taken by two valkyries to Valhalla, and is there received as one of the Einherjar. Towards the end of the poem, a stanza relates sooner will the bonds of Fenrir snap than as good a king as Haakon shall stand in his place:",
"title": "Attestations"
},
{
"paragraph_id": 21,
"text": "Unfettered will fare the Fenris Wolf and ravaged the realm of men, ere that cometh a kingly prince as good, to stand in his stead.",
"title": "Attestations"
},
{
"paragraph_id": 22,
"text": "Thorwald's Cross, a partially surviving runestone erected at Kirk Andreas on the Isle of Man, depicts a bearded human holding a spear downward at a wolf, his right foot in its mouth, while a large bird sits at his shoulder. Rundata dates it to 940, while Pluskowski dates it to the 11th century. This depiction has been interpreted as Odin, with a raven or eagle at his shoulder, being consumed by Fenrir at Ragnarök. On the reverse of the stone is another image parallel to it that has been described as Christ triumphing over Satan. These combined elements have led to the cross as being described as \"syncretic art\"; a mixture of pagan and Christian beliefs.",
"title": "Archaeological record"
},
{
"paragraph_id": 23,
"text": "The mid-11th century Gosforth Cross, located in Cumbria, England, has been described as depicting a combination of scenes from the Christian Judgement Day and the pagan Ragnarök. The cross features various figures depicted in Borre style, including a man with a spear facing a monstrous head, one of whose feet is thrust into the beast's forked tongue and on its lower jaw, while a hand is placed against its upper jaw, a scene interpreted as Víðarr fighting Fenrir. This depiction has been theorized as a metaphor for Christ's defeat of Satan.",
"title": "Archaeological record"
},
{
"paragraph_id": 24,
"text": "The 11th century Ledberg stone in Sweden, similarly to Thorwald's Cross, features a figure with his foot at the mouth of a four-legged beast, and this may also be a depiction of Odin being devoured by Fenrir at Ragnarök. Below the beast and the man is a depiction of a legless, helmeted man, with his arms in a prostrate position. The Younger Futhark inscription on the stone bears a commonly seen memorial dedication, but is followed by an encoded runic sequence that has been described as \"mysterious\", and \"an interesting magic formula which is known from all over the ancient Norse world\".",
"title": "Archaeological record"
},
{
"paragraph_id": 25,
"text": "If the images on the Tullstorp Runestone are correctly identified as depicting Ragnarök, then Fenrir is shown above the ship Naglfar.",
"title": "Archaeological record"
},
{
"paragraph_id": 26,
"text": "Meyer Schapiro theorizes a connection between the \"Hell Mouth\" that appears in medieval Christian iconography and Fenrir. According to Schapiro, \"the Anglo-Saxon taste for the Hell Mouth was perhaps influenced by the northern pagan myth of the Crack of Doom and the battle with the wolf, who devoured Odin.\"",
"title": "Archaeological record"
},
{
"paragraph_id": 27,
"text": "Scholars propose that a variety of objects from the archaeological record depict Týr. For example, a Migration Period gold bracteate from Trollhättan, Sweden, features a person receiving a bite on the hand from a beast, which may depict Týr and Fenrir. A Viking Age hogback in Sockburn, County Durham, North East England may depict Týr and Fenrir.",
"title": "Archaeological record"
},
{
"paragraph_id": 28,
"text": "In reference to Fenrir's presentation in the Prose Edda, Andy Orchard theorizes that \"the hound (or wolf)\" Garmr, Sköll, and Hati Hróðvitnisson were originally simply all Fenrir, stating that \"Snorri, characteristically, is careful to make distinctions, naming the wolves who devour the sun and moon as Sköll and Hati, and describing an encounter between Garm and Týr (who, one would have thought, might like to get his hand on Fenrir) at Ragnarök.\"",
"title": "Theories"
},
{
"paragraph_id": 29,
"text": "John Lindow says that it is unclear why the gods decide to raise Fenrir as opposed to his siblings Hel and Jörmungandr in Gylfaginning chapter 35, theorizing that it may be \"because Odin had a connection with wolves? Because Loki was Odin's blood brother?\" Referring to the same chapter, Lindow comments that neither of the phrases that Fenrir's binding result in have left any other traces. Lindow compares Fenrir's role to his father Loki and Fenrir's sibling Jörmungandr, in that they all spend time with the gods, are bound or cast out by them, return \"at the end of the current mythic order to destroy them, only to be destroyed himself as a younger generation of gods, one of them his slayer, survives into the new world order.\" He also points to Fenrir's binding as part of a recurring theme of the bound monster, where an enemy of the gods is bound, but destined to break free at Ragnarok.",
"title": "Theories"
},
{
"paragraph_id": 30,
"text": "Indo-European parallels have been proposed between myths of Fenrir and the Persian demon Ahriman. The Yashts refer to a story where Taxma Urupi rode Angra Mainyu as a horse for thirty years. An elaboration of this allusion is found only in a late Parsi commentary. The ruler Taxmoruw (Taxma Urupi) managed to lasso Ahriman (Angra Mainyu) and keep him tied up while taking him for a ride three times a day. After thirty years, Ahriman outwitted and swallowed Taxmoruw. In a sexual encounter with Ahriman, Jamshid, Taxmoruw's brother, inserted his hand into Ahriman's anus and pulled out his brother's corpse. His hand withered from contact with the diabolic innards. The suggested parallels with Fenrir myths are the binding of an evil being by a ruler figure and the subsequent swallowing of the ruler figure by the evil being (Odin and Fenrir), trickery involving the thrusting of a hand into a monster's orifice and the affliction of the inserted limb (Týr and Fenrir).",
"title": "Theories"
},
{
"paragraph_id": 31,
"text": "Ethologist Valerius Geist wrote that Fenrir's maiming and ultimate killing of Odin, who had previously nurtured him, was likely based on true experiences of wolf-behaviour, seeing as wolves are genetically encoded to rise up in the pack hierarchy and have, on occasion, been recorded to rebel against, and kill, their parents. Geist states that \"apparently, even the ancients knew that wolves may turn on their parents and siblings and kill them.\"",
"title": "Theories"
},
{
"paragraph_id": 32,
"text": "Fenrir appears in modern literature in the poem \"Om Fenrisulven og Tyr\" (1819) by Adam Gottlob Oehlenschläger (collected in Nordens Guder), the novel Der Fenriswolf by K. H. Strobl, and Til kamp mod dødbideriet (1974) by E. K. Reich and E. Larsen.",
"title": "Modern influence"
},
{
"paragraph_id": 33,
"text": "Fenrir has been depicted in the artwork Odin and Fenris (1909) and The Binding of Fenris (around 1900) by Dorothy Hardy, Odin und Fenriswolf and Fesselung des Fenriswolfe (1901) by Emil Doepler, and is the subject of the metal sculpture Fenrir by Arne Vinje Gunnerud located on the island of Askøy, Norway.",
"title": "Modern influence"
},
{
"paragraph_id": 34,
"text": "Fenrir is a highly durable mech option in Pixonic's game War Robots (released as \"Walking War Robots\" in 2014).",
"title": "Modern influence"
},
{
"paragraph_id": 35,
"text": "Fenrir appears as an antagonist in the 2020 videogame Assassin's Creed Valhalla, with a story adapted from the events found in Prose Edda.",
"title": "Modern influence"
},
{
"paragraph_id": 36,
"text": "Fenrir appears in the 2022 game God of War Ragnarök.",
"title": "Modern influence"
}
]
| Fenrir or Fenrisúlfr, also referred to as Hróðvitnir and Vánagandr, is a wolf in Norse mythology. Fenrir, along with Hel and the World Serpent, is a child of Loki and giantess Angrboða. He is attested in the Poetic Edda, compiled in the 13th century from earlier traditional sources, and the Prose Edda and Heimskringla, written in the 13th century by Snorri Sturluson. In both the Poetic Edda and Prose Edda, Fenrir is the father of the wolves Sköll and Hati Hróðvitnisson, is a son of Loki and is foretold to kill the god Odin during the events of Ragnarök, but will in turn be killed by Odin's son Víðarr. In the Prose Edda, additional information is given about Fenrir, including that, due to the gods' knowledge of prophecies foretelling great trouble from Fenrir and his rapid growth, the gods bound him and as a result Fenrir bit off the right hand of the god Týr. Depictions of Fenrir have been identified on various objects and scholarly theories have been proposed regarding Fenrir's relation to other canine beings in Norse mythology. Fenrir has been the subject of artistic depictions and he appears in literature. | 2001-08-17T17:15:27Z | 2023-12-16T09:36:28Z | [
"Template:Good article",
"Template:Use dmy dates",
"Template:Poemquote",
"Template:Blockquote",
"Template:Reflist",
"Template:Cite book",
"Template:See also",
"Template:Webarchive",
"Template:Refend",
"Template:NorseMythology",
"Template:Short description",
"Template:Cite web",
"Template:Cite magazine",
"Template:Refbegin",
"Template:Cite journal",
"Template:About",
"Template:ISBN",
"Template:Commons category"
]
| https://en.wikipedia.org/wiki/Fenrir |
10,974 | Final Fantasy | Final Fantasy is a fantasy anthology media franchise created by Hironobu Sakaguchi, which is owned, developed, and published by Square Enix (formerly Square). The franchise centers on a series of fantasy role-playing video games. The first game in the series was released in 1987, with 16 numbered main entries having been released to date.
The franchise has since branched into other video game genres such as tactical role-playing, action role-playing, massively multiplayer online role-playing, racing, third-person shooter, fighting, and rhythm, as well as branching into other media, including films, anime, manga, and novels.
Final Fantasy is mostly an anthology series with primary installments being stand-alone role-playing games, each with different settings, plots and main characters; however, the franchise is linked by several recurring elements, including game mechanics and recurring character names. Each plot centers on a particular group of heroes who are battling a great evil, but also explores the characters' internal struggles and relationships. Character names are frequently derived from the history, languages, pop culture, and mythologies of cultures worldwide. The mechanics of each game involve similar battle systems and maps.
Final Fantasy has been both critically and commercially successful. Several entries are regarded as some of the greatest video games, with the series selling more than 185 million copies worldwide, making it one of the best-selling video game franchises of all time. The series is well known for its innovation and visuals, such as the inclusion of full-motion videos and photorealistic character models, and for its music by Nobuo Uematsu. It has popularized many features now common in role-playing games, also popularizing the genre as a whole in markets outside Japan.
The first installment of the series was released in Japan on December 18, 1987. Subsequent games are numbered and given a story unrelated to previous games, so the numbers refer to volumes rather than to sequels. Many Final Fantasy games have been localized for markets in North America, Europe, and Australia on numerous video game consoles, personal computers (PC), and mobile phones. As of June 2023, the series includes the main installments from Final Fantasy to Final Fantasy XVI, as well as direct sequels and spin-offs, both released and confirmed as being in development. Most of the older games have been remade or re-released on multiple platforms.
Three Final Fantasy installments were released on the Nintendo Entertainment System (NES). Final Fantasy was released in Japan in 1987 and in North America in 1990. It introduced many concepts to the console RPG genre, and has since been remade on several platforms. Final Fantasy II, released in 1988 in Japan, has been bundled with Final Fantasy in several re-releases. The last of the NES installments, Final Fantasy III, was released in Japan in 1990, but was not released elsewhere until a Nintendo DS remake came out in 2006.
The Super Nintendo Entertainment System (SNES) also featured three installments of the main series, all of which have been re-released on several platforms. Final Fantasy IV was released in 1991; in North America, it was released as Final Fantasy II. It introduced the "Active Time Battle" system. Final Fantasy V, released in 1992 in Japan, was the first game in the series to spawn a sequel: a short anime series, Final Fantasy: Legend of the Crystals. Final Fantasy VI was released in Japan in 1994, titled Final Fantasy III in North America.
The PlayStation console saw the release of three main Final Fantasy games. Final Fantasy VII (1997) moved away from the two-dimensional (2D) graphics used in the first six games to three-dimensional (3D) computer graphics; the game features polygonal characters on pre-rendered backgrounds. It also introduced a more modern setting, a style that was carried over to the next game. It was also the second in the series to be released in Europe, with the first being Final Fantasy Mystic Quest. Final Fantasy VIII was published in 1999, and was the first to consistently use realistically proportioned characters and feature a vocal piece as its theme music. Final Fantasy IX, released in 2000, returned to the series' roots, by revisiting a more traditional Final Fantasy setting, rather than the more modern worlds of VII and VIII.
Three main installments, as well as one online game, were published for the PlayStation 2. Final Fantasy X (2001) introduced full 3D areas and voice acting to the series, and was the first to spawn a sub-sequel (Final Fantasy X-2, published in 2003). The first massively multiplayer online role-playing game (MMORPG) in the series, Final Fantasy XI, was released on the PS2 and PC in 2002, and later on the Xbox 360. It introduced real-time battles instead of random encounters. Final Fantasy XII, published in 2006, also includes real-time battles in large, interconnected playfields. The game is also the first in the main series to utilize a world used in a previous game, namely the land of Ivalice, which was previously featured in Final Fantasy Tactics and Vagrant Story.
In 2009, Final Fantasy XIII was released in Japan, and in North America and Europe the following year, for PlayStation 3 and Xbox 360. It is the flagship installment of the Fabula Nova Crystallis Final Fantasy series and became the first mainline game to spawn two sub-sequels (XIII-2 and Lightning Returns). It was also the first game released in Chinese and high definition along with being released on two consoles at once. Final Fantasy XIV, a MMORPG, was released worldwide on Microsoft Windows in 2010, but it received heavy criticism when it was launched, prompting Square Enix to rerelease the game as Final Fantasy XIV: A Realm Reborn, this time to the PlayStation 3 as well, in 2013. Final Fantasy XV is an action role-playing game that was released for PlayStation 4 and Xbox One in 2016. Originally a XIII spin-off titled Versus XIII, XV uses the mythos of the Fabula Nova Crystallis series, although in many other respects the game stands on its own and has since been distanced from the series by its developers. The latest mainline entry, Final Fantasy XVI, was released in 2023 for PlayStation 5.
Final Fantasy has spawned numerous spin-offs and metaseries. Several are, in fact, not Final Fantasy games, but were rebranded for North American release. Examples include the SaGa series, rebranded The Final Fantasy Legend, and its two sequels, Final Fantasy Legend II and III. Final Fantasy Mystic Quest was specifically developed for a United States audience, and Final Fantasy Tactics is a tactical RPG that features many references and themes found in the series. The spin-off Chocobo series, Crystal Chronicles series, and Kingdom Hearts series also include multiple Final Fantasy elements. In 2003, the Final Fantasy series' first sub-sequel, Final Fantasy X-2, was released. Final Fantasy XIII was originally intended to stand on its own, but the team wanted to explore the world, characters and mythos more, resulting in the development and release of two sequels in 2011 and 2013 respectively, creating the series' first official trilogy. Dissidia Final Fantasy, a fighting game that features heroes and villains from the first ten games of the main series, was released in 2009. It was followed by a prequel in 2011, a sequel in 2015 and a mobile spin-off in 2017. Other spin-offs have taken the form of subseries—Compilation of Final Fantasy VII, Ivalice Alliance, and Fabula Nova Crystallis Final Fantasy. In 2022, Square Enix released an action role-playing title, Stranger of Paradise: Final Fantasy Origin, developed in collaboration with Team Ninja, which takes place in an alternate, reimagined reality based on the setting of the original Final Fantasy game, depicting a prequel story that explores the origins of the antagonist Chaos and the emergence of the four Warriors of Light. Enhanced 3D remakes of Final Fantasy III and IV were released in 2006 and 2007 respectively. The first installment of the Final Fantasy VII Remake project was released on the PlayStation 4 in 2020.
Square Enix has expanded the Final Fantasy series into various media. Multiple anime and computer-generated imagery (CGI) films have been produced that are based either on individual Final Fantasy games or on the series as a whole. The first was an original video animation (OVA), Final Fantasy: Legend of the Crystals, a sequel to Final Fantasy V. The story was set in the same world as the game, although 200 years in the future. It was released as four 30-minute episodes, first in Japan in 1994 and later in the United States by Urban Vision in 1998. In 2001, Square Pictures released its first feature film, Final Fantasy: The Spirits Within. The film is set on a future Earth invaded by alien life forms. The Spirits Within was the first animated feature to seriously attempt to portray photorealistic CGI humans, but was considered a box office bomb and garnered mixed reviews.
A 25-episode anime television series, Final Fantasy: Unlimited, was released in 2001 based on the common elements of the Final Fantasy series. It was broadcast in Japan by TV Tokyo and released in North America by ADV Films.
In 2005, Final Fantasy VII: Advent Children, a feature length direct-to-DVD CGI film, and Last Order: Final Fantasy VII, a non-canon OVA, were released as part of the Compilation of Final Fantasy VII. Advent Children was animated by Visual Works, which helped the company create CG sequences for the games. The film, unlike The Spirits Within, became a commercial success. Last Order, on the other hand, was released in Japan in a special DVD bundle package with Advent Children. Last Order sold out quickly and was positively received by Western critics, though fan reaction was mixed over changes to established story scenes.
Two animated tie-ins for Final Fantasy XV were released as part of a larger multimedia project dubbed the Final Fantasy XV Universe. Brotherhood is a series of five 10-to-20-minute-long episodes developed by A-1 Pictures and Square Enix detailing the backstories of the main cast. Kingsglaive, a CGI film released prior to the game in Summer 2016, is set during the game's opening and follows new and secondary characters. In 2019, Square Enix released a short anime, produced by Satelight Inc, called Final Fantasy XV: Episode Ardyn – Prologue on their YouTube channel which acts as the background story for the final piece of DLC for Final Fantasy XV giving insight into Ardyn's past.
Square Enix also released Final Fantasy XIV: Dad of Light in 2017, an 8-episode Japanese soap opera featuring a mix of live-action scenes and Final Fantasy XIV gameplay footage.
As of June 2019, Sony Pictures Television is working on the first-ever live-action adaptation of the series with Hivemind and Square Enix. Jason F. Brown, Sean Daniel, and Dinesh Shamdasani for Hivemind are the producers, while Ben Lustig and Jake Thornton are attached as writers and executive producers for the series.
Several video games have either been adapted into or have had spin-offs in the form of manga and novels. The first was the novelization of Final Fantasy II in 1989, and was followed by a manga adaptation of Final Fantasy III in 1992. The past decade has seen an increase in the number of non-video game adaptations and spin-offs. Final Fantasy: The Spirits Within has been adapted into a novel, the spin-off game Final Fantasy Crystal Chronicles has been adapted into a manga, and Final Fantasy XI had a novel and manga set in its continuity. Seven novellas based on the Final Fantasy VII universe have also been released. The Final Fantasy: Unlimited story was partially continued in novels and a manga after the anime series ended. The Final Fantasy X and XIII series have also had novellas and audio dramas released. Final Fantasy Tactics Advance has been adapted into a radio drama, and Final Fantasy: Unlimited has received a radio drama sequel.
A trading card game named Final Fantasy Trading Card Game is produced by Square Enix and Hobby Japan, first released in Japan in 2012 with an English version in 2016. The game has been compared to Magic: the Gathering, and a tournament circuit for the game also takes place.
Although most Final Fantasy installments are independent, many gameplay elements recur throughout the series. Most games contain elements of fantasy and science fiction and feature recycled names often inspired from various cultures' history, languages and mythology, including Asian, European, and Middle-Eastern. Examples include weapon names like Excalibur and Masamune—derived from Arthurian legend and the Japanese swordsmith Masamune respectively—as well as the spell names Holy, Meteor, and Ultima. Beginning with Final Fantasy IV, the main series adopted its current logo style that features the same typeface and an emblem designed by Japanese artist Yoshitaka Amano. The emblem relates to a game's plot and typically portrays a character or object in the story. Subsequent remakes of the first three games have replaced the previous logos with ones similar to the rest of the series.
The central conflict in many Final Fantasy games focuses on a group of characters battling an evil, and sometimes ancient, antagonist that dominates the game's world. Stories frequently involve a sovereign state in rebellion, with the protagonists taking part in the rebellion. The heroes are often destined to defeat the evil, and occasionally gather as a direct result of the antagonist's malicious actions. Another staple of the series is the existence of two villains; the main villain is not always who it appears to be, as the primary antagonist may actually be subservient to another character or entity. The main antagonist introduced at the beginning of the game is not always the final enemy, and the characters must continue their quest beyond what appears to be the final fight.
Stories in the series frequently emphasize the internal struggles, passions, and tragedies of the characters, and the main plot often recedes into the background as the focus shifts to their personal lives. Games also explore relationships between characters, ranging from love to rivalry. Other recurring situations that drive the plot include amnesia, a hero corrupted by an evil force, mistaken identity, and self-sacrifice. Magical orbs and crystals are recurring in-game items that are frequently connected to the themes of the games' plots. Crystals often play a central role in the creation of the world, and a majority of the Final Fantasy games link crystals and orbs to the planet's life force. As such, control over these crystals drives the main conflict. The classical elements are also a recurring theme in the series related to the heroes, villains, and items. Other common plot and setting themes include the Gaia hypothesis, an apocalypse, and conflicts between advanced technology and nature.
The series features a number of recurring character archetypes. Most famously, every game since Final Fantasy II, including subsequent remakes of the original Final Fantasy, features a character named Cid. Cid's appearance, personality, goals, and role in the game (non-playable ally, party member, villain) vary dramatically. However, two characteristics many versions of Cid have in common are being a scientist or engineer, and being tied in some way to an airship the party eventually acquires. Every Cid has at least one of these two traits.
Biggs and Wedge, inspired by two Star Wars characters of the same name, appear in numerous games as minor characters, sometimes as comic relief. The later games in the series feature several males with effeminate characteristics. Recurring creatures include Chocobos, Moogles, and Cactuars. Chocobos are large, often flightless birds that appear in several installments as a means of long-distance travel for characters. Moogles are white, stout creatures resembling teddy bears with wings and a single antenna. They serve different roles in games including mail delivery, weaponsmiths, party members, and saving the game. Cactuars are anthropomorphic cacti with haniwa-like faces presented in a running or dashing pose. They usually appear as recurring enemy units, and also as summoned allies or friendly non-player characters in certain titles. Chocobo and Moogle appearances are often accompanied by specific musical themes that have been arranged differently for separate games.
In Final Fantasy games, players command a party of characters as they progress through the game's story by exploring the game world and defeating enemies. Enemies are typically encountered randomly through exploring, a trend which changed in Final Fantasy XI and XII. The player issues combat orders—like "Fight", "Magic", and "Item"—to individual characters via a menu-driven interface while engaging in battles. Throughout the series, the games have used different battle systems. Prior to Final Fantasy XI, battles were turn-based with the protagonists and antagonists on different sides of the battlefield. Final Fantasy IV introduced the "Active Time Battle" (ATB) system that augmented the turn-based nature with a perpetual time-keeping system. Designed by Hiroyuki Ito, it injected urgency and excitement into combat by requiring the player to act before an enemy attacks, and was used until Final Fantasy X, which implemented the "Conditional Turn-Based" (CTB) system. This new system returned to the previous turn-based system, but added nuances to offer players more challenge. Final Fantasy XI adopted a real-time battle system where characters continuously act depending on the issued command. Final Fantasy XII continued this gameplay with the "Active Dimension Battle" system. Final Fantasy XIII's combat system, designed by the same man who worked on X, was meant to have an action-oriented feel, emulating the cinematic battles in Final Fantasy VII: Advent Children. Final Fantasy XV introduces a new "Open Combat" system. Unlike previous battle systems in the franchise, the "Open Combat" system (OCS) allows players to take on a fully active battle scenario, allowing for free range attacks and movement, giving a much more fluid feel of combat. This system also incorporates a "Tactical" Option during battle, which pauses active battle to allow use of items.
Like most RPGs, the Final Fantasy installments use an experience level system for character advancement, in which experience points are accumulated by killing enemies. Character classes, specific jobs that enable unique abilities for characters, are another recurring theme. Introduced in the first game, character classes have been used differently in each game. Some restrict a character to a single job to integrate it into the story, while other games feature dynamic job systems that allow the player to choose from multiple classes and switch throughout the game. Though used heavily in many games, such systems have become less prevalent in favor of characters that are more versatile; characters still match an archetype, but are able to learn skills outside their class.
Magic is another common RPG element in the series. The method by which characters gain magic varies between installments, but is generally divided into classes organized by color: "White magic", which focuses on spells that assist teammates; "Black magic", which focuses on harming enemies; "Red magic", which is a combination of white and black magic; "Blue magic", which mimics enemy attacks; and "Green magic", which focuses on applying status effects to either allies or enemies. Other types of magic frequently appear such as "Time magic", focusing on the themes of time, space, and gravity; and "Summoning magic", which evokes legendary creatures to aid in battle and is a feature that has persisted since Final Fantasy III. Summoned creatures are often referred to by names like "Espers" or "Eidolons" and have been inspired by mythologies from Arabic, Hindu, Norse, and Greek cultures.
Different means of transportation have appeared through the series. The most common is the airship for long range travel, accompanied by chocobos for travelling short distances, but others include sea and land vessels. Following Final Fantasy VII, more modern and futuristic vehicle designs have been included.
In the mid-1980s, Square entered the Japanese video game industry with simple RPGs, racing games, and platformers for Nintendo's Famicom Disk System. In 1987, Square designer Hironobu Sakaguchi chose to create a new fantasy role-playing game for the cartridge-based NES, and drew inspiration from popular fantasy games: Enix's Dragon Quest, Nintendo's The Legend of Zelda, and Origin Systems's Ultima series. Though often attributed to the company allegedly facing bankruptcy, Sakaguchi explained that the game was his personal last-ditch effort in the game industry and that its title, Final Fantasy, stemmed from his feelings at the time; had the game not sold well, he would have quit the business and gone back to college. Despite his explanation, publications have also attributed the name to the company's hopes that the project would solve its financial troubles. In 2015, Sakaguchi explained the name's origin: the team wanted a title that would abbreviate to "FF", which would sound good in Japanese. The name was originally going to be Fighting Fantasy, but due to concerns over trademark conflicts with the roleplaying gamebook series of the same name, they needed to settle for something else. As the English word "Final" was well-known in Japan, Sakaguchi settled on that. According to Sakaguchi, any title that created the "FF" abbreviation would have done.
The game indeed reversed Square's lagging fortunes, and it became the company's flagship franchise. Following the success, Square immediately developed a second installment. Because Sakaguchi assumed Final Fantasy would be a stand-alone game, its story was not designed to be expanded by a sequel. The developers instead chose to carry over only thematic similarities from its predecessor, while some of the gameplay elements, such as the character advancement system, were overhauled. This approach has continued throughout the series; each major Final Fantasy game features a new setting, a new cast of characters, and an upgraded battle system. Video game writer John Harris attributed the concept of reworking the game system of each installment to Nihon Falcom's Dragon Slayer series, with which Square was previously involved as a publisher. The company regularly released new games in the main series. However, the gaps between the releases of XI (2002), XII (2006), and XIII (2009) were much longer than those between previous games. Following Final Fantasy XIV, Square Enix released Final Fantasy games either annually or biennially. This switch was to mimic the development cycles of Western games in the Call of Duty, Assassin's Creed and Battlefield series, as well as to maintain fan interest.
For the original Final Fantasy, Sakaguchi required a larger production team than Square's previous games. He began crafting the game's story while experimenting with gameplay ideas. Once the gameplay system and game world size were established, Sakaguchi integrated his story ideas into the available resources. A different approach has been taken for subsequent games; the story is completed first and the game built around it. Designers have never been restricted by consistency, though most feel each game should have a minimum number of common elements. The development teams strive to create completely new worlds for each game, and avoid making new games too similar to previous ones. Game locations are conceptualized early in development and design details like building parts are fleshed out as a base for entire structures.
The first five games were directed by Sakaguchi, who also provided the original concepts. He drew inspiration for game elements from anime films by Hayao Miyazaki; series staples like the airships and chocobos are inspired by elements in Castle in the Sky and Nausicaä of the Valley of the Wind, respectively. Sakaguchi served as a producer for subsequent games until he left Square in 2001. Yoshinori Kitase took over directing the games until Final Fantasy VIII, and has been followed by a new director for each new game. Hiroyuki Ito designed several gameplay systems, including Final Fantasy V's "Job System", Final Fantasy VIII's "Junction System" and the Active Time Battle concept, which was used from Final Fantasy IV until IX. In designing the Active Time Battle system, Ito drew inspiration from Formula One racing; he thought it would be interesting if character types had different speeds after watching race cars pass each other. Ito also co-directed Final Fantasy VI with Kitase. Kenji Terada was the scenario writer for the first three games; Kitase took over as scenario writer for Final Fantasy V through VII. Kazushige Nojima became the series' primary scenario writer from Final Fantasy VII until his resignation in October 2003; he has since formed his own company, Stellavista. Nojima partially or completely wrote the stories for Final Fantasy VII, VIII, X, and its sequel X-2. He also worked as the scenario writer for the spin-off series, Kingdom Hearts. Daisuke Watanabe co-wrote the scenarios for Final Fantasy X and XII, and was the main writer for the XIII games.
Artistic design, including character and monster creations, was handled by Japanese artist Yoshitaka Amano from Final Fantasy through Final Fantasy VI. Amano also handled title logo designs for all of the main series and the image illustrations from Final Fantasy VII onward. Tetsuya Nomura was chosen to replace Amano because Nomura's designs were more adaptable to 3D graphics. He worked with the series from Final Fantasy VII through X, then came back for XIII, and for the basic design of XV. For Final Fantasy IX character designs were handled by Shukō Murase, Toshiyuki Itahana, and Shin Nagasawa. For Final Fantasy XV, Roberto Ferrari was responsible for the character design. Nomura is also the character designer of the Kingdom Hearts series, Compilation of Final Fantasy VII, and Fabula Nova Crystallis: Final Fantasy. Other designers include Nobuyoshi Mihara and Akihiko Yoshida. Mihara was the character designer for Final Fantasy XI, and Yoshida served as character designer for Final Fantasy Tactics, the Square-produced Vagrant Story, and Final Fantasy XII.
Because of graphical limitations, the first games on the NES feature small sprite representations of the leading party members on the main world screen. Battle screens use more detailed, full versions of characters in a side-view perspective. This practice was used until Final Fantasy VI, which uses detailed versions for both screens. The NES sprites are 26 pixels high and use a color palette of 4 colors. Six frames of animation are used to depict different character statuses like "healthy" and "fatigued". The SNES installments use updated graphics and effects, as well as higher quality audio than in previous games, but are otherwise similar to their predecessors in basic design. The SNES sprites are 2 pixels shorter, but have larger palettes and feature more animation frames: 11 colors and 40 frames respectively. The upgrade allowed designers to have characters be more detailed in appearance and express more emotions. The first game includes non-player characters (NPCs) the player can interact with, but they are mostly static in-game objects. Beginning with the second game, Square used predetermined pathways for NPCs to create more dynamic scenes that include comedy and drama.
In 1995, Square showed an interactive SGI technical demonstration of Final Fantasy VI for the then next generation of consoles. The demonstration used Silicon Graphics's prototype Nintendo 64 workstations to create 3D graphics. Fans believed the demo was of a new Final Fantasy game for the Nintendo 64 console. 1997 saw the release of Final Fantasy VII for the Sony PlayStation. The switch was due to a dispute with Nintendo over its use of faster but more expensive cartridges, as opposed to the slower and cheaper, but much higher capacity compact discs used on rival systems. VII introduced 3D graphics with fully pre-rendered backgrounds. It was because of this switch to 3D that a CD-ROM format was chosen over a cartridge format. The switch also led to increased production costs and a greater subdivision of the creative staff for VII and subsequent 3D games in the series.
Starting with Final Fantasy VIII, the series adopted a more photo-realistic look. Like VII, full motion video (FMV) sequences would have video playing in the background, with the polygonal characters composited on top. Final Fantasy IX returned to the more stylized design of earlier games in the series, although it still maintained, and in many cases slightly upgraded, most of the graphical techniques used in the previous two games. Final Fantasy X was released on the PlayStation 2, and used the more powerful hardware to render graphics in real-time instead of using pre-rendered material to obtain a more dynamic look; the game features full 3D environments, rather than have 3D character models move about pre-rendered backgrounds. It is also the first Final Fantasy game to introduce voice acting, occurring throughout the majority of the game, even with many minor characters. This aspect added a whole new dimension of depth to the character's reactions, emotions, and development.
Taking a temporary divergence, Final Fantasy XI used the PlayStation 2's online capabilities as an MMORPG. Initially released for the PlayStation 2 with a PC port arriving six months later, XI was also released on the Xbox 360 nearly four years after its original release in Japan. This was the first Final Fantasy game to use a free rotating camera. Final Fantasy XII was released in 2006 for the PlayStation 2 and uses only half as many polygons as Final Fantasy X, in exchange for more advanced textures and lighting. It also retains the freely rotating camera from XI. Final Fantasy XIII and XIV both make use of Crystal Tools, a middleware engine developed by Square Enix.
Final Fantasy games feature a variety of music, and frequently reuse themes. Most of the games open with a piece called "Prelude", which has evolved from a simple, 2-voice arpeggio in the early games to a complex, melodic arrangement in recent installments. Victories in combat are often accompanied by a victory fanfare, a theme that has become one of the most recognized pieces of music in the series. The basic theme that accompanies Chocobo appearances has been rearranged in a different musical style for most installments. Recurring secret bosses such as Gilgamesh are also used as opportunities to revive their musical themes.
A theme known as the "Final Fantasy Main Theme" or "March", originally featured in the first game, often accompanies the ending credits. Although leitmotifs are common in the more character-driven installments, theme music is typically reserved for main characters and recurring plot elements.
Nobuo Uematsu was the primary composer of the Final Fantasy series until his resignation from Square Enix in November 2004. Other notable composers who have worked on main entries in the series include Masashi Hamauzu, Hitoshi Sakimoto, and Yoko Shimomura. Uematsu was allowed to create much of the music with little direction from the production staff. Sakaguchi, however, would request pieces to fit specific game scenes including battles and exploring different areas of the game world. Once a game's major scenarios were completed, Uematsu would begin writing the music based on the story, characters, and accompanying artwork. He started with a game's main theme, and developed other pieces to match its style. In creating character themes, Uematsu read the game's scenario to determine the characters' personality. He would also ask the scenario writer for more details to scenes he was unsure about. Technical limitations were prevalent in earlier games; Sakaguchi would sometimes instruct Uematsu to only use specific notes. It was not until Final Fantasy IV on the SNES that Uematsu was able to add more subtlety to the music.
Overall, the Final Fantasy series has been critically acclaimed and commercially successful, though each installment has seen different levels of success. The series has seen a steady increase in total sales; it sold over 10 million software units worldwide by early 1996, more than 25 million units by 1999, more than 33 million units and nearly $1 billion revenue (between $1.7–2.6 billion adjusted for inflation) by 2001, 45 million units by August 2003, 63 million by December 2005, and 85 million by July 2008. By June 2011, the series had sold over 100 million units, and by March 2014, it had sold over 110 million units. Its high sales numbers have ranked it as one of the best-selling video game franchises in the industry; in January 2007, the series was listed as number three, and later in July as number four. As of 2019, the series had sold over 149 million units worldwide. As of October 2021, the series had sold over 164 million units worldwide. By March 2022, the series reached cumulative global physical and digital sales of 173 million units.
Several games within the series have become best-selling games. At the end of 2007, the seventh, eighth, and ninth best-selling RPGs were Final Fantasy VII, VIII, and X respectively. The original Final Fantasy VII has sold over 14.1 million copies worldwide, earning it the position of the best-selling Final Fantasy game. Within two days of Final Fantasy VIII's North American release on September 9, 1999, it became the top-selling video game in the United States, a position it held for more than three weeks. Final Fantasy X sold over 1.4 million Japanese units in pre-orders alone, which set a record for the fastest-selling console RPG. The MMORPG, Final Fantasy XI, reached over 200,000 active daily players in March 2006 and had reached over half a million subscribers by July 2007. Final Fantasy XII sold more than 1.7 million copies in its first week in Japan. By November 6, 2006—one week after its release—XII had shipped approximately 1.5 million copies in North America. Final Fantasy XIII became the fastest-selling game in the franchise, and sold one million units on its first day of sale in Japan. Final Fantasy XIV: A Realm Reborn, in comparison to its predecessor, was a runaway success, originally suffering from servers being overcrowded, and eventually gaining over one million unique subscribers within two months of its launch.
The series has received critical acclaim for the quality of its visuals and soundtracks. In 1996, Next Generation ranked the series collectively as the 17th best game of all time, speaking very highly of its graphics, music and stories. In 1999, Next Generation listed the Final Fantasy series as number 16 on their "Top 50 Games of All Time", commenting that "by pairing state-of-the-art technology with memorable, sometimes shamelessly melodramatic storylines, the series has successfully outlasted its competitors [...] and improved with each new installation". It was awarded a star on the Walk of Game in 2006, making it the first franchise to win a star on the event (other winners were individual games, not franchises). WalkOfGame.com commented that the series has sought perfection as well as having been a risk taker in innovation. In 2006, GameFAQs held a contest for the best video game series ever, with Final Fantasy finishing as the runner-up to The Legend of Zelda. In a 2008 public poll held by The Game Group plc, Final Fantasy was voted the best game series, with five games appearing in their "Greatest Games of All Time" list.
Many Final Fantasy games have been included in various lists of top games. Several games have been listed on multiple IGN "Top Games" lists. Twelve games were listed on Famitsu's 2006 "Top 100 Favorite Games of All Time", four of which were in the top ten, with Final Fantasy X and VII coming first and second, respectively. The series holds seven Guinness World Records in the Guinness World Records Gamer's Edition 2008, which include the "Most Games in an RPG Series" (13 main games, seven enhanced games, and 32 spin-off games), the "Longest Development Period" (the production of Final Fantasy XII took five years), and the "Fastest-Selling Console RPG in a Single Day" (Final Fantasy X). The 2009 edition listed two games from the series among the top 50 consoles games: Final Fantasy XII at number 8 and VII at number 20. In 2018, Final Fantasy VII was inducted as a member of the World Video Game Hall of Fame.
However, the series has garnered some criticism. IGN has commented that the menu system used by the games is a major detractor for many and is a "significant reason why they haven't touched the series". The site has also heavily criticized the use of random encounters in the series' battle systems. IGN further stated the various attempts to bring the series into film and animation have either been unsuccessful, unremarkable, or did not live up to the standards of the games. In 2007, Edge criticized the series for a number of related games that include the phrase "Final Fantasy" in their titles, which are considered inferior to previous games. It also commented that with the departure of Hironobu Sakaguchi, the series might be in danger of growing stale.
Several individual Final Fantasy games have garnered extra attention; some for their positive reception and others for their negative reception. Final Fantasy VII topped GamePro's "26 Best RPGs of All Time" list, as well as GameFAQs' "Best Game Ever" audience polls in 2004 and 2005. Despite the success of VII, it is sometimes criticized as being overrated. In 2003, GameSpy listed it as the seventh most overrated game of all time, while IGN presented views from both sides. Dirge of Cerberus: Final Fantasy VII shipped 392,000 units in its first week of release, but received review scores that were much lower than those of other Final Fantasy games. A delayed, negative review after the Japanese release of Dirge of Cerberus from Japanese gaming magazine Famitsu hinted at a controversy between the magazine and Square Enix. Though Final Fantasy: The Spirits Within was praised for its visuals, the plot was criticized and the film was considered a box office bomb. Final Fantasy Crystal Chronicles for the GameCube received overall positive review scores, but reviews stated that the use of Game Boy Advances as controllers was a big detractor. The predominantly negative reception of the original version of Final Fantasy XIV caused then-president Yoichi Wada to issue an official apology during a Tokyo press conference, stating that the brand had been "greatly damaged" by the game's reception.
Various video game publications have created rankings of the mainline Final Fantasy games. In the table below, the lower the number given, the better the game is in the view of the respective publication. By way of comparison, the ratings provided by Famitsu magazine and the review aggregator Metacritic are also given; in these rows, higher numbers indicate better reviews. Note that Metacritic ratings up until Final Fantasy VII largely represent retrospective reviews from online websites years after their initial release, rather than contemporary reviews from video game magazines at the time of their initial release.
Final Fantasy has been very influential in the history of video games and game mechanics. Final Fantasy IV is considered a milestone for the genre, introducing a dramatic storyline with a strong emphasis on character development and personal relationships. In 1992, Nintendo's Shigeru Miyamoto noted the impact of Final Fantasy on Japanese role-playing games, stating Final Fantasy's "interactive cinematic approach" with an emphasis on "presentation and graphics" was gradually becoming "the most common style" of Japanese RPG at the time. Final Fantasy VII, having been the first title of the series to be officially released in the PAL territories of Europe and Oceania, is credited as having the largest industry impact of the series, and with allowing console role-playing games to gain global mass-market appeal. VII is considered to be one of the most important and influential video games of all time.
The series affected Square's business on several levels. The commercial failure of Final Fantasy: The Spirits Within resulted in hesitation and delays from Enix during merger discussions with Square. Square's decision to produce games exclusively for the Sony PlayStation—a move followed by Enix's decision with the Dragon Quest series—severed their relationship with Nintendo. Final Fantasy games were absent from Nintendo consoles, specifically the Nintendo 64, for seven years. Critics attribute the switch of strong third-party games like the Final Fantasy and Dragon Quest games to Sony's PlayStation, and away from the Nintendo 64, as one of the reasons behind PlayStation being the more successful of the two consoles. The release of the Nintendo GameCube, which used optical disc media, in 2001 caught the attention of Square. To produce games for the system, Square created the shell company The Game Designers Studio and released Final Fantasy Crystal Chronicles, which spawned its own metaseries within the main franchise. Final Fantasy XI's lack of an online method of subscription cancellation prompted the creation of legislation in Illinois that requires internet gaming services to provide such a method to the state's residents.
The series' popularity has resulted in its appearance and reference in numerous facets of popular culture like anime, TV series, and webcomics. Music from the series has permeated into different areas of culture. Final Fantasy IV's "Theme of Love" was integrated into the curriculum of Japanese school children and has been performed live by orchestras and metal bands. In 2003, Uematsu co-founded The Black Mages, an instrumental rock group independent of Square that has released albums of arranged Final Fantasy tunes. Bronze medalists Alison Bartosik and Anna Kozlova performed their synchronized swimming routine at the 2004 Summer Olympics to music from Final Fantasy VIII. Many of the soundtracks have also been released for sale. Numerous companion books, which normally provide in-depth game information, have been published. In Japan, they are published by Square and are called Ultimania books.
The series has inspired numerous game developers. Fable creator Peter Molyneux considers Final Fantasy VII to be the RPG that "defined the genre" for him. BioWare founder Greg Zeschuk cited Final Fantasy VII as "the first really emotionally engaging game" he played and said it had "a big impact" on BioWare's work. The Witcher 3 senior environmental artist Jonas Mattsson cited Final Fantasy as "a huge influence" and said it was "the first RPG" he played through. Mass Effect art director Derek Watts cited Final Fantasy: The Spirits Within as a major influence on the visual design and art direction of the series. BioWare senior product manager David Silverman cited Final Fantasy XII's gambit system as an influence on the gameplay of Dragon Age: Origins. Ubisoft Toronto creative director Maxime Beland cited the original Final Fantasy as a major influence on him. Media Molecule's Constantin Jupp credited Final Fantasy VII with getting him into game design. Tim Schafer also cited Final Fantasy VII as one of his favourite games of all time. | [
{
"paragraph_id": 0,
"text": "Final Fantasy is a fantasy anthology media franchise created by Hironobu Sakaguchi which is owned and developed and published by Square Enix (formerly Square). The franchise centers on a series of fantasy role-playing video games. The first game in the series was released in 1987, with 16 numbered main entries having been released to date.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The franchise has since branched into other video game genres such as tactical role-playing, action role-playing, massively multiplayer online role-playing, racing, third-person shooter, fighting, and rhythm, as well as branching into other media, including films, anime, manga, and novels.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Final Fantasy is mostly an anthology series with primary installments being stand-alone role-playing games, each with different settings, plots and main characters; however, the franchise is linked by several recurring elements, including game mechanics and recurring character names. Each plot centers on a particular group of heroes who are battling a great evil, but also explores the characters' internal struggles and relationships. Character names are frequently derived from the history, languages, pop culture, and mythologies of cultures worldwide. The mechanics of each game involve similar battle systems and maps.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Final Fantasy has been both critically and commercially successful. Several entries are regarded as some of the greatest video games, with the series selling more than 185 million copies worldwide, making it one of the best-selling video game franchises of all time. The series is well known for its innovation, visuals, such as the inclusion of full-motion videos, photorealistic character models, and music by Nobuo Uematsu. It has popularized many features now common in role-playing games, also popularizing the genre as a whole in markets outside Japan.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The first installment of the series was released in Japan on December 18, 1987. Subsequent games are numbered and given a story unrelated to previous games, so the numbers refer to volumes rather than to sequels. Many Final Fantasy games have been localized for markets in North America, Europe, and Australia on numerous video game consoles, personal computers (PC), and mobile phones. As of June 2023, the series includes the main installments from Final Fantasy to Final Fantasy XVI, as well as direct sequels and spin-offs, both released and confirmed as being in development. Most of the older games have been remade or re-released on multiple platforms.",
"title": "Media"
},
{
"paragraph_id": 5,
"text": "Three Final Fantasy installments were released on the Nintendo Entertainment System (NES). Final Fantasy was released in Japan in 1987 and in North America in 1990. It introduced many concepts to the console RPG genre, and has since been remade on several platforms. Final Fantasy II, released in 1988 in Japan, has been bundled with Final Fantasy in several re-releases. The last of the NES installments, Final Fantasy III, was released in Japan in 1990, but was not released elsewhere until a Nintendo DS remake came out in 2006.",
"title": "Media"
},
{
"paragraph_id": 6,
"text": "The Super Nintendo Entertainment System (SNES) also featured three installments of the main series, all of which have been re-released on several platforms. Final Fantasy IV was released in 1991; in North America, it was released as Final Fantasy II. It introduced the \"Active Time Battle\" system. Final Fantasy V, released in 1992 in Japan, was the first game in the series to spawn a sequel: a short anime series, Final Fantasy: Legend of the Crystals. Final Fantasy VI was released in Japan in 1994, titled Final Fantasy III in North America.",
"title": "Media"
},
{
"paragraph_id": 7,
"text": "The PlayStation console saw the release of three main Final Fantasy games. Final Fantasy VII (1997) moved away from the two-dimensional (2D) graphics used in the first six games to three-dimensional (3D) computer graphics; the game features polygonal characters on pre-rendered backgrounds. It also introduced a more modern setting, a style that was carried over to the next game. It was also the second in the series to be released in Europe, with the first being Final Fantasy Mystic Quest. Final Fantasy VIII was published in 1999, and was the first to consistently use realistically proportioned characters and feature a vocal piece as its theme music. Final Fantasy IX, released in 2000, returned to the series' roots, by revisiting a more traditional Final Fantasy setting, rather than the more modern worlds of VII and VIII.",
"title": "Media"
},
{
"paragraph_id": 8,
"text": "Three main installments, as well as one online game, were published for the PlayStation 2. Final Fantasy X (2001) introduced full 3D areas and voice acting to the series, and was the first to spawn a sub-sequel (Final Fantasy X-2, published in 2003). The first massively multiplayer online role-playing game (MMORPG) in the series, Final Fantasy XI, was released on the PS2 and PC in 2002, and later on the Xbox 360. It introduced real-time battles instead of random encounters. Final Fantasy XII, published in 2006, also includes real-time battles in large, interconnected playfields. The game is also the first in the main series to utilize a world used in a previous game, namely the land of Ivalice, which was previously featured in Final Fantasy Tactics and Vagrant Story.",
"title": "Media"
},
{
"paragraph_id": 9,
"text": "In 2009, Final Fantasy XIII was released in Japan, and in North America and Europe the following year, for PlayStation 3 and Xbox 360. It is the flagship installment of the Fabula Nova Crystallis Final Fantasy series and became the first mainline game to spawn two sub-sequels (XIII-2 and Lightning Returns). It was also the first game released in Chinese and high definition along with being released on two consoles at once. Final Fantasy XIV, a MMORPG, was released worldwide on Microsoft Windows in 2010, but it received heavy criticism when it was launched, prompting Square Enix to rerelease the game as Final Fantasy XIV: A Realm Reborn, this time to the PlayStation 3 as well, in 2013. Final Fantasy XV is an action role-playing game that was released for PlayStation 4 and Xbox One in 2016. Originally a XIII spin-off titled Versus XIII, XV uses the mythos of the Fabula Nova Crystallis series, although in many other respects the game stands on its own and has since been distanced from the series by its developers. The latest mainline entry, Final Fantasy XVI, was released in 2023 for PlayStation 5.",
"title": "Media"
},
{
"paragraph_id": 10,
"text": "Final Fantasy has spawned numerous spin-offs and metaseries. Several are, in fact, not Final Fantasy games, but were rebranded for North American release. Examples include the SaGa series, rebranded The Final Fantasy Legend, and its two sequels, Final Fantasy Legend II and III. Final Fantasy Mystic Quest was specifically developed for a United States audience, and Final Fantasy Tactics is a tactical RPG that features many references and themes found in the series. The spin-off Chocobo series, Crystal Chronicles series, and Kingdom Hearts series also include multiple Final Fantasy elements. In 2003, the Final Fantasy series' first sub-sequel, Final Fantasy X-2, was released. Final Fantasy XIII was originally intended to stand on its own, but the team wanted to explore the world, characters and mythos more, resulting in the development and release of two sequels in 2011 and 2013 respectively, creating the series' first official trilogy. Dissidia Final Fantasy was released in 2009, a fighting game that features heroes and villains from the first ten games of the main series. It was followed by a prequel in 2011, a sequel in 2015 and a mobile spin-off in 2017. Other spin-offs have taken the form of subseries—Compilation of Final Fantasy VII, Ivalice Alliance, and Fabula Nova Crystallis Final Fantasy. In 2022, Square Enix released an action-role playing title Stranger of Paradise: Final Fantasy Origin developed in collaboration with Team Ninja, which takes place in an alternate, reimagined reality based on the setting of the original Final Fantasy game, depicting a prequel story that explores the origins of the antagonist Chaos and the emergence of the four Warriors of Light. Enhanced 3D remakes of Final Fantasy III and IV were released in 2006 and 2007 respectively. The first installment of the Final Fantasy VII Remake project was released on the PlayStation 4 in 2020.",
"title": "Media"
},
{
"paragraph_id": 11,
"text": "Square Enix has expanded the Final Fantasy series into various media. Multiple anime and computer-generated imagery (CGI) films have been produced that are based either on individual Final Fantasy games or on the series as a whole. The first was an original video animation (OVA), Final Fantasy: Legend of the Crystals, a sequel to Final Fantasy V. The story was set in the same world as the game, although 200 years in the future. It was released as four 30-minute episodes, first in Japan in 1994 and later in the United States by Urban Vision in 1998. In 2001, Square Pictures released its first feature film, Final Fantasy: The Spirits Within. The film is set on a future Earth invaded by alien life forms. The Spirits Within was the first animated feature to seriously attempt to portray photorealistic CGI humans, but was considered a box office bomb and garnered mixed reviews.",
"title": "Media"
},
{
"paragraph_id": 12,
"text": "A 25-episode anime television series, Final Fantasy: Unlimited, was released in 2001 based on the common elements of the Final Fantasy series. It was broadcast in Japan by TV Tokyo and released in North America by ADV Films.",
"title": "Media"
},
{
"paragraph_id": 13,
"text": "In 2005, Final Fantasy VII: Advent Children, a feature length direct-to-DVD CGI film, and Last Order: Final Fantasy VII, a non-canon OVA, were released as part of the Compilation of Final Fantasy VII. Advent Children was animated by Visual Works, which helped the company create CG sequences for the games. The film, unlike The Spirits Within, became a commercial success. Last Order, on the other hand, was released in Japan in a special DVD bundle package with Advent Children. Last Order sold out quickly and was positively received by Western critics, though fan reaction was mixed over changes to established story scenes.",
"title": "Media"
},
{
"paragraph_id": 14,
"text": "Two animated tie-ins for Final Fantasy XV were released as part of a larger multimedia project dubbed the Final Fantasy XV Universe. Brotherhood is a series of five 10-to-20-minute-long episodes developed by A-1 Pictures and Square Enix detailing the backstories of the main cast. Kingsglaive, a CGI film released prior to the game in Summer 2016, is set during the game's opening and follows new and secondary characters. In 2019, Square Enix released a short anime, produced by Satelight Inc, called Final Fantasy XV: Episode Ardyn – Prologue on their YouTube channel which acts as the background story for the final piece of DLC for Final Fantasy XV giving insight into Ardyn's past.",
"title": "Media"
},
{
"paragraph_id": 15,
"text": "Square Enix also released Final Fantasy XIV: Dad of Light in 2017, an 8-episode Japanese soap opera based, featuring a mix of live-action scenes and Final Fantasy XIV gameplay footage.",
"title": "Media"
},
{
"paragraph_id": 16,
"text": "As of June 2019, Sony Pictures Television is working on a first ever live-action adaptation of the series with Hivemind and Square Enix. Jason F. Brown, Sean Daniel and Dinesh Shamdasani for Hivemind are the producers while Ben Lustig and Jake Thornton were attached as writers and executive producers for the series.",
"title": "Media"
},
{
"paragraph_id": 17,
"text": "Several video games have either been adapted into or have had spin-offs in the form of manga and novels. The first was the novelization of Final Fantasy II in 1989, and was followed by a manga adaptation of Final Fantasy III in 1992. The past decade has seen an increase in the number of non-video game adaptations and spin-offs. Final Fantasy: The Spirits Within has been adapted into a novel, the spin-off game Final Fantasy Crystal Chronicles has been adapted into a manga, and Final Fantasy XI had a novel and manga set in its continuity. Seven novellas based on the Final Fantasy VII universe have also been released. The Final Fantasy: Unlimited story was partially continued in novels and a manga after the anime series ended. The Final Fantasy X and XIII series have also had novellas and audio dramas released. Final Fantasy Tactics Advance has been adapted into a radio drama, and Final Fantasy: Unlimited has received a radio drama sequel.",
"title": "Media"
},
{
"paragraph_id": 18,
"text": "A trading card game named Final Fantasy Trading Card Game is produced by Square Enix and Hobby Japan, first released Japan in 2012 with an English version in 2016. The game has been compared to Magic: the Gathering, and a tournament circuit for the game also takes place.",
"title": "Media"
},
{
"paragraph_id": 19,
"text": "Although most Final Fantasy installments are independent, many gameplay elements recur throughout the series. Most games contain elements of fantasy and science fiction and feature recycled names often inspired from various cultures' history, languages and mythology, including Asian, European, and Middle-Eastern. Examples include weapon names like Excalibur and Masamune—derived from Arthurian legend and the Japanese swordsmith Masamune respectively—as well as the spell names Holy, Meteor, and Ultima. Beginning with Final Fantasy IV, the main series adopted its current logo style that features the same typeface and an emblem designed by Japanese artist Yoshitaka Amano. The emblem relates to a game's plot and typically portrays a character or object in the story. Subsequent remakes of the first three games have replaced the previous logos with ones similar to the rest of the series.",
"title": "Common elements"
},
{
"paragraph_id": 20,
"text": "The central conflict in many Final Fantasy games focuses on a group of characters battling an evil, and sometimes ancient, antagonist that dominates the game's world. Stories frequently involve a sovereign state in rebellion, with the protagonists taking part in the rebellion. The heroes are often destined to defeat the evil, and occasionally gather as a direct result of the antagonist's malicious actions. Another staple of the series is the existence of two villains; the main villain is not always who it appears to be, as the primary antagonist may actually be subservient to another character or entity. The main antagonist introduced at the beginning of the game is not always the final enemy, and the characters must continue their quest beyond what appears to be the final fight.",
"title": "Common elements"
},
{
"paragraph_id": 21,
"text": "Stories in the series frequently emphasize the internal struggles, passions, and tragedies of the characters, and the main plot often recedes into the background as the focus shifts to their personal lives. Games also explore relationships between characters, ranging from love to rivalry. Other recurring situations that drive the plot include amnesia, a hero corrupted by an evil force, mistaken identity, and self-sacrifice. Magical orbs and crystals are recurring in-game items that are frequently connected to the themes of the games' plots. Crystals often play a central role in the creation of the world, and a majority of the Final Fantasy games link crystals and orbs to the planet's life force. As such, control over these crystals drives the main conflict. The classical elements are also a recurring theme in the series related to the heroes, villains, and items. Other common plot and setting themes include the Gaia hypothesis, an apocalypse, and conflicts between advanced technology and nature.",
"title": "Common elements"
},
{
"paragraph_id": 22,
"text": "The series features a number of recurring character archetypes. Most famously, every game since Final Fantasy II, including subsequent remakes of the original Final Fantasy, features a character named Cid. Cid's appearance, personality, goals, and role in the game (non-playable ally, party member, villain) vary dramatically. However, two characteristics many versions of Cid have in common are being a scientist or engineer, and being tied in some way to an airship the party eventually acquires. Every Cid has at least one of these two traits.",
"title": "Common elements"
},
{
"paragraph_id": 23,
"text": "Biggs and Wedge, inspired by two Star Wars characters of the same name, appear in numerous games as minor characters, sometimes as comic relief. The later games in the series feature several males with effeminate characteristics. Recurring creatures include Chocobos, Moogles, and Cactuars. Chocobos are large, often flightless birds that appear in several installments as a means of long-distance travel for characters. Moogles are white, stout creatures resembling teddy bears with wings and a single antenna. They serve different roles in games including mail delivery, weaponsmiths, party members, and saving the game. Cactuars are anthropomorphic cacti with haniwa-like faces presented in a running or dashing pose. They usually appear as recurring enemy units, and also as summoned allies or friendly non-player characters in certain titles. Chocobo and Moogle appearances are often accompanied by specific musical themes that have been arranged differently for separate games.",
"title": "Common elements"
},
{
"paragraph_id": 24,
"text": "In Final Fantasy games, players command a party of characters as they progress through the game's story by exploring the game world and defeating enemies. Enemies are typically encountered randomly through exploring, a trend which changed in Final Fantasy XI and XII. The player issues combat orders—like \"Fight\", \"Magic\", and \"Item\"—to individual characters via a menu-driven interface while engaging in battles. Throughout the series, the games have used different battle systems. Prior to Final Fantasy XI, battles were turn-based with the protagonists and antagonists on different sides of the battlefield. Final Fantasy IV introduced the \"Active Time Battle\" (ATB) system that augmented the turn-based nature with a perpetual time-keeping system. Designed by Hiroyuki Ito, it injected urgency and excitement into combat by requiring the player to act before an enemy attacks, and was used until Final Fantasy X, which implemented the \"Conditional Turn-Based\" (CTB) system. This new system returned to the previous turn-based system, but added nuances to offer players more challenge. Final Fantasy XI adopted a real-time battle system where characters continuously act depending on the issued command. Final Fantasy XII continued this gameplay with the \"Active Dimension Battle\" system. Final Fantasy XIII's combat system, designed by the same man who worked on X, was meant to have an action-oriented feel, emulating the cinematic battles in Final Fantasy VII: Advent Children. Final Fantasy XV introduces a new \"Open Combat\" system. Unlike previous battle systems in the franchise, the \"Open Combat\" system (OCS) allows players to take on a fully active battle scenario, allowing for free range attacks and movement, giving a much more fluid feel of combat. This system also incorporates a \"Tactical\" Option during battle, which pauses active battle to allow use of items.",
"title": "Common elements"
},
{
"paragraph_id": 25,
"text": "Like most RPGs, the Final Fantasy installments use an experience level system for character advancement, in which experience points are accumulated by killing enemies. Character classes, specific jobs that enable unique abilities for characters, are another recurring theme. Introduced in the first game, character classes have been used differently in each game. Some restrict a character to a single job to integrate it into the story, while other games feature dynamic job systems that allow the player to choose from multiple classes and switch throughout the game. Though used heavily in many games, such systems have become less prevalent in favor of characters that are more versatile; characters still match an archetype, but are able to learn skills outside their class.",
"title": "Common elements"
},
{
"paragraph_id": 26,
"text": "Magic is another common RPG element in the series. The method by which characters gain magic varies between installments, but is generally divided into classes organized by color: \"White magic\", which focuses on spells that assist teammates; \"Black magic\", which focuses on harming enemies; \"Red magic\", which is a combination of white and black magic, \"Blue magic\", which mimics enemy attacks; and \"Green magic\" which focuses on applying status effects to either allies or enemies. Other types of magic frequently appear such as \"Time magic\", focusing on the themes of time, space, and gravity; and \"Summoning magic\", which evokes legendary creatures to aid in battle and is a feature that has persisted since Final Fantasy III. Summoned creatures are often referred to by names like \"Espers\" or \"Eidolons\" and have been inspired by mythologies from Arabic, Hindu, Norse, and Greek cultures.",
"title": "Common elements"
},
{
"paragraph_id": 27,
"text": "Different means of transportation have appeared through the series. The most common is the airship for long range travel, accompanied by chocobos for travelling short distances, but others include sea and land vessels. Following Final Fantasy VII, more modern and futuristic vehicle designs have been included.",
"title": "Common elements"
},
{
"paragraph_id": 28,
"text": "In the mid-1980s, Square entered the Japanese video game industry with simple RPGs, racing games, and platformers for Nintendo's Famicom Disk System. In 1987, Square designer Hironobu Sakaguchi chose to create a new fantasy role-playing game for the cartridge-based NES, and drew inspiration from popular fantasy games: Enix's Dragon Quest, Nintendo's The Legend of Zelda, and Origin Systems's Ultima series. Though often attributed to the company allegedly facing bankruptcy, Sakaguchi explained that the game was his personal last-ditch effort in the game industry and that its title, Final Fantasy, stemmed from his feelings at the time; had the game not sold well, he would have quit the business and gone back to college. Despite his explanation, publications have also attributed the name to the company's hopes that the project would solve its financial troubles. In 2015, Sakaguchi explained the name's origin: the team wanted a title that would abbreviate to \"FF\", which would sound good in Japanese. The name was originally going to be Fighting Fantasy, but due to concerns over trademark conflicts with the roleplaying gamebook series of the same name, they needed to settle for something else. As the English word \"Final\" was well-known in Japan, Sakaguchi settled on that. According to Sakaguchi, any title that created the \"FF\" abbreviation would have done.",
"title": "Development and history"
},
{
"paragraph_id": 29,
"text": "The game indeed reversed Square's lagging fortunes, and it became the company's flagship franchise. Following the success, Square immediately developed a second installment. Because Sakaguchi assumed Final Fantasy would be a stand-alone game, its story was not designed to be expanded by a sequel. The developers instead chose to carry over only thematic similarities from its predecessor, while some of the gameplay elements, such as the character advancement system, were overhauled. This approach has continued throughout the series; each major Final Fantasy game features a new setting, a new cast of characters, and an upgraded battle system. Video game writer John Harris attributed the concept of reworking the game system of each installment to Nihon Falcom's Dragon Slayer series, with which Square was previously involved as a publisher. The company regularly released new games in the main series. However, the time between the releases of XI (2002), XII (2006), and XIII (2009) were much longer than previous games. Following Final Fantasy XIV, Square Enix released Final Fantasy games either annually or biennially. This switch was to mimic the development cycles of Western games in the Call of Duty, Assassin's Creed and Battlefield series, as well as maintain fan-interest.",
"title": "Development and history"
},
{
"paragraph_id": 30,
"text": "For the original Final Fantasy, Sakaguchi required a larger production team than Square's previous games. He began crafting the game's story while experimenting with gameplay ideas. Once the gameplay system and game world size were established, Sakaguchi integrated his story ideas into the available resources. A different approach has been taken for subsequent games; the story is completed first and the game built around it. Designers have never been restricted by consistency, though most feel each game should have a minimum number of common elements. The development teams strive to create completely new worlds for each game, and avoid making new games too similar to previous ones. Game locations are conceptualized early in development and design details like building parts are fleshed out as a base for entire structures.",
"title": "Development and history"
},
{
"paragraph_id": 31,
"text": "The first five games were directed by Sakaguchi, who also provided the original concepts. He drew inspiration for game elements from anime films by Hayao Miyazaki; series staples like the airships and chocobos are inspired by elements in Castle in the Sky and Nausicaä of the Valley of the Wind, respectively. Sakaguchi served as a producer for subsequent games until he left Square in 2001. Yoshinori Kitase took over directing the games until Final Fantasy VIII, and has been followed by a new director for each new game. Hiroyuki Ito designed several gameplay systems, including Final Fantasy V's \"Job System\", Final Fantasy VIII's \"Junction System\" and the Active Time Battle concept, which was used from Final Fantasy IV until IX. In designing the Active Time Battle system, Ito drew inspiration from Formula One racing; he thought it would be interesting if character types had different speeds after watching race cars pass each other. Ito also co-directed Final Fantasy VI with Kitase. Kenji Terada was the scenario writer for the first three games; Kitase took over as scenario writer for Final Fantasy V through VII. Kazushige Nojima became the series' primary scenario writer from Final Fantasy VII until his resignation in October 2003; he has since formed his own company, Stellavista. Nojima partially or completely wrote the stories for Final Fantasy VII, VIII, X, and its sequel X-2. He also worked as the scenario writer for the spin-off series, Kingdom Hearts. Daisuke Watanabe co-wrote the scenarios for Final Fantasy X and XII, and was the main writer for the XIII games.",
"title": "Development and history"
},
{
"paragraph_id": 32,
"text": "Artistic design, including character and monster creations, was handled by Japanese artist Yoshitaka Amano from Final Fantasy through Final Fantasy VI. Amano also handled title logo designs for all of the main series and the image illustrations from Final Fantasy VII onward. Tetsuya Nomura was chosen to replace Amano because Nomura's designs were more adaptable to 3D graphics. He worked with the series from Final Fantasy VII through X, then came back for XIII, and for the basic design of XV. For Final Fantasy IX character designs were handled by Shukō Murase, Toshiyuki Itahana, and Shin Nagasawa. For Final Fantasy XV, Roberto Ferrari was responsible for the character design. Nomura is also the character designer of the Kingdom Hearts series, Compilation of Final Fantasy VII, and Fabula Nova Crystallis: Final Fantasy. Other designers include Nobuyoshi Mihara and Akihiko Yoshida. Mihara was the character designer for Final Fantasy XI, and Yoshida served as character designer for Final Fantasy Tactics, the Square-produced Vagrant Story, and Final Fantasy XII.",
"title": "Development and history"
},
{
"paragraph_id": 33,
"text": "Because of graphical limitations, the first games on the NES feature small sprite representations of the leading party members on the main world screen. Battle screens use more detailed, full versions of characters in a side-view perspective. This practice was used until Final Fantasy VI, which uses detailed versions for both screens. The NES sprites are 26 pixels high and use a color palette of 4 colors. 6 frames of animation are used to depict different character statuses like \"healthy\" and \"fatigued\". The SNES installments use updated graphics and effects, as well as higher quality audio than in previous games, but are otherwise similar to their predecessors in basic design. The SNES sprites are 2 pixels shorter, but have larger palettes and feature more animation frames: 11 colors and 40 frames respectively. The upgrade allowed designers to have characters be more detailed in appearance and express more emotions. The first game includes non-player characters (NPCs) the player could interact with, but they are mostly static in-game objects. Beginning with the second game, Square used predetermined pathways for NPCs to create more dynamic scenes that include comedy and drama.",
"title": "Development and history"
},
{
"paragraph_id": 34,
"text": "In 1995, Square showed an interactive SGI technical demonstration of Final Fantasy VI for the then next generation of consoles. The demonstration used Silicon Graphics's prototype Nintendo 64 workstations to create 3D graphics. Fans believed the demo was of a new Final Fantasy game for the Nintendo 64 console. 1997 saw the release of Final Fantasy VII for the Sony PlayStation. The switch was due to a dispute with Nintendo over its use of faster but more expensive cartridges, as opposed to the slower and cheaper, but much higher capacity compact discs used on rival systems. VII introduced 3D graphics with fully pre-rendered backgrounds. It was because of this switch to 3D that a CD-ROM format was chosen over a cartridge format. The switch also led to increased production costs and a greater subdivision of the creative staff for VII and subsequent 3D games in the series.",
"title": "Development and history"
},
{
"paragraph_id": 35,
"text": "Starting with Final Fantasy VIII, the series adopted a more photo-realistic look. Like VII, full motion video (FMV) sequences would have video playing in the background, with the polygonal characters composited on top. Final Fantasy IX returned to the more stylized design of earlier games in the series, although it still maintained, and in many cases slightly upgraded, most of the graphical techniques used in the previous two games. Final Fantasy X was released on the PlayStation 2, and used the more powerful hardware to render graphics in real-time instead of using pre-rendered material to obtain a more dynamic look; the game features full 3D environments, rather than have 3D character models move about pre-rendered backgrounds. It is also the first Final Fantasy game to introduce voice acting, occurring throughout the majority of the game, even with many minor characters. This aspect added a whole new dimension of depth to the character's reactions, emotions, and development.",
"title": "Development and history"
},
{
"paragraph_id": 36,
"text": "Taking a temporary divergence, Final Fantasy XI used the PlayStation 2's online capabilities as an MMORPG. Initially released for the PlayStation 2 with a PC port arriving six months later, XI was also released on the Xbox 360 nearly four years after its original release in Japan. This was the first Final Fantasy game to use a free rotating camera. Final Fantasy XII was released in 2006 for the PlayStation 2 and uses only half as many polygons as Final Fantasy X, in exchange for more advanced textures and lighting. It also retains the freely rotating camera from XI. Final Fantasy XIII and XIV both make use of Crystal Tools, a middleware engine developed by Square Enix.",
"title": "Development and history"
},
{
"paragraph_id": 37,
"text": "Final Fantasy games feature a variety of music, and frequently reuse themes. Most of the games open with a piece called \"Prelude\", which has evolved from a simple, 2-voice arpeggio in the early games to a complex, melodic arrangement in recent installments. Victories in combat are often accompanied by a victory fanfare, a theme that has become one of the most recognized pieces of music in the series. The basic theme that accompanies Chocobo appearances has been rearranged in a different musical style for most installments. Recurring secret bosses such as Gilgamesh are also used as opportunities to revive their musical themes.",
"title": "Development and history"
},
{
"paragraph_id": 38,
"text": "A theme known as the \"Final Fantasy Main Theme\" or \"March\", originally featured in the first game, often accompanies the ending credits. Although leitmotifs are common in the more character-driven installments, theme music is typically reserved for main characters and recurring plot elements.",
"title": "Development and history"
},
{
"paragraph_id": 39,
"text": "Nobuo Uematsu was the primary composer of the Final Fantasy series until his resignation from Square Enix in November 2004. Other notable composers who have worked on main entries in the series include Masashi Hamauzu, Hitoshi Sakimoto, and Yoko Shimomura. Uematsu was allowed to create much of the music with little direction from the production staff. Sakaguchi, however, would request pieces to fit specific game scenes including battles and exploring different areas of the game world. Once a game's major scenarios were completed, Uematsu would begin writing the music based on the story, characters, and accompanying artwork. He started with a game's main theme, and developed other pieces to match its style. In creating character themes, Uematsu read the game's scenario to determine the characters' personality. He would also ask the scenario writer for more details to scenes he was unsure about. Technical limitations were prevalent in earlier games; Sakaguchi would sometimes instruct Uematsu to only use specific notes. It was not until Final Fantasy IV on the SNES that Uematsu was able to add more subtlety to the music.",
"title": "Development and history"
},
{
"paragraph_id": 40,
"text": "Overall, the Final Fantasy series has been critically acclaimed and commercially successful, though each installment has seen different levels of success. The series has seen a steady increase in total sales; it sold over 10 million software units worldwide by early 1996, more than 25 million units by 1999, more than 33 million units and nearly $1 billion revenue (between $1.7–2.6 billion adjusted for inflation) by 2001, 45 million units by August 2003, 63 million by December 2005, and 85 million by July 2008. By June 2011, the series had sold over 100 million units, and by March 2014, it had sold over 110 million units. Its high sales numbers have ranked it as one of the best-selling video game franchises in the industry; in January 2007, the series was listed as number three, and later in July as number four. As of 2019, the series had sold over 149 million units worldwide. As of October 2021, the series had sold over 164 million units worldwide. By March 2022, the series reached cumulative global physical and digital sales of 173 million units.",
"title": "Reception"
},
{
"paragraph_id": 41,
"text": "Several games within the series have become best-selling games. At the end of 2007, the seventh, eighth, and ninth best-selling RPGs were Final Fantasy VII, VIII, and X respectively. The original Final Fantasy VII has sold over 14.1 million copies worldwide, earning it the position of the best-selling Final Fantasy game. Within two days of Final Fantasy VIII's North American release on September 9, 1999, it became the top-selling video game in the United States, a position it held for more than three weeks. Final Fantasy X sold over 1.4 million Japanese units in pre-orders alone, which set a record for the fastest-selling console RPG. The MMORPG, Final Fantasy XI, reached over 200,000 active daily players in March 2006 and had reached over half a million subscribers by July 2007. Final Fantasy XII sold more than 1.7 million copies in its first week in Japan. By November 6, 2006—one week after its release—XII had shipped approximately 1.5 million copies in North America. Final Fantasy XIII became the fastest-selling game in the franchise, and sold one million units on its first day of sale in Japan. Final Fantasy XIV: A Realm Reborn, in comparison to its predecessor, was a runaway success, originally suffering from servers being overcrowded, and eventually gaining over one million unique subscribers within two months of its launch.",
"title": "Reception"
},
{
"paragraph_id": 42,
"text": "The series has received critical acclaim for the quality of its visuals and soundtracks. In 1996, Next Generation ranked the series collectively as the 17th best game of all time, speaking very highly of its graphics, music and stories. In 1999, Next Generation listed the Final Fantasy series as number 16 on their \"Top 50 Games of All Time\", commenting that \"by pairing state-of-the-art technology with memorable, sometimes shamelessly melodramatic storylines, the series has successfully outlasted its competitors [...] and improved with each new installation\". It was awarded a star on the Walk of Game in 2006, making it the first franchise to win a star on the event (other winners were individual games, not franchises). WalkOfGame.com commented that the series has sought perfection as well as having been a risk taker in innovation. In 2006, GameFAQs held a contest for the best video game series ever, with Final Fantasy finishing as the runner-up to The Legend of Zelda. In a 2008 public poll held by The Game Group plc, Final Fantasy was voted the best game series, with five games appearing in their \"Greatest Games of All Time\" list.",
"title": "Reception"
},
{
"paragraph_id": 43,
"text": "Many Final Fantasy games have been included in various lists of top games. Several games have been listed on multiple IGN \"Top Games\" lists. Twelve games were listed on Famitsu's 2006 \"Top 100 Favorite Games of All Time\", four of which were in the top ten, with Final Fantasy X and VII coming first and second, respectively. The series holds seven Guinness World Records in the Guinness World Records Gamer's Edition 2008, which include the \"Most Games in an RPG Series\" (13 main games, seven enhanced games, and 32 spin-off games), the \"Longest Development Period\" (the production of Final Fantasy XII took five years), and the \"Fastest-Selling Console RPG in a Single Day\" (Final Fantasy X). The 2009 edition listed two games from the series among the top 50 consoles games: Final Fantasy XII at number 8 and VII at number 20. In 2018, Final Fantasy VII was inducted as a member of the World Video Game Hall of Fame.",
"title": "Reception"
},
{
"paragraph_id": 44,
"text": "However, the series has garnered some criticism. IGN has commented that the menu system used by the games is a major detractor for many and is a \"significant reason why they haven't touched the series\". The site has also heavily criticized the use of random encounters in the series' battle systems. IGN further stated the various attempts to bring the series into film and animation have either been unsuccessful, unremarkable, or did not live up to the standards of the games. In 2007, Edge criticized the series for a number of related games that include the phrase \"Final Fantasy\" in their titles, which are considered inferior to previous games. It also commented that with the departure of Hironobu Sakaguchi, the series might be in danger of growing stale.",
"title": "Reception"
},
{
"paragraph_id": 45,
"text": "Several individual Final Fantasy games have garnered extra attention; some for their positive reception and others for their negative reception. Final Fantasy VII topped GamePro's \"26 Best RPGs of All Time\" list, as well as GameFAQs \"Best Game Ever\" audience polls in 2004 and 2005. Despite the success of VII, it is sometimes criticized as being overrated. In 2003, GameSpy listed it as the seventh most overrated game of all time, while IGN presented views from both sides. Dirge of Cerberus: Final Fantasy VII shipped 392,000 units in its first week of release, but received review scores that were much lower than that of other Final Fantasy games. A delayed, negative review after the Japanese release of Dirge of Cerberus from Japanese gaming magazine Famitsu hinted at a controversy between the magazine and Square Enix. Though Final Fantasy: The Spirits Within was praised for its visuals, the plot was criticized and the film was considered a box office bomb. Final Fantasy Crystal Chronicles for the GameCube received overall positive review scores, but reviews stated that the use of Game Boy Advances as controllers was a big detractor. The predominantly negative reception of the original version of Final Fantasy XIV caused then-president Yoichi Wada to issue an official apology during a Tokyo press conference, stating that the brand had been \"greatly damaged\" by the game's reception.",
"title": "Reception"
},
{
"paragraph_id": 46,
"text": "Various video game publications have created rankings of the mainline Final Fantasy games. In the table below, the lower the number given, the better the game is in the view of the respective publication. By way of comparison, the ratings provided by Famitsu magazine and the review aggregator Metacritic are also given; in these rows, higher numbers indicate better reviews. Note that Metacritic ratings up until Final Fantasy VII largely represent retrospective reviews from online websites years after their initial release, rather than contemporary reviews from video game magazines at the time of their initial release.",
"title": "Reception"
},
{
"paragraph_id": 47,
"text": "Final Fantasy has been very influential in the history of video games and game mechanics. Final Fantasy IV is considered a milestone for the genre, introducing a dramatic storyline with a strong emphasis on character development and personal relationships. In 1992, Nintendo's Shigeru Miyamoto noted the impact of Final Fantasy on Japanese role-playing games, stating Final Fantasy's \"interactive cinematic approach\" with an emphasis on \"presentation and graphics\" was gradually becoming \"the most common style\" of Japanese RPG at the time. Final Fantasy VII, having been the first title of the series to be officially released in the PAL territories of Europe and Oceania, is credited as having the largest industry impact of the series, and with allowing console role-playing games to gain global mass-market appeal. VII is considered to be one of the most important and influential video games of all time.",
"title": "Legacy"
},
{
"paragraph_id": 48,
"text": "The series affected Square's business on several levels. The commercial failure of Final Fantasy: The Spirits Within resulted in hesitation and delays from Enix during merger discussions with Square. Square's decision to produce games exclusively for the Sony PlayStation—a move followed by Enix's decision with the Dragon Quest series—severed their relationship with Nintendo. Final Fantasy games were absent from Nintendo consoles, specifically the Nintendo 64, for seven years. Critics attribute the switch of strong third-party games like the Final Fantasy and Dragon Quest games to Sony's PlayStation, and away from the Nintendo 64, as one of the reasons behind PlayStation being the more successful of the two consoles. The release of the Nintendo GameCube, which used optical disc media, in 2001 caught the attention of Square. To produce games for the system, Square created the shell company The Game Designers Studio and released Final Fantasy Crystal Chronicles, which spawned its own metaseries within the main franchise. Final Fantasy XI's lack of an online method of subscription cancellation prompted the creation of legislation in Illinois that requires internet gaming services to provide such a method to the state's residents.",
"title": "Legacy"
},
{
"paragraph_id": 49,
"text": "The series' popularity has resulted in its appearance and reference in numerous facets of popular culture like anime, TV series, and webcomics. Music from the series has permeated into different areas of culture. Final Fantasy IV's \"Theme of Love\" was integrated into the curriculum of Japanese school children and has been performed live by orchestras and metal bands. In 2003, Uematsu co-founded The Black Mages, an instrumental rock group independent of Square that has released albums of arranged Final Fantasy tunes. Bronze medalists Alison Bartosik and Anna Kozlova performed their synchronized swimming routine at the 2004 Summer Olympics to music from Final Fantasy VIII. Many of the soundtracks have also been released for sale. Numerous companion books, which normally provide in-depth game information, have been published. In Japan, they are published by Square and are called Ultimania books.",
"title": "Legacy"
},
{
"paragraph_id": 50,
"text": "The series has inspired numerous game developers. Fable creator Peter Molyneux considers Final Fantasy VII to be the RPG that \"defined the genre\" for him. BioWare founder Greg Zeschuk cited Final Fantasy VII as \"the first really emotionally engaging game\" he played and said it had \"a big impact\" on BioWare's work. The Witcher 3 senior environmental artist Jonas Mattsson cited Final Fantasy as \"a huge influence\" and said it was \"the first RPG\" he played through. Mass Effect art director Derek Watts cited Final Fantasy: The Spirits Within as a major influence on the visual design and art direction of the series. BioWare senior product manager David Silverman cited Final Fantasy XII's gambit system as an influence on the gameplay of Dragon Age: Origins. Ubisoft Toronto creative director Maxime Beland cited the original Final Fantasy as a major influence on him. Media Molecule's Constantin Jupp credited Final Fantasy VII with getting him into game design. Tim Schafer also cited Final Fantasy VII as one of his favourite games of all time.",
"title": "Legacy"
}
]
| Final Fantasy is a fantasy anthology media franchise created by Hironobu Sakaguchi, which is owned, developed, and published by Square Enix. The franchise centers on a series of fantasy role-playing video games. The first game in the series was released in 1987, with 16 numbered main entries having been released to date. The franchise has since branched into other video game genres such as tactical role-playing, action role-playing, massively multiplayer online role-playing, racing, third-person shooter, fighting, and rhythm, as well as branching into other media, including films, anime, manga, and novels. Final Fantasy is mostly an anthology series with primary installments being stand-alone role-playing games, each with different settings, plots and main characters; however, the franchise is linked by several recurring elements, including game mechanics and recurring character names. Each plot centers on a particular group of heroes who are battling a great evil, but also explores the characters' internal struggles and relationships. Character names are frequently derived from the history, languages, pop culture, and mythologies of cultures worldwide. The mechanics of each game involve similar battle systems and maps. Final Fantasy has been both critically and commercially successful. Several entries are regarded as some of the greatest video games, with the series selling more than 185 million copies worldwide, making it one of the best-selling video game franchises of all time. The series is well known for its innovation and visuals, such as the inclusion of full-motion videos and photorealistic character models, as well as for its music by Nobuo Uematsu. It has popularized many features now common in role-playing games, also popularizing the genre as a whole in markets outside Japan. | 2001-09-26T04:03:41Z | 2023-12-30T23:28:35Z | [
"Template:Use mdy dates",
"Template:Infobox video game series",
"Template:Nihongo foot",
"Template:Refn",
"Template:Navboxes",
"Template:Featured article",
"Template:US$",
"Template:Reflist",
"Template:Wikiquote",
"Template:Commons category",
"Template:VG timeline",
"Template:Further",
"Template:'s",
"Template:Table alignment",
"Template:Short description",
"Template:See also",
"Template:Cite magazine",
"Template:Official website",
"Template:Citation",
"Template:Cite journal",
"Template:Pp-move-indef",
"Template:Main",
"Template:Notelist",
"Template:Cite web",
"Template:Cite book",
"Template:Cite news",
"Template:About",
"Template:Timeline of release years",
"Template:'",
"Template:Portal",
"Template:Cite press release",
"Template:Curlie",
"Template:Authority control",
"Template:Nowrap"
]
| https://en.wikipedia.org/wiki/Final_Fantasy |
10,975 | Fatty acid | In chemistry, particularly in biochemistry, a fatty acid is a carboxylic acid with an aliphatic chain, which is either saturated or unsaturated. Most naturally occurring fatty acids have an unbranched chain of an even number of carbon atoms, from 4 to 28. Fatty acids are a major component of the lipids (up to 70% by weight) in some species such as microalgae, but in some other organisms they are not found in standalone form; instead they exist as three main classes of esters: triglycerides, phospholipids, and cholesteryl esters. In any of these forms, fatty acids are both important dietary sources of fuel for animals and important structural components for cells.
The concept of fatty acid (acide gras) was introduced in 1813 by Michel Eugène Chevreul, though he initially used some variant terms: graisse acide and acide huileux ("acid fat" and "oily acid").
Fatty acids are classified in many ways: by length, by saturation vs unsaturation, by even vs odd carbon content, and by linear vs branched.
Saturated fatty acids have no C=C double bonds. They have the formula CH3(CH2)nCOOH, for different n. An important saturated fatty acid is stearic acid (n = 16), which when neutralized with sodium hydroxide is the most common form of soap.
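As a worked illustration of the general formula above, the molecular formula follows from simple counting: the n methylene carbons plus the methyl and carboxyl carbons give n + 2 carbons in total, with 2n + 4 hydrogens and 2 oxygens. A minimal Python sketch of this arithmetic (the function name is illustrative, not from any standard library):

def saturated_fatty_acid_formula(n):
    # Molecular formula of CH3(CH2)nCOOH, a saturated fatty acid.
    # Carbons:   1 (CH3) + n (CH2 groups) + 1 (COOH) = n + 2
    # Hydrogens: 3 (CH3) + 2n (CH2 groups) + 1 (COOH) = 2n + 4
    # Oxygens:   2 (both in the carboxyl group)
    return f"C{n + 2}H{2 * n + 4}O2"

print(saturated_fatty_acid_formula(16))  # stearic acid (n = 16) -> C18H36O2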
Unsaturated fatty acids have one or more C=C double bonds. The C=C double bonds can give either cis or trans isomers.
In most naturally occurring unsaturated fatty acids, each double bond has three (n-3), six (n-6), or nine (n-9) carbon atoms after it, and all double bonds have a cis configuration. Most fatty acids in the trans configuration (trans fats) are not found in nature and are the result of human processing (e.g., hydrogenation). Some trans fatty acids also occur naturally in the milk and meat of ruminants (such as cattle and sheep); they are produced, by fermentation, in the rumen of these animals. They are also found in dairy products made from the milk of ruminants, and may also be found in the breast milk of women who obtained them from their diet.
The geometric differences between the various types of unsaturated fatty acids, as well as between saturated and unsaturated fatty acids, play an important role in biological processes, and in the construction of biological structures (such as cell membranes).
Most fatty acids are even-chained, e.g. stearic (C18) and oleic (C18), meaning they are composed of an even number of carbon atoms. Some fatty acids have odd numbers of carbon atoms; they are referred to as odd-chained fatty acids (OCFA). The most common OCFA are the saturated C15 and C17 derivatives, pentadecanoic acid and heptadecanoic acid respectively, which are found in dairy products. On a molecular level, OCFAs are biosynthesized and metabolized slightly differently from the even-chained relatives.
Most common fatty acids are straight-chain compounds, with no additional carbon atoms bonded as side groups to the main hydrocarbon chain. Branched-chain fatty acids contain one or more methyl groups bonded to the hydrocarbon chain.
Most naturally occurring fatty acids have an unbranched chain of carbon atoms, with a carboxyl group (–COOH) at one end, and a methyl group (–CH3) at the other end.
The position of each carbon atom in the backbone of a fatty acid is usually indicated by counting from 1 at the −COOH end. Carbon number x is often abbreviated C-x (or sometimes Cx), with x = 1, 2, 3, etc. This is the numbering scheme recommended by the IUPAC.
Another convention uses letters of the Greek alphabet in sequence, starting with the first carbon after the carboxyl group. Thus carbon α (alpha) is C-2, carbon β (beta) is C-3, and so forth.
Although fatty acids can be of diverse lengths, in this second convention the last carbon in the chain is always labelled as ω (omega), which is the last letter in the Greek alphabet. A third numbering convention counts the carbons from that end, using the labels "ω", "ω−1", "ω−2". Alternatively, the label "ω−x" is written "n−x", where the "n" is meant to represent the number of carbons in the chain.
In either numbering scheme, the position of a double bond in a fatty acid chain is always specified by giving the label of the carbon closest to the carboxyl end. Thus, in an 18-carbon fatty acid, a double bond between C-12 (or ω−6) and C-13 (or ω−5) is said to be "at" position C-12 or ω−6. The IUPAC naming of the acid, such as "octadec-12-enoic acid" (or the more pronounceable variant "12-octadecenoic acid"), is always based on the "C" numbering.
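The two numbering schemes are related by simple arithmetic: in a chain of N carbons, a double bond starting at carbon x (counted from the carboxyl carbon, C-1) starts at carbon N − x counted from the ω end. A minimal Python sketch of this conversion (the function name is illustrative only):

def omega_position(chain_length, delta_position):
    # Convert a double-bond position from carboxyl-based (C-x) numbering
    # to omega-based (ω−x, also written n−x) numbering.
    # chain_length: total number of carbon atoms in the fatty acid
    # delta_position: carbon at which the double bond starts, counted from C-1
    return chain_length - delta_position

print(omega_position(18, 12))  # the C-12 double bond of an 18-carbon acid lies at omega-6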
The notation Δx,y,... is traditionally used to specify a fatty acid with double bonds at positions x, y, .... (The capital Greek letter "Δ" (delta) corresponds to Roman "D", for Double bond.) Thus, for example, the 20-carbon arachidonic acid is Δ5,8,11,14, meaning that it has double bonds between carbons 5 and 6, 8 and 9, 11 and 12, and 14 and 15.
In the context of human diet and fat metabolism, unsaturated fatty acids are often classified by the position of the double bond closest to the ω carbon (only), even in the case of multiple double bonds such as the essential fatty acids. Thus linoleic acid (18 carbons, Δ9,12), γ-linolenic acid (18 carbons, Δ6,9,12), and arachidonic acid (20 carbons, Δ5,8,11,14) are all classified as "ω−6" fatty acids, meaning that their formula ends with –CH=CH–CH2–CH2–CH2–CH2–CH3.
Fatty acids with an odd number of carbon atoms are called odd-chain fatty acids, whereas the rest are even-chain fatty acids. The difference is relevant to gluconeogenesis.
The following table describes the most common systems of naming fatty acids.
When circulating in the plasma (plasma fatty acids), not in their ester form, fatty acids are known as non-esterified fatty acids (NEFAs) or free fatty acids (FFAs). FFAs are always bound to a transport protein, such as albumin.
FFAs also form from triglycerides in food oils and fats by hydrolysis, contributing to the characteristic rancid odor. An analogous process happens in biodiesel, with a risk of part corrosion.
Fatty acids are usually produced industrially by the hydrolysis of triglycerides, with the removal of glycerol (see oleochemicals). Phospholipids represent another source. Some fatty acids are produced synthetically by hydrocarboxylation of alkenes.
In animals, fatty acids are formed from carbohydrates predominantly in the liver, adipose tissue, and the mammary glands during lactation.
Carbohydrates are converted into pyruvate by glycolysis as the first important step in the conversion of carbohydrates into fatty acids. Pyruvate is then decarboxylated to form acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into the cytosol, where the synthesis of fatty acids occurs; this cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate. The cytosolic acetyl-CoA is carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids.
Malonyl-CoA is then involved in a repeating series of reactions that lengthens the growing fatty acid chain by two carbons at a time. Almost all natural fatty acids, therefore, have even numbers of carbon atoms. When synthesis is complete the free fatty acids are nearly always combined with glycerol (three fatty acids to one glycerol molecule) to form triglycerides, the main storage form of fatty acids, and thus of energy in animals. However, fatty acids are also important components of the phospholipids that form the phospholipid bilayers out of which all the membranes of the cell are constructed (the plasma membrane and the membranes that enclose all the organelles within the cell, such as the nucleus, the mitochondria, endoplasmic reticulum, and the Golgi apparatus).
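The two-carbons-at-a-time elongation described above implies a simple input stoichiometry. The sketch below tallies the acetyl-CoA primer, malonyl-CoA units, ATP, and NADPH needed for an even-chain saturated product; the counts are common textbook values and the function is purely illustrative, not something stated in this article.

```python
def synthesis_inputs(n_carbons: int) -> dict:
    """Acetyl-CoA primer plus (n/2 - 1) malonyl-CoA extensions of two carbons each."""
    if n_carbons < 4 or n_carbons % 2:
        raise ValueError("sketch only covers even chains of 4 or more carbons")
    extensions = n_carbons // 2 - 1
    return {
        "acetyl-CoA (primer)": 1,
        "malonyl-CoA": extensions,
        "ATP (acetyl-CoA carboxylase)": extensions,
        "NADPH (two per elongation cycle)": 2 * extensions,
    }

print(synthesis_inputs(16))   # palmitic acid: 7 malonyl-CoA, 7 ATP, 14 NADPH
```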
The "uncombined fatty acids" or "free fatty acids" found in the circulation of animals come from the breakdown (or lipolysis) of stored triglycerides. Because they are insoluble in water, these fatty acids are transported bound to plasma albumin. The levels of "free fatty acids" in the blood are limited by the availability of albumin binding sites. They can be taken up from the blood by all cells that have mitochondria (with the exception of the cells of the central nervous system). Fatty acids can only be broken down in mitochondria, by means of beta-oxidation followed by further combustion in the citric acid cycle to CO2 and water. Cells in the central nervous system, although they possess mitochondria, cannot take free fatty acids up from the blood, as the blood–brain barrier is impervious to most free fatty acids, excluding short-chain fatty acids and medium-chain fatty acids. These cells have to manufacture their own fatty acids from carbohydrates, as described above, in order to produce and maintain the phospholipids of their cell membranes, and those of their organelles.
Studies on the cell membranes of mammals and reptiles discovered that mammalian cell membranes are composed of a higher proportion of polyunsaturated fatty acids (DHA, omega-3 fatty acid) than reptiles. Studies of bird fatty acid composition have noted similar proportions to mammals but with about one-third fewer omega-3 fatty acids relative to omega-6 for a given body size. This fatty acid composition results in a more fluid cell membrane, but also one that is more permeable to various ions (H+ and Na+), resulting in cell membranes that are more costly to maintain. This maintenance cost has been argued to be one of the key causes for the high metabolic rates and concomitant warm-bloodedness of mammals and birds. However, polyunsaturation of cell membranes may also occur in response to chronic cold temperatures. In fish, increasingly cold environments lead to increasingly high cell membrane content of both monounsaturated and polyunsaturated fatty acids, to maintain greater membrane fluidity (and functionality) at the lower temperatures.
The following table gives the fatty acid, vitamin E and cholesterol composition of some common dietary fats.
Fatty acids exhibit reactions like other carboxylic acids, i.e. they undergo esterification and acid-base reactions.
Fatty acids do not show a great variation in their acidities, as indicated by their respective pKa values. Nonanoic acid, for example, has a pKa of 4.96, being only slightly weaker than acetic acid (4.76). As the chain length increases, the solubility of the fatty acids in water decreases, so that the longer-chain fatty acids have minimal effect on the pH of an aqueous solution. Near neutral pH, fatty acids exist as their conjugate bases, i.e. oleate, etc.
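The effect of pKa and pH on ionization can be illustrated with the Henderson–Hasselbalch relation. The snippet below is an illustration rather than part of the article, and it ignores the solubility and aggregation effects mentioned above for long chains.

```python
def fraction_ionized(pH: float, pKa: float) -> float:
    """Henderson-Hasselbalch: fraction present as the carboxylate (conjugate base)."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# Nonanoic acid (pKa 4.96, as quoted above) at a near-neutral pH of 7.4:
print(round(fraction_ionized(7.4, 4.96), 3))   # ~0.996, i.e. essentially all nonanoate
```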
Solutions of fatty acids in ethanol can be titrated with sodium hydroxide solution using phenolphthalein as an indicator. This analysis is used to determine the free fatty acid content of fats; i.e., the proportion of the triglycerides that have been hydrolyzed.
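Titration results of this kind are conventionally reported as an acid value (mg KOH per gram of fat) or as %FFA expressed as oleic acid. The helper below shows that arithmetic; the constants 56.11 (molar mass of KOH) and 28.25 (one-tenth of the molar mass of oleic acid) are standard values, and the sample numbers are invented for illustration.

```python
def acid_value(v_naoh_ml: float, conc_mol_l: float, sample_g: float) -> float:
    """mg KOH equivalent per gram of fat (acid value is expressed as KOH by convention)."""
    return v_naoh_ml * conc_mol_l * 56.11 / sample_g

def ffa_percent_as_oleic(v_naoh_ml: float, conc_mol_l: float, sample_g: float) -> float:
    """Percent free fatty acid, expressed as oleic acid (molar mass ~282.5 g/mol)."""
    return v_naoh_ml * conc_mol_l * 28.25 / sample_g

# A 2.0 g oil sample consuming 1.5 mL of 0.10 M NaOH:
print(acid_value(1.5, 0.10, 2.0), ffa_percent_as_oleic(1.5, 0.10, 2.0))
```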
Neutralization of fatty acids, one form of saponification (soap-making), is a widely practiced route to metallic soaps.
Hydrogenation of unsaturated fatty acids is widely practiced. Typical conditions involve 2.0–3.0 MPa of H2 pressure, 150 °C, and nickel supported on silica as a catalyst. This treatment affords saturated fatty acids. The extent of hydrogenation is indicated by the iodine number. Hydrogenated fatty acids are less prone toward rancidification. Since the saturated fatty acids are higher melting than the unsaturated precursors, the process is called hardening. Related technology is used to convert vegetable oils into margarine. The hydrogenation of triglycerides (vs fatty acids) is advantageous because the carboxylic acids degrade the nickel catalysts, affording nickel soaps. During partial hydrogenation, unsaturated fatty acids can be isomerized from cis to trans configuration.
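The iodine number mentioned above can be estimated from a fatty acid composition by assuming one I2 adds across each C=C double bond. The sketch below does exactly that; the molar masses are standard values and the example composition is hypothetical.

```python
I2 = 2 * 126.90  # molar mass of I2 in g/mol

def iodine_value(components) -> float:
    """Estimate g of I2 absorbed per 100 g of fat.

    components: iterable of (mass_fraction, molar_mass, n_double_bonds).
    """
    return sum(frac * n_db * I2 / mw for frac, mw, n_db in components) * 100

# Pure oleic acid (molar mass ~282.5, one double bond) -> ~90, a familiar textbook figure.
print(round(iodine_value([(1.0, 282.5, 1)]), 1))
```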
More forcing hydrogenation, i.e. using higher pressures of H2 and higher temperatures, converts fatty acids into fatty alcohols. Fatty alcohols are, however, more easily produced from fatty acid esters.
In the Varrentrapp reaction certain unsaturated fatty acids are cleaved in molten alkali, a reaction which was, at one time, relevant to structure elucidation.
Unsaturated fatty acids and their esters undergo auto-oxidation, which involves replacement of a C–H bond with a C–O bond. The process requires oxygen (air) and is accelerated by the presence of traces of metals, which serve as catalysts. Doubly unsaturated fatty acids are particularly prone to this reaction. Vegetable oils resist this process to some degree because they contain antioxidants, such as tocopherol. Fats and oils are often treated with chelating agents such as citric acid to remove the metal catalysts.
Unsaturated fatty acids are susceptible to degradation by ozone. This reaction is practiced in the production of azelaic acid ((CH2)7(CO2H)2) from oleic acid.
Short- and medium-chain fatty acids are absorbed directly into the blood via intestinal capillaries and travel through the portal vein just as other absorbed nutrients do. However, long-chain fatty acids are not directly released into the intestinal capillaries. Instead they are absorbed into the fatty walls of the intestinal villi and reassembled into triglycerides. The triglycerides are coated with cholesterol and protein (protein coat) into a compound called a chylomicron.
From within the cell, the chylomicron is released into a lymphatic capillary called a lacteal, which merges into larger lymphatic vessels. It is transported via the lymphatic system and the thoracic duct up to a location near the heart (where the arteries and veins are larger). The thoracic duct empties the chylomicrons into the bloodstream via the left subclavian vein. At this point the chylomicrons can transport the triglycerides to tissues where they are stored or metabolized for energy.
Fatty acids are broken down to CO2 and water by the intra-cellular mitochondria through beta oxidation and the citric acid cycle. In the final step (oxidative phosphorylation), reactions with oxygen release a lot of energy, captured in the form of large quantities of ATP. Many cell types can use either glucose or fatty acids for this purpose, but fatty acids release more energy per gram. Fatty acids (provided either by ingestion or by drawing on triglycerides stored in fatty tissues) are distributed to cells to serve as a fuel for muscular contraction and general metabolism.
Fatty acids that are required for good health but cannot be made in sufficient quantity from other substrates, and therefore must be obtained from food, are called essential fatty acids. There are two series of essential fatty acids: one has a double bond three carbon atoms away from the methyl end; the other has a double bond six carbon atoms away from the methyl end. Humans lack the ability to introduce double bonds in fatty acids beyond carbons 9 and 10, as counted from the carboxylic acid side. Two essential fatty acids are linoleic acid (LA) and alpha-linolenic acid (ALA). These fatty acids are widely distributed in plant oils. The human body has a limited ability to convert ALA into the longer-chain omega-3 fatty acids — eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which can also be obtained from fish. Omega-3 and omega-6 fatty acids are biosynthetic precursors to endocannabinoids with antinociceptive, anxiolytic, and neurogenic properties.
Blood fatty acids adopt distinct forms in different stages in the blood circulation. They are taken in through the intestine in chylomicrons, but also exist in very low density lipoproteins (VLDL) and low density lipoproteins (LDL) after processing in the liver. In addition, when released from adipocytes, fatty acids exist in the blood as free fatty acids.
It is proposed that the blend of fatty acids exuded by mammalian skin, together with lactic acid and pyruvic acid, is distinctive and enables animals with a keen sense of smell to differentiate individuals.
The stratum corneum – the outermost layer of the epidermis – is composed of terminally differentiated and enucleated corneocytes within a lipid matrix. Together with cholesterol and ceramides, free fatty acids form a water-impermeable barrier that prevents evaporative water loss. Generally, the epidermal lipid matrix is composed of an equimolar mixture of ceramides (about 50% by weight), cholesterol (25%), and free fatty acids (15%). Saturated fatty acids 16 and 18 carbons in length are the dominant types in the epidermis, while unsaturated fatty acids and saturated fatty acids of various other lengths are also present. The relative abundance of the different fatty acids in the epidermis is dependent on the body site the skin is covering. There are also characteristic epidermal fatty acid alterations that occur in psoriasis, atopic dermatitis, and other inflammatory conditions.
The chemical analysis of fatty acids in lipids typically begins with an interesterification step that breaks down their original esters (triglycerides, waxes, phospholipids etc.) and converts them to methyl esters, which are then separated by gas chromatography or analyzed by gas chromatography and mid-infrared spectroscopy.
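After the chromatographic run, peak areas are typically normalized to give a percentage fatty acid composition. The small helper below illustrates that step; the peak names and areas are invented for the example.

```python
def normalise_peaks(peak_areas: dict) -> dict:
    """Express each fatty acid methyl ester peak area as a percentage of the total."""
    total = sum(peak_areas.values())
    return {name: 100 * area / total for name, area in peak_areas.items()}

print(normalise_peaks({"C16:0": 2310.0, "C18:1 n-9": 5480.0, "C18:2 n-6": 1150.0}))
```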
Separation of unsaturated isomers is possible by silver ion–complexed thin-layer chromatography. Other separation techniques include high-performance liquid chromatography (with short columns packed with silica gel bearing bonded phenylsulfonic acid groups whose hydrogen atoms have been exchanged for silver ions). The role of silver lies in its ability to form complexes with unsaturated compounds.
Fatty acids are mainly used in the production of soap, both for cosmetic purposes and, in the case of metallic soaps, as lubricants. Fatty acids are also converted, via their methyl esters, to fatty alcohols and fatty amines, which are precursors to surfactants, detergents, and lubricants. Other applications include their use as emulsifiers, texturizing agents, wetting agents, anti-foam agents, or stabilizing agents.
Esters of fatty acids with simpler alcohols (such as methyl-, ethyl-, n-propyl-, isopropyl- and butyl esters) are used as emollients in cosmetics and other personal care products and as synthetic lubricants. Esters of fatty acids with more complex alcohols, such as sorbitol, ethylene glycol, diethylene glycol, and polyethylene glycol are consumed in food, or used for personal care and water treatment, or used as synthetic lubricants or fluids for metal working. | [
10,977 | Fearless (1993 film) | Fearless is a 1993 American drama film directed by Peter Weir and starring Jeff Bridges, Isabella Rossellini, Rosie Perez and John Turturro. It was written by Rafael Yglesias, adapted from his novel of the same name.
Rosie Perez was nominated for an Academy Award for Best Supporting Actress for her role as Carla Rodrigo. The film was also entered into the 44th Berlin International Film Festival. Jeff Bridges' role as Max Klein is widely regarded as one of the best performances of his career. The film's soundtrack features part of the first movement of Henryk Górecki's Symphony No. 3, subtitled Symphony of Sorrowful Songs. The film's screenwriter was inspired to write the script after he was in a car accident. Yglesias began writing the story after reading about United Airlines Flight 232, which crashed in Sioux City, Iowa, in 1989.
Max Klein survives an airline crash. The plane plummets, but strangely Max is calm. His calm enables him to dispel fear in the flight cabin. He sits next to Byron Hummel, a young boy flying alone. Flight attendants move through the cabin, telling another passenger, Carla Rodrigo, traveling with an infant, to hold the infant in her lap as the plane plummets out of control, while telling other passengers to buckle into their seats. Max had been telling his business partner, Jeff Gordon, of his fear of flying as they took off.
In the aftermath of the crash, most passengers are dead. Among the few survivors, most are terribly injured. Max is unhurt. The crash site is chaotic, filled with first responders and other emergency personnel. Focusing on the survivors, a team of investigators from the FAA and the airline company conduct interviews. Max is repelled by all the chaos and disgusted by the investigators wanting to interview him.
Max rents a car and starts driving home. Along the way he meets an old girlfriend from high school, Alison. They last met 20 years ago. At the restaurant Alison notices Max eating a strawberry. Max is allergic to strawberries. Max grins. He finishes the strawberry without an allergic reaction. The next morning, he is accosted by FBI investigators. They question his choice to not contact family to tell them he is fine. The airline representative offers him train tickets. Max asks for airline tickets. He wants to fly home, having no fear of air travel. The airline books him on the flight. They seat him next to Dr. Bill Perlman, the airline's psychiatrist.
Dr. Perlman tags along behind Max to his home, prodding him for information about the crash. Max is forced to snap at the psychiatrist rudely to be rid of him. Laura Klein, Max's wife, notices the strange behavior. Max seems different, changed somehow. Max's late business partner's wife, Nan Gordon, asks about Jeff's last moments. Max says Jeff died in the crash.
The media call Max "The Good Samaritan" in news reports. The boy Max sat next to, Byron, publicly thanks him in television interviews, for the way he comforted passengers while the plane fell out of control during the crash. Max is a hero.
Max avoids the press and becomes distant from Laura and his son Jonah. His persona is radically changed. He is preoccupied with his new perspective on life following his near-death experience. He begins drawing abstract pictures of the crash. As he survived without injury, he thinks himself invulnerable to death. Because of his confidence, Dr. Perlman encourages Max to meet with another survivor, Carla Rodrigo, whose infant was held in her lap while the plane fell. Carla struggles with survivor's guilt and is traumatized for not holding onto her baby tightly enough, although she was following the flight attendant's instructions. Max and Carla develop a close friendship. He helps her to get past the trauma and free herself from guilt, deliberately crashing his car to show that it was physically impossible for any person to hold onto anything given the forces of the crash.
Attorney Steven Brillstein encourages Max to exaggerate testimony, to maximize the settlement offer from the airline. Max reluctantly agrees when he is confronted with Nan's financial predicament as a widow. Cognitive dissonance spurs Max to a panic attack. He runs out of the office, to the roof of the building. He climbs onto the roof's edge. As Max stands on the ledge, looking down at the streets below, his panic subsides. He rejoices in fearlessness. Laura finds Max on the ledge. He is spinning around on the ledge with his overcoat billowing across his face.
Brillstein arrives at the Klein home to celebrate the airline's settlement offer. He brings a fruit basket. Max eats one of the strawberries. This time he experiences an allergic reaction. Max is resuscitated by Laura and survives. He recovers his emotional connection to his family, to the world and to the reality of yet another chance at life.
A book containing the painting The Ascent into the Empyrean by Hieronymus Bosch is shown, and it is said that the dying go into the light of heaven "naked and alone". Near the finale as Max lies on the ground, he relives moving from the fuselage of the aircraft and for a moment moves towards the tunnel of light that appears to be modeled on the painting.
On Rotten Tomatoes the film has an approval rating of 84% based on reviews from 43 critics, with an average score of 7.8/10. The site's consensus states "This underrated gem from director Peter Weir features an outstanding performance from Jeff Bridges as a man dealing with profound grief." Audiences polled by CinemaScore gave the film a grade of "B+" on an A+ to F scale.
Roger Ebert of the Chicago Sun-Times gave it 3 out of 4 stars and wrote: ""Fearless" is like a short story that shines a bright light, briefly, into a corner where you usually do not look." Vincent Canby of The New York Times said "Mr. Bridges does well with a difficult role", and Todd McCarthy of Variety called it one of Bridges' best performances. McCarthy was positive about the film, calling it "beautifully made in all respects", but noted that as a mainstream film about profound issues and emotions, some audiences will appreciate it while others may find it pretentious. Geoff Andrew of Time Out wrote: "As often with Weir, there's considerably less here than meets the eye."
With video and audio quality surpassing that of previous home video releases, Fearless was released on Blu-ray Disc by the Warner Archive Collection in November 2013.
10,979 | Franklin D. Roosevelt | Franklin Delano Roosevelt (January 30, 1882 – April 12, 1945), commonly known by his initials FDR, was an American politician and statesman who served as the 32nd president of the United States from 1933 until his death in 1945. He was a member of the Democratic Party and is the only U.S. president to have served more than two terms in office. During his third and fourth terms he was preoccupied with World War II.
A member of the prominent Roosevelt family, Roosevelt began to practice law in New York City after attending university. He was elected to the New York State Senate, serving from 1911 to 1913, and was then the assistant secretary of the Navy under President Woodrow Wilson during World War I. Roosevelt was James M. Cox's running mate on the Democratic Party's ticket in the 1920 U.S. presidential election, but Cox lost to Republican nominee Warren G. Harding. In 1921, Roosevelt contracted a paralytic illness that permanently paralyzed his legs. Partly through the encouragement of his wife, Eleanor Roosevelt, he returned to public office as governor of New York from 1929 to 1933, during which time he promoted programs to combat the Great Depression besetting the U.S. In the 1932 presidential election, Roosevelt defeated Republican president Herbert Hoover in a landslide.
During his first 100 days as president, Roosevelt spearheaded unprecedented federal legislation and directed the federal government during most of the Great Depression, implementing the New Deal in response to the most significant economic crisis in American history. He also built the New Deal coalition, realigning American politics into the Fifth Party System and defining American liberalism throughout the middle third of the 20th century. He created numerous programs to provide relief to the unemployed and farmers while seeking economic recovery with the National Recovery Administration and other programs. He also instituted major regulatory reforms related to finance, communications, and labor, and presided over the end of Prohibition. In 1936, Roosevelt won a landslide reelection with the economy having improved from 1933, but the economy relapsed into a deep recession in 1937 and 1938. He was unable to expand the Supreme Court in 1937, the same year the conservative coalition was formed to block the implementation of further New Deal programs and reforms. Major surviving programs and legislation implemented under Roosevelt include the Securities and Exchange Commission, the National Labor Relations Act, the Federal Deposit Insurance Corporation, and Social Security. In 1940, he ran successfully for reelection, becoming the only American president to serve for more than two terms.
With World War II looming after 1938, following the Japanese invasion of China and the aggression of Nazi Germany, Roosevelt gave strong diplomatic and financial support to China, the United Kingdom, and the Soviet Union while the U.S. remained officially neutral. Following the Japanese attack on Pearl Harbor on December 7, 1941, he obtained a declaration of war on Japan the next day, and on Germany and Italy a few days later. He worked closely with other national leaders in leading the Allies against the Axis powers. Roosevelt supervised the mobilization of the American economy to support the war effort and implemented a Europe first strategy. He also initiated the development of the world's first atomic bomb and worked with the other Allied leaders to lay the groundwork for the United Nations and other post-war institutions, even coining the term "United Nations". Roosevelt won reelection in 1944 but died in 1945, after his physical health had seriously and steadily declined during the war years. Since then, several of his actions have come under substantial criticism, including his ordering of the internment of Japanese Americans in concentration camps. Nonetheless, historical rankings consistently place him as one of the greatest American presidents.
Franklin Delano Roosevelt was born on January 30, 1882, in the Hudson Valley town of Hyde Park, New York, to businessman James Roosevelt I and his second wife, Sara Ann Delano. His parents, who were sixth cousins, both came from wealthy, established New York families, the Roosevelts, the Aspinwalls and the Delanos, respectively. Roosevelt's paternal ancestor migrated to New Amsterdam in the 17th century, and the Roosevelts succeeded as merchants and landowners. The Delano family patriarch, Philip Delano, traveled to the New World on the Fortune in 1621, and the Delanos thrived as merchants and shipbuilders in Massachusetts. Franklin had a half-brother, James Roosevelt "Rosy" Roosevelt, from his father's previous marriage.
Roosevelt's father, James, graduated from Harvard Law School in 1851 but chose not to practice law after receiving an inheritance from his grandfather. James Roosevelt, a prominent Bourbon Democrat, once took Franklin to meet President Grover Cleveland, who said to him: "My little man, I am making a strange wish for you. It is that you may never be President of the United States." Franklin's mother, the dominant influence in his early years, once declared, "My son Franklin is a Delano, not a Roosevelt at all." James, who was 54 when Franklin was born, was considered by some as a remote father, though biographer James MacGregor Burns indicates James interacted with his son more than was typical at the time.
As a child, Roosevelt learned to ride, shoot, and sail; he also learned to play polo, tennis, and golf. Frequent trips to Europe—beginning at age two and from age seven to fifteen—helped Roosevelt become conversant in German and French. Except for attending public school in Germany at age nine, Roosevelt was home-schooled by tutors until age 14. He then attended Groton School, an Episcopal boarding school in Groton, Massachusetts. He was not among the more popular Groton students, who were better athletes and had rebellious streaks. Its headmaster, Endicott Peabody, preached the duty of Christians to help the less fortunate and urged his students to enter public service. Peabody remained a strong influence throughout Roosevelt's life, officiating at his wedding and visiting him as president.
Like most of his Groton classmates, Roosevelt went to Harvard College. He was a member of the Alpha Delta Phi fraternity and the Fly Club, and served as a school cheerleader. Roosevelt was relatively undistinguished as a student or athlete, but he became editor-in-chief of The Harvard Crimson daily newspaper, a position that required ambition, energy, and the ability to manage others. He later said, "I took economics courses in college for four years, and everything I was taught was wrong."
Roosevelt's father died in 1900, causing great distress for him. The following year, Roosevelt's fifth cousin Theodore Roosevelt became President of the United States. Theodore's vigorous leadership style and reforming zeal made him Franklin's role model and hero. He graduated from Harvard in three years in 1903 with an A.B. in history. He remained there for a fourth year, taking graduate courses and becoming an editor of the Harvard Crimson.
Roosevelt entered Columbia Law School in 1904 but dropped out in 1907 after passing the New York Bar Examination. In 1908, he took a job with the prestigious law firm of Carter Ledyard & Milburn, working in the firm's admiralty law division.
During his second year of college, Roosevelt met and proposed to Boston heiress Alice Sohier, who turned him down. Franklin then began courting his childhood acquaintance and fifth cousin once removed, Eleanor Roosevelt, a niece of Theodore Roosevelt. In 1903, Franklin proposed to Eleanor. Following resistance from Roosevelt's mother, Franklin and Eleanor Roosevelt were married on March 17, 1905. Eleanor's father, Elliott, was deceased; her uncle Theodore, who was then the president of the United States, gave away the bride. The young couple moved into Springwood. Franklin and Sara Roosevelt also provided a townhouse for the newlyweds in New York City, and Sara had a house built for herself alongside that townhouse. Eleanor never felt at home in the houses at Hyde Park or New York; however, she loved the family's vacation home on Campobello Island, which Sara also gave the couple. Burns indicates that young Franklin Roosevelt was self-assured and at ease in the upper class, whereas Eleanor was then shy and disliked social life. Initially, Eleanor stayed home to raise their children. As his father had done, Franklin left the raising of the children to his wife, and Eleanor delegated the task to caregivers. She later said that she knew "absolutely nothing about handling or feeding a baby." Although Eleanor thought sex was "an ordeal to be endured", she and Franklin had six children. Anna, James, and Elliott were born in 1906, 1907, and 1910, respectively. The couple's second son, Franklin, died in infancy in 1909. Another son, also named Franklin, was born in 1914, and the youngest child, John, was born in 1916.
Roosevelt had several extramarital affairs. He commenced an affair with Eleanor's social secretary, Lucy Mercer, soon after she was hired in 1914. That affair was discovered by Eleanor in 1918. Franklin contemplated divorcing Eleanor, but Sara objected, and Mercer would not marry a divorced man with five children. Franklin and Eleanor remained married, and Franklin promised never to see Mercer again. Eleanor never forgave him for the affair, and their marriage shifted to become a political partnership. Eleanor soon established a separate home in Hyde Park at Val-Kill and devoted herself to social and political causes independent of her husband. The emotional break in their marriage was so severe that when Franklin asked Eleanor in 1942—in light of his failing health—to come back home and live with him again, she refused. Roosevelt was not always aware of Eleanor's visits to the White House. For some time, Eleanor could not easily reach Roosevelt on the telephone without his secretary's help; Franklin, in turn, did not visit Eleanor's New York City apartment until late 1944.
Franklin broke his promise to Eleanor regarding Lucy Mercer. He and Mercer maintained a formal correspondence, and began seeing each other again in 1941 or earlier. Roosevelt's son Elliott claimed that his father had a 20-year affair with his private secretary, Marguerite "Missy" LeHand. Another son, James, stated that "there is a real possibility that a romantic relationship existed" between his father and Crown Princess Märtha of Norway, who resided in the White House during part of World War II. Aides began to refer to her at the time as "the president's girlfriend", and gossip linking the two romantically appeared in the newspapers.
Roosevelt cared little for the practice of law and told friends he planned to enter politics. Despite his admiration for cousin Theodore, Franklin shared his father's bond with the Democratic Party, and in preparation for the 1910 elections, the party recruited Roosevelt to run for a seat in the New York State Assembly. Roosevelt was a compelling recruit for the party. He had the personality and energy for campaigning, and he had the money to pay for his own campaign. But Roosevelt's campaign for the state assembly ended after the Democratic incumbent, Lewis Stuyvesant Chanler, chose to seek re-election. Rather than putting his political hopes on hold, Roosevelt ran for a seat in the state senate. The senate district, located in Dutchess, Columbia, and Putnam counties, was strongly Republican. Roosevelt feared that opposition from Theodore could end his campaign, but Theodore encouraged his candidacy despite their party differences. Acting as his own campaign manager, Roosevelt traveled throughout the senate district via automobile at a time when few could afford a car. Due to his aggressive campaign, his name recognition in the Hudson Valley, and the Democratic landslide in the 1910 United States elections, Roosevelt won a surprising victory.
Despite short legislative sessions, Roosevelt treated his new position as a full-time career. Taking his seat on January 1, 1911, Roosevelt soon became the leader of a group of "Insurgents" in opposition to the Tammany Hall machine that dominated the state Democratic Party. In the 1911 U.S. Senate election, which was determined in a joint session of the New York state legislature, Roosevelt and nineteen other Democrats caused a prolonged deadlock by opposing a series of Tammany-backed candidates. Tammany threw its backing behind James A. O'Gorman, a highly regarded judge whom Roosevelt found acceptable, and O'Gorman won the election in late March. Roosevelt in the process became a popular figure among New York Democrats. News articles and cartoons depicted "the second coming of a Roosevelt", sending "cold shivers down the spine of Tammany".
Roosevelt opposed Tammany Hall by supporting New Jersey Governor Woodrow Wilson's successful bid for the 1912 Democratic nomination. The election became a three-way contest when Theodore Roosevelt left the Republican Party to launch a third-party campaign against Wilson and sitting Republican President William Howard Taft. Franklin's decision to back Wilson over his cousin in the general election alienated some members of his family, though not Theodore himself. Roosevelt overcame a bout of typhoid fever, and with help from journalist Louis McHenry Howe, he was re-elected in the 1912 elections. After the election, he served as chairman of the Agriculture Committee, and his success with farm and labor bills was a precursor to his New Deal policies years later. By then he had become more consistently progressive, supporting labor and social welfare programs.
Roosevelt's support of Wilson led to his appointment in March 1913 as Assistant Secretary of the Navy, the second-ranking official in the Navy Department after Secretary Josephus Daniels, who paid it little attention. Roosevelt had an affection for the Navy, was well-read on the subject, and was an ardent supporter of a large, efficient force. With Wilson's support, Daniels and Roosevelt instituted a merit-based promotion system and made other reforms to extend civilian control over the autonomous departments of the Navy. Roosevelt oversaw the Navy's civilian employees and earned the respect of union leaders for his fairness in resolving disputes. No strikes occurred during his seven-plus years in office, during which he gained valuable experience in labor issues, wartime management, naval issues, and logistics.
In 1914, Roosevelt ran for the seat of retiring Republican Senator Elihu Root of New York. Though he had the backing of Treasury Secretary William Gibbs McAdoo and Governor Martin H. Glynn, he faced a formidable opponent in Tammany Hall's James W. Gerard. He was also without Wilson's support, as the president needed Tammany's forces for his legislation and 1916 re-election. Roosevelt was soundly defeated in the Democratic primary by Gerard, who in turn lost the general election to Republican James Wolcott Wadsworth Jr. He learned that federal patronage alone, without White House support, could not defeat a strong local organization. After the election, he and Tammany Hall boss Charles Francis Murphy sought accommodation and became allies.
Roosevelt refocused on the Navy Department, as World War I broke out in Europe in August 1914. Though he remained publicly supportive of Wilson, Roosevelt sympathized with the Preparedness Movement, whose leaders strongly favored the Allied Powers and called for a military build-up. The Wilson administration initiated an expansion of the Navy after the sinking of the RMS Lusitania by a German submarine, and Roosevelt helped establish the United States Navy Reserve and the Council of National Defense. In April 1917, after Germany declared it would engage in unrestricted submarine warfare and attacked several U.S. ships, Congress approved Wilson's call for a declaration of war on Germany.
Roosevelt requested that he be allowed to serve as a naval officer, but Wilson insisted that he continue to serve as Assistant Secretary. For the next year, Roosevelt remained in Washington to coordinate the deployment of naval vessels and personnel, as the Navy expanded fourfold. In the summer of 1918, Roosevelt traveled to Europe to inspect naval installations and meet with French and British officials. In September, he returned to the United States on board the USS Leviathan. On the 11-day voyage, the pandemic influenza virus struck and killed many on board. Roosevelt became very ill with influenza and complicating pneumonia, but recovered by the time the ship landed in New York. After Germany signed an armistice in November 1918, Daniels and Roosevelt supervised the demobilization of the Navy. Against the advice of older officers such as Admiral William Benson—who claimed he could not "conceive of any use the fleet will ever have for aviation"—Roosevelt personally ordered the preservation of the Navy's Aviation Division. With the Wilson administration near an end, Roosevelt planned his next run for office. He approached Herbert Hoover about running for the 1920 Democratic presidential nomination, with Roosevelt as his running mate.
Roosevelt's plan for Hoover to run for the nomination fell through after Hoover publicly declared himself to be a Republican, but Roosevelt decided to seek the 1920 vice presidential nomination. After Governor James M. Cox of Ohio won the party's presidential nomination at the 1920 Democratic National Convention, he chose Roosevelt as his running mate, and the convention nominated him by acclamation. Although his nomination surprised most people, he balanced the ticket as a moderate, a Wilsonian, and a prohibitionist with a famous name. Roosevelt, then 38, resigned as Assistant Secretary after the Democratic convention and campaigned across the nation for the party ticket.
During the campaign, Cox and Roosevelt defended the Wilson administration and the League of Nations, both of which were unpopular in 1920. Roosevelt personally supported U.S. membership in the League of Nations, but, unlike Wilson, he favored compromising with Senator Henry Cabot Lodge and other "Reservationists". The Cox–Roosevelt ticket was defeated by Republicans Warren G. Harding and Calvin Coolidge in the presidential election by a wide margin, and the Republican ticket carried every state outside of the South. Roosevelt accepted the loss without issue and later reflected that the relationships and goodwill that he built in the 1920 campaign proved to be a major asset in his 1932 campaign. The 1920 election also saw the first public participation of Eleanor Roosevelt who, with the support of Louis Howe, established herself as a valuable political player. After the election, Roosevelt returned to New York City, where he practiced law and served as a vice president of the Fidelity and Deposit Company.
Roosevelt's future political career came under threat when his role as head of Section A of the Office of the Assistant Secretary of the U.S. Navy in the Newport Sex Scandal became public knowledge. Secretary Daniels had created Section A, commonly known as the Newport Sex Squad, in 1919 to investigate homosexual activity at the US naval base in Newport, Rhode Island. Investigations by both a naval board of inquiry and the U.S. Senate Committee on Naval Affairs revealed a pattern of entrapment and intimidation by forty-one operatives acting under Roosevelt's authority. In 1921, the Senate Committee's final report concluded that Roosevelt's "direct supervision" made him "morally responsible" for these abuses and even suggested that he was unfit to hold any public office. The front-page story on the report in The New York Times on July 23, 1921, featured the headline, "Lay Navy Scandal to F. D. Roosevelt — Details Are Unprintable."
Roosevelt sought to build support for a political comeback in the 1922 elections, but his career was derailed by an illness that began less than three weeks after the U.S. Senate Committee on Naval Affairs issued its final report on the Newport Sex Scandal. While the Roosevelts were vacationing at Campobello Island in August 1921, he fell ill. His main symptoms were fever; symmetric, ascending paralysis; facial paralysis; bowel and bladder dysfunction; numbness and hyperesthesia; and a descending pattern of recovery. Roosevelt was left permanently paralyzed from the waist down and was diagnosed with polio. Historians have noted a 2003 study strongly favoring a diagnosis of Guillain–Barré syndrome, but have continued to describe his paralysis according to the initial diagnosis.
Though his mother favored his retirement from public life, Roosevelt, his wife, and Roosevelt's close friend and adviser, Louis Howe, were all determined that he continue his political career. He convinced many people that he was improving, which he believed to be essential prior to running for public office again. He laboriously taught himself to walk short distances while wearing iron braces on his hips and legs, by swiveling his torso while supporting himself with a cane. He was careful never to be seen using his wheelchair in public, and great care was taken to prevent any portrayal in the press that would highlight his disability. However, his disability was well known before and during his presidency and became a major part of his image. He usually appeared in public standing upright, supported on one side by an aide or one of his sons.
Beginning in 1925, Roosevelt spent most of his time in the Southern United States, at first on his houseboat, the Larooco. Intrigued by the potential benefits of hydrotherapy, he established a rehabilitation center at Warm Springs, Georgia, in 1926. To create the rehabilitation center, he assembled a staff of physical therapists and used most of his inheritance to purchase the Merriweather Inn. In 1938, he founded the National Foundation for Infantile Paralysis, leading to the development of polio vaccines.
Roosevelt maintained contacts with the Democratic Party during the 1920s, and he remained active in New York politics while also establishing contacts in the South, particularly in Georgia. He issued an open letter endorsing Al Smith's successful campaign in New York's 1922 gubernatorial election, which both aided Smith and showed Roosevelt's continuing relevance as a political figure. Roosevelt and Smith came from different backgrounds and never fully trusted one another, but Roosevelt supported Smith's progressive policies, while Smith was happy to have the backing of the prominent and well-respected Roosevelt.
Roosevelt gave presidential nominating speeches for Smith at the 1924 and 1928 Democratic National Conventions; the speech at the 1924 convention marked a return to public life following his illness and convalescence. That year, the Democrats were badly divided between an urban wing, led by Smith, and a conservative, rural wing, led by William Gibbs McAdoo. On the 103rd ballot, the nomination went to John W. Davis, a compromise candidate who suffered a landslide defeat in the 1924 presidential election. Like many others throughout the United States, Roosevelt did not abstain from alcohol during the Prohibition era, but publicly he sought to find a compromise on Prohibition acceptable to both wings of the party.
In 1925, Smith appointed Roosevelt to the Taconic State Park Commission, and his fellow commissioners chose him as chairman. In this role, he came into conflict with Robert Moses, a Smith protégé, who was the primary force behind the Long Island State Park Commission and the New York State Council of Parks. Roosevelt accused Moses of using the name recognition of prominent individuals including Roosevelt to win political support for state parks, but then diverting funds to the ones Moses favored on Long Island, while Moses worked to block the appointment of Howe to a salaried position as the Taconic commission's secretary. Roosevelt served on the commission until the end of 1928, and his contentious relationship with Moses continued as their careers progressed.
Peace was the catchword of the 1920s, and in 1923 Edward Bok established the $100,000 American Peace Award for the best plan to bring peace to the world. Roosevelt, who had both the leisure time and the interest, drafted a plan for the contest. He never submitted it because his wife Eleanor Roosevelt was selected as a judge for the prize. His plan called for a new world organization that would replace the League of Nations. Although Roosevelt had been the vice-presidential candidate on the Democratic ticket of 1920 that supported the League of Nations, by 1924 he was ready to scrap it. His draft of a "Society of Nations" accepted the reservations proposed by Henry Cabot Lodge in the 1919 Senate debate. The new Society would not become involved in the Western Hemisphere, where the Monroe Doctrine held sway. It would not have any control over military forces. Although Roosevelt's plan was never made public, he thought about the problem a great deal and incorporated some of his 1924 ideas into the design for the United Nations in 1944–1945.
Smith, the Democratic presidential nominee in the 1928 election, asked Roosevelt to run for governor of New York in the 1928 state election. Roosevelt initially resisted, as he was reluctant to leave Warm Springs and feared a Republican landslide in 1928. Party leaders eventually convinced him only he could defeat the Republican gubernatorial nominee, New York Attorney General Albert Ottinger. He won the party's gubernatorial nomination by acclamation and again turned to Howe to lead his campaign. Roosevelt was also joined on the campaign trail by associates Samuel Rosenman, Frances Perkins, and James Farley. While Smith lost the presidency in a landslide, and was defeated in his home state, Roosevelt was elected governor by a one-percent margin, and became a contender in the next presidential election.
Roosevelt proposed the construction of hydroelectric power plants and addressed the ongoing farm crisis of the 1920s. Relations between Roosevelt and Smith suffered after Roosevelt chose not to retain key Smith appointees like Moses. He and his wife Eleanor established an understanding for the rest of his career; she would dutifully serve as the governor's wife but would also be free to pursue her own agenda and interests. He also began holding "fireside chats", in which he directly addressed his constituents via radio, often pressuring the New York State Legislature to advance his agenda.
In October 1929, the Wall Street Crash occurred, and with it came the Great Depression in the United States. Roosevelt saw the seriousness of the situation and established a state employment commission. He also became the first governor to publicly endorse the idea of unemployment insurance.
When Roosevelt began his run for a second term in May 1930, he reiterated his doctrine from the campaign two years before: "that progressive government by its very terms must be a living and growing thing, that the battle for it is never-ending and that if we let up for one single moment or one single year, not merely do we stand still but we fall back in the march of civilization." He ran on a platform that called for aid to farmers, full employment, unemployment insurance, and old-age pensions. He was elected to a second term by a 14% margin.
Roosevelt proposed an economic relief package and the establishment of the Temporary Emergency Relief Administration to distribute those funds. Led first by Jesse I. Straus and then by Harry Hopkins, the agency assisted well over one-third of New York's population between 1932 and 1938. Roosevelt also began an investigation into corruption in New York City among the judiciary, the police force, and organized crime, prompting the creation of the Seabury Commission. The Seabury investigations exposed an extortion ring, led many public officials to be removed from office, and made the decline of Tammany Hall inevitable.
Roosevelt supported reforestation with the Hewitt Amendment in 1931, which gave birth to New York's State Forest system.
As the 1932 presidential election approached, Roosevelt turned his attention to national politics, established a campaign team led by Howe and Farley, and a "brain trust" of policy advisers, primarily composed of Columbia University and Harvard University professors. There were some who were not so sanguine about his chances, such as Walter Lippmann, the dean of political commentators, who observed of Roosevelt: "He is a pleasant man who, without any important qualifications for the office, would very much like to be president."
However, Roosevelt's efforts as governor to address the effects of the depression in his own state established him as the front-runner for the 1932 Democratic presidential nomination. Roosevelt rallied the progressive supporters of the Wilson administration while also appealing to many conservatives, establishing himself as the leading candidate in the South and West. The chief opposition to Roosevelt's candidacy came from Northeastern conservatives such as Al Smith, the 1928 Democratic presidential nominee, and from Speaker of the House John Nance Garner of Texas.
Roosevelt entered the convention with a delegate lead due to his success in the 1932 Democratic primaries, but most delegates entered the convention unbound to any particular candidate. On the first presidential ballot of the convention, Roosevelt received the votes of more than half but less than two-thirds of the delegates, with Smith finishing in a distant second place. Roosevelt then promised the vice-presidential nomination to Garner, who controlled the votes of Texas and California; Garner threw his support behind Roosevelt after the third ballot, and Roosevelt clinched the nomination on the fourth ballot. Roosevelt flew in from New York to Chicago after learning that he had won the nomination, becoming the first major-party presidential nominee to accept the nomination in person. His appearance was essential, to show himself as vigorous, despite the ravaging disease that disabled him physically.
In his acceptance speech, Roosevelt declared, "I pledge you, I pledge myself to a new deal for the American people... This is more than a political campaign. It is a call to arms." Roosevelt promised securities regulation, tariff reduction, farm relief, government-funded public works, and other government actions to address the Great Depression. Reflecting changing public opinion, the Democratic platform included a call for the repeal of Prohibition; Roosevelt himself had not taken a public stand on the issue prior to the convention but promised to uphold the party platform. Otherwise, Roosevelt's primary campaign strategy was one of caution, intent upon avoiding mistakes that would distract from Hoover's failings on the economy. His statements attacked the incumbent and included no other specific policies or programs.
After the convention, Roosevelt won endorsements from several progressive Republicans, including George W. Norris, Hiram Johnson, and Robert La Follette Jr. He also reconciled with the party's conservative wing, and even Al Smith was persuaded to support the Democratic ticket. Hoover's handling of the Bonus Army further damaged the incumbent's popularity, as newspapers across the country criticized the use of force to disperse assembled veterans.
Roosevelt won 57% of the popular vote and carried all but six states. Historians and political scientists consider the 1932–36 elections to be a political realignment. Roosevelt's victory was enabled by the creation of the New Deal coalition of small farmers, Southern whites, Catholics, big city political machines, labor unions, northern African Americans (southern ones were still disfranchised), Jews, intellectuals, and political liberals. The creation of the New Deal coalition transformed American politics and started what political scientists call the "New Deal Party System" or the Fifth Party System. Between the Civil War and 1929, Democrats had rarely controlled both houses of Congress and had won just four of seventeen presidential elections; from 1932 to 1979, Democrats won eight of twelve presidential elections and generally controlled both houses of Congress.
Roosevelt was elected in November 1932 but like his predecessors did not take office until the following March. After the election, President Hoover sought to convince Roosevelt to renounce much of his campaign platform and to endorse the Hoover administration's policies. Roosevelt refused Hoover's request to develop a joint program to stop the economic decline, claiming that it would tie his hands and that Hoover had the power to act.
During the transition, Roosevelt chose Howe as his chief of staff, and Farley as Postmaster General. Frances Perkins, as Secretary of Labor, became the first woman appointed to a cabinet position. William H. Woodin, a Republican industrialist close to Roosevelt, was the choice for Secretary of the Treasury, while Roosevelt chose Senator Cordell Hull of Tennessee as Secretary of State. Harold L. Ickes and Henry A. Wallace, two progressive Republicans, were selected for the roles of Secretary of the Interior and Secretary of Agriculture, respectively.
In February 1933, Roosevelt escaped an assassination attempt by Giuseppe Zangara, who expressed a "hate for all rulers." As he was attempting to shoot Roosevelt, Zangara was struck by a woman with her purse; he instead mortally wounded Chicago Mayor Anton Cermak, who was sitting alongside Roosevelt.
As president, Roosevelt appointed powerful men to top positions but made all the major decisions, regardless of delays, inefficiency, or resentment. Analyzing the president's administrative style, Burns concludes:
The president stayed in charge of his administration...by drawing fully on his formal and informal powers as Chief Executive; by raising goals, creating momentum, inspiring a personal loyalty, getting the best out of people...by deliberately fostering among his aides a sense of competition and a clash of wills that led to disarray, heartbreak, and anger but also set off pulses of executive energy and sparks of creativity...by handing out one job to several men and several jobs to one man, thus strengthening his own position as a court of appeals, as a depository of information, and as a tool of co-ordination; by ignoring or bypassing collective decision-making agencies, such as the Cabinet...and always by persuading, flattering, juggling, improvising, reshuffling, harmonizing, conciliating, manipulating.
When Roosevelt was inaugurated on March 4, 1933, the U.S. was at the nadir of the worst depression in its history. A quarter of the workforce was unemployed, and farmers were in deep trouble as prices had fallen by 60%. Industrial production had fallen by more than half since 1929. Two million people were homeless. By the evening of March 4, 32 of the 48 states—as well as the District of Columbia—had closed their banks.
Historians categorized Roosevelt's program as "relief, recovery, and reform." Relief was urgently needed by tens of millions of unemployed. Recovery meant boosting the economy back to normal, and reform was required of the financial and banking systems. In a series of 30 "fireside chats", Roosevelt presented his proposals directly to the American public over the radio. Energized by his own victory over paralytic illness, he used persistent optimism and activism to renew the national spirit.
On his second day in office, Roosevelt declared a four-day national "bank holiday", to end the run by depositors seeking to withdraw funds. He called for a special session of Congress on March 9, when Congress passed, almost sight unseen, the Emergency Banking Act. The act, first developed by the Hoover administration and Wall Street bankers, gave the president the power to determine the opening and closing of banks and authorized the Federal Reserve Banks to issue banknotes. The "first 100 Days" of the 73rd United States Congress saw an unprecedented amount of legislation and set a benchmark against which future presidents have been compared. When the banks reopened on Monday, March 13, stock prices rose by 15 percent, and in the following weeks over $1 billion was returned to bank vaults, ending the bank panic. On March 22, Roosevelt signed the Cullen–Harrison Act, which legalized the sale of low-alcohol beer and wine, a first step toward ending Prohibition.
Roosevelt saw the establishment of a number of agencies and measures designed to provide relief for the unemployed and others. The Federal Emergency Relief Administration (FERA), under the leadership of Harry Hopkins, distributed relief to state governments. The Public Works Administration (PWA), under Secretary of the Interior Harold Ickes, oversaw the construction of large-scale public works such as dams, bridges, and schools. The most popular of all New Deal agencies—and Roosevelt's favorite—was the Civilian Conservation Corps (CCC), which hired 250,000 unemployed men to work in rural projects. Roosevelt also expanded Hoover's Reconstruction Finance Corporation, which financed railroads and industry. Congress gave the Federal Trade Commission (FTC) broad regulatory powers and provided mortgage relief to millions of farmers and homeowners. Roosevelt also set up the Agricultural Adjustment Administration (AAA) to increase commodity prices by paying farmers to leave land uncultivated and cut herds. In many instances, crops were plowed under and livestock killed, while many Americans died of hunger and were ill-clothed; critics labeled such policies "utterly idiotic." On the positive side, nothing did more to rescue the farm family from isolation than the Rural Electrification Administration (REA), which brought electricity for the first time to millions of rural homes and with it such conveniences as radios and washing machines.
Reform of the economy was the goal of the National Industrial Recovery Act (NIRA) of 1933. It sought to end cutthroat competition by forcing industries to establish rules such as minimum prices, agreements not to compete, and production restrictions. Industry leaders negotiated the rules with NIRA officials, who suspended antitrust laws in return for better wages. The Supreme Court in May 1935 declared NIRA unconstitutional by a unanimous decision, to Roosevelt's chagrin. He reformed financial regulations with the Glass–Steagall Act, creating the Federal Deposit Insurance Corporation (FDIC) to underwrite savings deposits. The act also limited affiliations between commercial banks and securities firms. In 1934, the Securities and Exchange Commission was created to regulate the trading of securities, while the Federal Communications Commission (FCC) was established to regulate telecommunications.
Recovery was sought through federal spending, as the NIRA included $3.3 billion (equivalent to $74.6 billion in 2022) of spending through the Public Works Administration. Roosevelt worked with Senator Norris to create the largest government-owned industrial enterprise in American history—the Tennessee Valley Authority (TVA)—which built dams and power stations, controlled floods, and modernized agriculture and home conditions in the poverty-stricken Tennessee Valley. However, local residents criticized the TVA for displacing thousands of people for these projects. The Soil Conservation Service trained farmers in the proper methods of cultivation, and with the TVA, Roosevelt became the father of soil conservation. Executive Order 6102 declared that all privately held gold of American citizens was to be sold to the U.S. Treasury and the price raised from $20 to $35 per ounce. The goal was to counter the deflation that was paralyzing the economy.
Roosevelt tried to keep his campaign promise to cut the federal budget. This included a reduction in military spending from $752 million in 1932 to $531 million in 1934 and a 40% cut in spending on veterans benefits. Some 500,000 veterans and widows were removed from the pension rolls, and benefits were reduced for the remainder. Federal salaries were cut and spending on research and education was reduced. The veterans were well organized and strongly protested, so most benefits were restored or increased by 1934. Veterans groups such as the American Legion and the Veterans of Foreign Wars won their campaign to transform their benefits from payments due in 1945 to immediate cash when Congress overrode the President's veto and passed the Bonus Act in January 1936. It pumped sums equal to 2% of the GDP into the consumer economy and had a major stimulus effect.
Roosevelt expected that his party would lose seats in the 1934 Congressional elections, as the president's party had done in most previous midterm elections. Unexpectedly, the Democrats picked up seats in both houses of Congress. Empowered by the public's vote of confidence, Roosevelt made the creation of a social insurance program the first item on his agenda in the 74th Congress. The Social Security Act established Social Security and promised economic security for the elderly, the poor, and the sick. Roosevelt insisted that it should be funded by payroll taxes rather than from the general fund, saying, "We put those payroll contributions there so as to give the contributors a legal, moral, and political right to collect their pensions and unemployment benefits. With those taxes in there, no damn politician can ever scrap my social security program." Compared with the social security systems in western European countries, the Social Security Act of 1935 was rather conservative. But for the first time, the federal government took responsibility for the economic security of the aged, the temporarily unemployed, dependent children, and disabled people. Contrary to Roosevelt's original intention of universal coverage, the act excluded farmers, domestic workers, and other groups that together made up about forty percent of the labor force.
Roosevelt consolidated the various relief organizations, though some, like the PWA, continued to exist. After winning Congressional authorization for further funding of relief efforts, he established the Works Progress Administration (WPA). Under the leadership of Harry Hopkins, the WPA employed over three million people in its first year of operations. It undertook numerous massive construction projects in cooperation with local governments. It also set up the National Youth Administration and arts organizations.
The National Labor Relations Act guaranteed workers the right to collective bargaining through unions of their own choice. The act also established the National Labor Relations Board (NLRB) to facilitate wage agreements and suppress repeated labor disturbances. The act did not compel employers to reach an agreement with their employees, but it opened possibilities for American labor. The result was a tremendous growth of membership in the labor unions, especially in the mass-production sector. When the Flint sit-down strike threatened the production of General Motors, Roosevelt broke with the precedent set by many former presidents and refused to intervene; the strike ultimately led to the unionization of both General Motors and its rivals in the American automobile industry.
While the First New Deal of 1933 had broad support from most sectors, the Second New Deal challenged the business community. Conservative Democrats, led by Al Smith, fought back with the American Liberty League, savagely attacking Roosevelt and equating him with socialism. But Smith overplayed his hand, and his boisterous rhetoric let Roosevelt isolate his opponents and identify them with the wealthy vested interests that opposed the New Deal, strengthening Roosevelt for the 1936 landslide. By contrast, labor unions, energized by labor legislation, signed up millions of new members and became a major backer of Roosevelt's re-elections in 1936, 1940, and 1944.
Burns suggests that Roosevelt's policy decisions were guided more by pragmatism than ideology and that he "was like the general of a guerrilla army whose columns, fighting blindly in the mountains through dense ravines and thickets, suddenly converge, half by plan and half by coincidence, and debouch into the plain below." Roosevelt argued that such apparently haphazard methodology was necessary. "The country needs and, unless I mistake its temper, the country demands bold, persistent experimentation," he wrote. "It is common sense to take a method and try it; if it fails, admit it frankly and try another. But above all, try something."
Eight million workers remained unemployed in 1936, and though economic conditions had improved since 1932, they remained sluggish. By 1936, Roosevelt had lost the backing he once held in the business community because of his support for the NLRB and the Social Security Act. The Republicans had few alternative candidates and nominated Kansas Governor Alf Landon, a little-known bland candidate whose chances were damaged by the public re-emergence of the still-unpopular Herbert Hoover. While Roosevelt campaigned on his New Deal programs and continued to attack Hoover, Landon sought to win voters who approved of the goals of the New Deal but disagreed with its implementation.
An attempt by Louisiana Senator Huey Long to organize a left-wing third party collapsed after Long's assassination in 1935. The remnants, helped by Father Charles Coughlin, supported William Lemke of the newly formed Union Party. Roosevelt won re-nomination with little opposition at the 1936 Democratic National Convention, while his allies overcame Southern resistance to permanently abolish the long-established rule that had required Democratic presidential candidates to win the votes of two-thirds of the delegates rather than a simple majority.
In the election against Landon and a third-party candidate, Roosevelt won 60.8% of the vote and carried every state except Maine and Vermont. The Democratic ticket won the highest proportion of the popular vote recorded up to that time. Democrats also expanded their majorities in Congress, winning control of over three-quarters of the seats in each house. The election also saw the consolidation of the New Deal coalition; while the Democrats lost some of their traditional allies in big business, they were replaced by groups such as organized labor and African Americans, the latter of whom voted Democratic for the first time since the Civil War. Roosevelt lost high-income voters, especially businessmen and professionals, but made major gains among the poor and minorities. He won 86 percent of the Jewish vote, 81 percent of Catholics, 80 percent of union members, 76 percent of Southerners, 76 percent of blacks in northern cities, and 75 percent of people on relief. Roosevelt carried 102 of the country's 106 cities with a population of 100,000 or more.
The Supreme Court became Roosevelt's primary domestic focus during his second term after the court overturned many of his programs, including NIRA. The more conservative members of the court upheld the principles of the Lochner era, which saw numerous economic regulations struck down on the basis of freedom of contract. Roosevelt proposed the Judicial Procedures Reform Bill of 1937, which would have allowed him to appoint an additional Justice for each incumbent Justice over the age of 70; in 1937, there were six Supreme Court Justices over the age of 70. The size of the Court had been set at nine since the passage of the Judiciary Act of 1869, and Congress had altered the number of Justices six other times throughout U.S. history. Roosevelt's "court packing" plan ran into intense political opposition from his own party, led by Vice President Garner, since it upset the separation of powers. A coalition of liberals and conservatives of both parties opposed the bill, and Chief Justice Charles Evans Hughes broke with precedent by publicly advocating the defeat of the bill. Any chance of passing the bill ended with the death of Senate Majority Leader Joseph Taylor Robinson in July 1937.
Starting with the 1937 case of West Coast Hotel Co. v. Parrish, the court began to take a more favorable view of economic regulations. Historians have described this as "the switch in time that saved nine." That same year, Roosevelt appointed a Supreme Court Justice for the first time, and by 1941, seven of the nine Justices had been appointed by Roosevelt. After Parrish, the Court shifted its focus from judicial review of economic regulations to the protection of civil liberties. Four of Roosevelt's Supreme Court appointees, Felix Frankfurter, Robert H. Jackson, Hugo Black, and William O. Douglas, were particularly influential in reshaping the jurisprudence of the Court.
With Roosevelt's influence on the wane following the failure of the Judicial Procedures Reform Bill of 1937, conservative Democrats joined with Republicans to block the implementation of further New Deal programs. Roosevelt did manage to pass some legislation, including the Housing Act of 1937, a second Agricultural Adjustment Act, and the Fair Labor Standards Act (FLSA) of 1938, which was the last major piece of New Deal legislation. The FLSA outlawed child labor, established a federal minimum wage, and required overtime pay for certain employees working more than forty hours per week. He also won passage of the Reorganization Act of 1939 and subsequently created the Executive Office of the President, making it "the nerve center of the federal administrative system." When the economy began to deteriorate again in mid-1937, during the onset of the recession of 1937–1938, Roosevelt launched a rhetorical campaign against big business and monopoly power in the United States, alleging that the recession was the result of a capital strike and even ordering the Federal Bureau of Investigation to look for a criminal conspiracy (it found none). He then asked Congress for $5 billion (equivalent to $101.78 billion in 2022) in relief and public works funding. This funding eventually created as many as 3.3 million WPA jobs by 1938. Projects accomplished under the WPA ranged from new federal courthouses and post offices to facilities and infrastructure for national parks, bridges, and other public works across the country, as well as architectural surveys and archaeological excavations—investments to construct facilities and preserve important resources. Beyond this, however, Roosevelt recommended to a special congressional session only a permanent national farm act, administrative reorganization, and regional planning measures, all of which were leftovers from a regular session. According to Burns, this attempt illustrated Roosevelt's inability to settle on a basic economic program.
Determined to overcome the opposition of conservative Democrats in Congress, Roosevelt became involved in the 1938 Democratic primaries, actively campaigning for challengers who were more supportive of New Deal reform. Roosevelt failed badly, managing to defeat only one of the ten targeted incumbents, a conservative Democrat from New York City. In the November 1938 elections, Democrats lost six Senate seats and 71 House seats, with losses concentrated among pro-New Deal Democrats. When Congress reconvened in 1939, Republicans under Senator Robert Taft formed a Conservative coalition with Southern Democrats, virtually ending Roosevelt's ability to enact his domestic proposals. Despite their opposition to Roosevelt's domestic policies, many of these conservative Congressmen would provide crucial support for Roosevelt's foreign policy before and during World War II.
Roosevelt had a lifelong interest in the environment and conservation starting with his youthful interest in forestry on his family estate. Although he was never an outdoorsman or sportsman on Theodore Roosevelt's scale, his expansion of the national park and forest systems was comparable. When Franklin was Governor of New York, the Temporary Emergency Relief Administration was essentially a state-level predecessor of the federal Civilian Conservation Corps, with 10,000 or more men building fire trails, combating soil erosion and planting tree seedlings in marginal farmland in the state of New York. As President, Roosevelt was active in expanding, funding, and promoting the National Park and National Forest systems. Their popularity soared, from three million visitors a year at the start of the decade to 15.5 million in 1939. The Civilian Conservation Corps enrolled 3.4 million young men and built 13,000 miles (21,000 kilometres) of trails, planted two billion trees, and upgraded 125,000 miles (201,000 kilometres) of dirt roads. Every state had its own state parks, and Roosevelt made sure that WPA and CCC projects were set up to upgrade them as well as the national systems.
Government spending increased from 8.0% of the gross national product (GNP) under Hoover in 1932 to 10.2% in 1936. The national debt as a percentage of GNP had more than doubled under Hoover, from 16% to 40% in early 1933. It held steady at close to 40% as late as fall 1941, then grew rapidly during the war. The GNP was 34% higher in 1936 than in 1932 and 58% higher in 1940 on the eve of war. That is, the economy grew 58% from 1932 to 1940 in eight years of peacetime, and then grew 56% from 1940 to 1945 in five years of wartime. Unemployment fell dramatically during Roosevelt's first term. It increased in 1938 ("a depression within a depression") but continually declined after 1938. Total employment during Roosevelt's term expanded by 18.31 million jobs, with an average annual increase in jobs during his administration of 5.3%.
The main foreign policy initiative of Roosevelt's first term was the Good Neighbor Policy, which was a re-evaluation of U.S. policy toward Latin America. The United States frequently intervened in Latin America following the promulgation of the Monroe Doctrine in 1823, and the United States occupied several Latin American nations in the Banana Wars that occurred following the Spanish–American War of 1898. After Roosevelt took office, he withdrew U.S. forces from Haiti and reached new treaties with Cuba and Panama, ending their status as U.S. protectorates. In December 1933, Roosevelt signed the Montevideo Convention on the Rights and Duties of States, renouncing the right to intervene unilaterally in the affairs of Latin American countries. Roosevelt also normalized relations with the Soviet Union, which the United States had refused to recognize since the 1920s. He hoped to renegotiate the Russian debt from World War I and open trade relations, but no progress was made on either issue and "both nations were soon disillusioned by the accord."
The rejection of the Treaty of Versailles in 1919–1920 marked the dominance of isolationism in American foreign policy. Despite Roosevelt's Wilsonian background, he and Secretary of State Cordell Hull acted with great care not to provoke isolationist sentiment. The isolationist movement was bolstered in the early to mid-1930s by Senator Gerald Nye and others who succeeded in their effort to stop the "merchants of death" in the U.S. from selling arms abroad. This effort took the form of the Neutrality Acts; the president was denied a provision he requested that would have given him the discretion to allow the sale of arms to victims of aggression. He largely acquiesced to Congress's non-interventionist policies in the early-to-mid 1930s. In the interim, Fascist Italy under Benito Mussolini conquered Ethiopia, and the Italians joined Nazi Germany under Adolf Hitler in supporting General Francisco Franco and the Nationalist cause in the Spanish Civil War. As that conflict drew to a close in early 1939, Roosevelt expressed regret at not aiding the Spanish Republicans. When Japan invaded China in 1937, isolationism limited Roosevelt's ability to aid China, despite atrocities like the Nanking Massacre and the USS Panay incident.
Germany annexed Austria in 1938, and soon turned its attention to its eastern neighbors. Roosevelt made it clear that, in the event of German aggression against Czechoslovakia, the U.S. would remain neutral. After the Munich Agreement and Kristallnacht, American public opinion turned against Germany, and Roosevelt began preparing for a possible war with Germany. Relying on an interventionist political coalition of Southern Democrats and business-oriented Republicans, Roosevelt oversaw the expansion of U.S. airpower and war production capacity.
When World War II began in September 1939 with Germany's invasion of Poland and Britain and France's subsequent declaration of war upon Germany, Roosevelt sought ways to assist Britain and France militarily. Isolationist leaders like Charles Lindbergh and Senator William Borah successfully mobilized opposition to Roosevelt's proposed repeal of the Neutrality Act, but Roosevelt won Congressional approval of the sale of arms on a cash-and-carry basis. He also began a regular secret correspondence with Britain's First Lord of the Admiralty, Winston Churchill, in September 1939—the first of 1,700 letters and telegrams between them. Roosevelt forged a close personal relationship with Churchill, who became Prime Minister of the United Kingdom in May 1940.
The Fall of France in June 1940 shocked the American public, and isolationist sentiment declined. In July 1940, Roosevelt appointed two interventionist Republican leaders, Henry L. Stimson and Frank Knox, as Secretaries of War and the Navy, respectively. Both parties gave support to his plans for a rapid build-up of the American military, but the isolationists warned that Roosevelt would get the nation into an unnecessary war with Germany. In July 1940, a group of Congressmen introduced a bill that would authorize the nation's first peacetime draft, and with the support of the Roosevelt administration, the Selective Training and Service Act of 1940 passed in September. The size of the army increased from 189,000 men at the end of 1939 to 1.4 million men in mid-1941. In September 1940, Roosevelt openly defied the Neutrality Acts by reaching the Destroyers for Bases Agreement, which, in exchange for military base rights in the British Caribbean Islands, gave 50 World War I-era American destroyers to Britain.
While working under President Wilson, Roosevelt had perpetuated ideas of American racial superiority by believing that the people of Latin America were incapable of self-government. However, by 1928 he had changed his view, becoming an advocate for cooperation. In an effort to denounce past U.S. interventionism and subdue any subsequent fears of Latin Americans, Roosevelt announced on March 4, 1933, during his inaugural address, "In the field of World policy, I would dedicate this nation to the policy of the good neighbor, the neighbor who resolutely respects himself and, because he does so, respects the rights of others, the neighbor who respects his obligations and respects the sanctity of his agreements in and with a World of neighbors."
To create a friendly relationship between the United States and Central and South American countries, Roosevelt sought to abstain from asserting military force in the region. This position was affirmed by Cordell Hull, Roosevelt's Secretary of State, at a conference of American states in Montevideo in December 1933. Hull said: "No country has the right to intervene in the internal or external affairs of another." Roosevelt then confirmed the policy in December of the same year: "The definite policy of the United States from now on is one opposed to armed intervention." The adoption of the policy signaled that the U.S. now recognized the maturity of Latin American countries and, as a result, was more open to working with them, especially in maintaining peace. The policy was, in the end, yet another way for the U.S. to assert its own superiority.
In the months prior to the July 1940 Democratic National Convention, there was much speculation as to whether Roosevelt would run for an unprecedented third term. The president was silent, and even his closest advisors were in the dark. The two-term tradition, although not yet enshrined in the Constitution, had been established by George Washington when he refused to run for a third term in the 1796 presidential election. Roosevelt refused to give a definitive statement as to his willingness to be a candidate again, and he even indicated to some ambitious Democrats, such as James Farley, that he would not run for a third term and that they could seek the Democratic nomination. Farley and Vice President John Garner were not pleased with Roosevelt when he ultimately made the decision to break from Washington's precedent. As Germany swept through Western Europe and menaced Britain in mid-1940, Roosevelt decided that only he had the necessary experience and skills to see the nation safely through the Nazi threat. He was aided by the party's political bosses, who feared that no Democrat except Roosevelt could defeat Wendell Willkie, the popular Republican nominee.
At the July 1940 Democratic Convention in Chicago, Roosevelt easily swept aside challenges from Farley and Vice President Garner, who had turned against Roosevelt in his second term because of his liberal economic and social policies. To replace Garner on the ticket, Roosevelt turned to Secretary of Agriculture Henry Wallace of Iowa, a former Republican who strongly supported the New Deal and was popular in farm states. The choice was strenuously opposed by many of the party's conservatives, who felt Wallace was too radical and "eccentric" in his private life to be an effective running mate. But Roosevelt insisted that without Wallace on the ticket he would decline re-nomination, and Wallace won the vice-presidential nomination, defeating Speaker of the House William B. Bankhead and other candidates.
A late August poll taken by Gallup found the race to be essentially tied, but Roosevelt's popularity surged in September following the announcement of the Destroyers for Bases Agreement. Willkie supported much of the New Deal as well as rearmament and aid to Britain but warned that Roosevelt would drag the country into another European war. Responding to Willkie's attacks, Roosevelt promised to keep the country out of the war. Over its last month, the campaign degenerated into a series of outrageous accusations and mud-slinging, if not by the two candidates themselves then by their respective parties. Roosevelt won the 1940 election with 55% of the popular vote, 38 of the 48 states, and almost 85% of the electoral vote.
World War II dominated Roosevelt's attention, with far more time devoted to world affairs than ever before. Domestic politics and relations with Congress were largely shaped by his efforts to achieve total mobilization of the nation's economic, financial, and institutional resources for the war effort. Even relationships with Latin America and Canada were structured by wartime demands. Roosevelt maintained close personal control of all major diplomatic and military decisions, working closely with his generals and admirals, the war and Navy departments, the British, and even the Soviet Union. His key advisors on diplomacy were Harry Hopkins (who was based in the White House), Sumner Welles (based in the State Department), and Henry Morgenthau Jr. at Treasury. In military affairs, Roosevelt worked most closely with Secretary Henry L. Stimson at the War Department, Army Chief of Staff George Marshall, and Admiral William D. Leahy.
By late 1940, re-armament was in high gear, partly to expand and re-equip the Army and Navy and partly to become the "Arsenal of Democracy" for Britain and other countries. With his Four Freedoms speech in January 1941, Roosevelt laid out the case for an Allied battle for basic rights throughout the world. Assisted by Willkie, Roosevelt won Congressional approval of the Lend-Lease program, which directed massive military and economic aid to Britain and China. In sharp contrast to the loans of World War I, there would be no repayment after the war. As Roosevelt took a firmer stance against Japan, Germany, and Italy, American isolationists such as Charles Lindbergh and the America First Committee vehemently attacked Roosevelt as an irresponsible warmonger. When Germany invaded the Soviet Union in June 1941, Roosevelt agreed to extend Lend-Lease to the Soviets. Thus, Roosevelt had committed the U.S. to the Allied side with a policy of "all aid short of war." By July 1941, Roosevelt authorized the creation of the Office of the Coordinator of Inter-American Affairs (OCIAA) to counter perceived propaganda efforts in Latin America by Germany and Italy.
In August 1941, Roosevelt and Churchill conducted a highly secret bilateral meeting in which they drafted the Atlantic Charter, conceptually outlining global wartime and postwar goals. This would be the first of several wartime conferences; Churchill and Roosevelt would meet ten more times in person. Though Churchill pressed for an American declaration of war against Germany, Roosevelt believed that Congress would reject any attempt to bring the United States into the war. In September, a German submarine fired on the U.S. destroyer Greer, and Roosevelt declared that the U.S. Navy would assume an escort role for Allied convoys in the Atlantic as far east as Great Britain and would fire upon German ships or submarines (U-boats) of the Kriegsmarine if they entered the U.S. Navy zone. According to historian George Donelson Moss, Roosevelt "misled" Americans by reporting the Greer incident as if it had been an unprovoked German attack on a peaceful American ship. This "shoot on sight" policy effectively declared naval war on Germany and was favored by Americans by a margin of 2-to-1.
After the German invasion of Poland, the primary concern of both Roosevelt and his top military staff was the war in Europe, but Japan also presented foreign policy challenges. Relations with Japan had continually deteriorated since its invasion of Manchuria in 1931, and they had further worsened with Roosevelt's support of China. With the war in Europe occupying the attention of the major colonial powers, Japanese leaders eyed vulnerable colonies such as the Dutch East Indies, French Indochina, and British Malaya. After Roosevelt announced a $100 million loan (equivalent to $2.1 billion in 2022) to China in reaction to Japan's occupation of northern French Indochina, Japan signed the Tripartite Pact with Germany and Italy. The pact bound each country to defend the others against attack, and Germany, Japan, and Italy became known as the Axis powers. Overcoming those who favored invading the Soviet Union, the Japanese Army high command successfully advocated for the conquest of Southeast Asia to ensure continued access to raw materials. In July 1941, after Japan occupied the remainder of French Indochina, Roosevelt cut off the sale of oil to Japan, depriving Japan of more than 95 percent of its oil supply. He also placed the Philippine military under American command and reinstated General Douglas MacArthur to active duty to command U.S. forces in the Philippines.
The Japanese were incensed by the embargo and Japanese leaders became determined to attack the United States unless it lifted the embargo. The Roosevelt administration was unwilling to reverse the policy, and Secretary of State Hull blocked a potential summit between Roosevelt and Prime Minister Fumimaro Konoe. After diplomatic efforts to end the embargo failed, the Privy Council of Japan authorized a strike against the United States. The Japanese believed that the destruction of the United States Asiatic Fleet (stationed in the Philippines) and the United States Pacific Fleet (stationed at Pearl Harbor in Hawaii) was vital to the conquest of Southeast Asia. On the morning of December 7, 1941, the Japanese struck the U.S. naval base at Pearl Harbor with a surprise attack, knocking out the main American battleship fleet and killing 2,403 American servicemen and civilians. At the same time, separate Japanese task forces attacked Thailand, British Hong Kong, the Philippines, and other targets. Roosevelt called for war in his "Infamy Speech" to Congress, in which he said: "Yesterday, December 7, 1941—a date which will live in infamy—the United States of America was suddenly and deliberately attacked by naval and air forces of the Empire of Japan." In a nearly unanimous vote, Congress declared war on Japan. After the Japanese attack at Pearl Harbor, antiwar sentiment in the United States largely evaporated overnight. On December 11, 1941, Hitler and Mussolini declared war on the United States, which responded in kind.
A majority of scholars have rejected the conspiracy theories that Roosevelt, or any other high government officials, knew in advance about the Japanese attack on Pearl Harbor. The Japanese had kept their secrets closely guarded. Senior American officials were aware that war was imminent, but they did not expect an attack on Pearl Harbor. Roosevelt had expected that the Japanese would attack either the Dutch East Indies or Thailand.
In late December 1941, Churchill and Roosevelt met at the Arcadia Conference, which established a joint strategy between the U.S. and Britain. Both agreed on a Europe first strategy that prioritized the defeat of Germany before Japan. The U.S. and Britain established the Combined Chiefs of Staff to coordinate military policy and the Combined Munitions Assignments Board to coordinate the allocation of supplies. An agreement was also reached to establish a centralized command in the Pacific theater called ABDA, named for the American, British, Dutch, and Australian forces in the theater. On January 1, 1942, the United States, Britain, China, the Soviet Union, and twenty-two other countries (the Allied Powers) issued the Declaration by United Nations, in which each nation pledged to defeat the Axis powers.
In 1942, Roosevelt formed a new body, the Joint Chiefs of Staff, which made the final decisions on American military strategy. Admiral Ernest J. King as Chief of Naval Operations commanded the Navy and Marines, while General George C. Marshall led the Army and was in nominal control of the Air Force, which in practice was commanded by General Hap Arnold. The Joint Chiefs were chaired by Admiral William D. Leahy, the most senior officer in the military. Roosevelt avoided micromanaging the war and let his top military officers make most decisions. Roosevelt's civilian appointees handled the draft and procurement of men and equipment, but no civilians—not even the secretaries of War or Navy—had a voice in strategy. Roosevelt avoided the State Department and conducted high-level diplomacy through his aides, especially Harry Hopkins, whose influence was bolstered by his control of the Lend Lease funds.
In August 1939, Leo Szilard and Albert Einstein sent the Einstein–Szilárd letter to Roosevelt, warning of the possibility of a German project to develop nuclear weapons. Szilard realized that the recently discovered process of nuclear fission could be used to create a nuclear chain reaction that could be used as a weapon of mass destruction. Roosevelt feared the consequences of allowing Germany to have sole possession of the technology and authorized preliminary research into nuclear weapons. After the attack on Pearl Harbor, the Roosevelt administration secured the funds needed to continue research and selected General Leslie Groves to oversee the Manhattan Project, which was charged with developing the first nuclear weapons. Roosevelt and Churchill agreed to jointly pursue the project, and Roosevelt helped ensure that American scientists cooperated with their British counterparts.
Roosevelt coined the term "Four Policemen" to refer to the "Big Four" Allied powers of World War II, the United States, the United Kingdom, the Soviet Union, and China. The "Big Three" of Roosevelt, Winston Churchill, and Soviet leader Joseph Stalin, together with Chinese Generalissimo Chiang Kai-shek, cooperated informally on a plan in which American and British troops concentrated in the West; Soviet troops fought on the Eastern front; and Chinese, British and American troops fought in Asia and the Pacific. The United States also continued to send aid via the Lend-Lease program to the Soviet Union and other countries. The Allies formulated strategy in a series of high-profile conferences as well as by contact through diplomatic and military channels. Beginning in May 1942, the Soviets urged an Anglo-American invasion of German-occupied France in order to divert troops from the Eastern front. Concerned that their forces were not yet ready for an invasion of France, Churchill and Roosevelt decided to delay such an invasion until at least 1943 and instead focus on a landing in North Africa, known as Operation Torch.
In November 1943, Roosevelt, Churchill, and Stalin met to discuss strategy and post-war plans at the Tehran Conference, where Roosevelt met Stalin for the first time. At the conference, Britain and the United States committed to opening a second front against Germany in 1944, while Stalin committed to entering the war against Japan at an unspecified date. Subsequent conferences at Bretton Woods and Dumbarton Oaks established the framework for the post-war international monetary system and the United Nations, an intergovernmental organization similar to the failed League of Nations. Taking up the Wilsonian mantle, Roosevelt made the establishment of the United Nations his highest postwar priority. He expected it would be controlled by Washington, Moscow, London and Beijing, and would resolve all major world problems.
Roosevelt, Churchill, and Stalin met for a second time at the February 1945 Yalta Conference in Crimea. With the end of the war in Europe approaching, Roosevelt's primary focus was on convincing Stalin to enter the war against Japan; the Joint Chiefs had estimated that an American invasion of Japan would cause as many as one million American casualties. In return for the Soviet Union's entrance into the war against Japan, the Soviet Union was promised control of Asian territories such as Sakhalin Island. The three leaders agreed to hold a conference in 1945 to establish the United Nations, and they also agreed on the structure of the United Nations Security Council, which would be charged with ensuring international peace and security. Roosevelt did not push for the immediate evacuation of Soviet soldiers from Poland, but he won the issuance of the Declaration on Liberated Europe, which promised free elections in countries that had been occupied by Germany. Germany itself would not be dismembered but would be jointly occupied by the United States, France, Britain, and the Soviet Union. Despite Soviet pressure, Roosevelt and Churchill refused to consent to huge reparations or the deindustrialization of Germany after the war. Roosevelt's role in the Yalta Conference has been controversial; critics charge that he naively trusted the Soviet Union to allow free elections in Eastern Europe, while supporters argue that there was little more that Roosevelt could have done for the Eastern European countries given the Soviet occupation and the need for cooperation with the Soviet Union during and after the war.
The Allies invaded French North Africa in November 1942, securing the surrender of Vichy French forces within days of landing. At the January 1943 Casablanca Conference, the Allies agreed to defeat Axis forces in North Africa and then launch an invasion of Sicily, with an attack on France to take place in 1944. At the conference, Roosevelt also announced that he would only accept the unconditional surrender of Germany, Japan, and Italy. In February 1943, the Soviet Union won a major victory at the Battle of Stalingrad, and in May 1943, the Allies secured the surrender of over 250,000 German and Italian soldiers in North Africa, ending the North African Campaign. The Allies launched an invasion of Sicily in July 1943, capturing the island by the end of the following month. In September 1943, the Allies secured an armistice from Italian Prime Minister Pietro Badoglio, but Germany quickly restored Mussolini to power. The Allied invasion of mainland Italy commenced in September 1943, but the Italian Campaign continued until 1945 as German and Italian troops resisted the Allied advance.
To command the invasion of France, Roosevelt chose General Dwight D. Eisenhower, who had successfully commanded a multinational coalition in North Africa and Sicily. Eisenhower chose to launch Operation Overlord on June 6, 1944. Supported by 12,000 aircraft and the largest naval force ever assembled, the Allies successfully established a beachhead in Normandy and then advanced further into France. Though reluctant to back an unelected government, Roosevelt recognized Charles de Gaulle's Provisional Government of the French Republic as the de facto government of France in July 1944. After most of France had been liberated from German occupation, Roosevelt granted formal recognition to de Gaulle's government in October 1944. Over the following months, the Allies liberated more territory from Nazi occupation and began the invasion of Germany. By April 1945, Nazi resistance was crumbling in the face of advances by both the Western Allies and the Soviet Union.
In the opening weeks of the war, Japan conquered the Philippines and the British and Dutch colonies in Southeast Asia. The Japanese advance reached its maximum extent by June 1942, when the U.S. Navy scored a decisive victory at the Battle of Midway. American and Australian forces then began a slow and costly strategy called island hopping or leapfrogging through the Pacific Islands, with the objective of gaining bases from which strategic airpower could be brought to bear on Japan and from which Japan could ultimately be invaded. In contrast to Hitler, Roosevelt took no direct part in the tactical naval operations, though he approved strategic decisions. Roosevelt gave way in part to insistent demands from the public and Congress that more effort be devoted against Japan, but he always insisted on Germany first. The strength of the Japanese navy was decimated in the Battle of Leyte Gulf, and by April 1945 the Allies had re-captured much of their lost territory in the Pacific.
The home front was subject to dynamic social changes throughout the war, though domestic issues were no longer Roosevelt's most urgent policy concern. The military buildup spurred economic growth. Unemployment fell by more than half, from 7.7 million in spring 1940 to 3.4 million in fall 1941, and was more than halved again to 1.5 million in fall 1942, out of a labor force of 54 million. There was a growing labor shortage, accelerating the second wave of the Great Migration of African Americans, farmers and rural populations to manufacturing centers. African Americans from the South went to California and other West Coast states for new jobs in the defense industry. To pay for increased government spending, in 1941 Roosevelt proposed that Congress enact an income tax rate of 99.5% on all income over $100,000; when the proposal failed, he issued an executive order imposing an income tax of 100% on income over $25,000, which Congress rescinded. The Revenue Act of 1942 instituted top tax rates as high as 94% (after accounting for the excess profits tax), greatly increased the tax base, and instituted the first federal withholding tax. In 1944, Roosevelt requested that Congress enact legislation which would tax all "unreasonable" profits, both corporate and individual, and thereby support his declared need for over $10 billion in revenue for the war and other government measures. Congress overrode Roosevelt's veto to pass a smaller revenue bill raising $2 billion.
In 1942, with the United States now in the conflict, war production increased dramatically but fell short of the goals established by the president, due in part to manpower shortages. The effort was also hindered by numerous strikes, especially among union workers in the coal mining and railroad industries, which lasted well into 1944. Nonetheless, between 1941 and 1945, the United States produced 2.4 million trucks, 300,000 military aircraft, 88,400 tanks, and 40 billion rounds of ammunition. The production capacity of the United States dwarfed that of other countries; for example, in 1944, the United States produced more military aircraft than the combined production of Germany, Japan, Britain, and the Soviet Union. The White House became the ultimate site for labor mediation, conciliation or arbitration. One particularly fierce battle occurred between Vice President Wallace, who headed the Board of Economic Warfare, and Jesse H. Jones, in charge of the Reconstruction Finance Corporation; both agencies assumed responsibility for the acquisition of rubber supplies and were at loggerheads over funding. Roosevelt resolved the dispute by dissolving both agencies. In 1943, Roosevelt established the Office of War Mobilization to oversee the home front; the agency was led by James F. Byrnes, who came to be known as the "assistant president" due to his influence.
Roosevelt's 1944 State of the Union Address advocated that Americans should think of basic economic rights as a Second Bill of Rights. He stated that all Americans should have the right to "adequate medical care", "a good education", "a decent home", and a "useful and remunerative job". In the most ambitious domestic proposal of his third term, Roosevelt proposed the G.I. Bill, which would create a massive benefits program for returning soldiers. Benefits included post-secondary education, medical care, unemployment insurance, job counseling, and low-cost loans for homes and businesses. The G.I. Bill passed unanimously in both houses of Congress and was signed into law in June 1944. Of the fifteen million Americans who served in World War II, more than half benefitted from the educational opportunities provided for in the G.I. Bill.
Roosevelt, a chain-smoker throughout his entire adult life, had been in declining physical health since at least 1940. In March 1944, shortly after his 62nd birthday, he underwent testing at Bethesda Hospital and was found to have high blood pressure, atherosclerosis, coronary artery disease causing angina pectoris, and congestive heart failure.
Hospital physicians and two outside specialists ordered Roosevelt to rest. His personal physician, Admiral Ross McIntire, created a daily schedule that banned business guests for lunch and incorporated two hours of rest each day. During the 1944 re-election campaign, McIntire denied several times that Roosevelt's health was poor; on October 12, for example, he announced that "The President's health is perfectly OK. There are absolutely no organic difficulties at all." Roosevelt realized that his declining health could eventually make it impossible for him to continue as president, and in 1945 he told a confidant that he might resign from the presidency following the end of the war.
While some Democrats had opposed Roosevelt's nomination in 1940, the president faced little difficulty in securing his re-nomination at the 1944 Democratic National Convention. Roosevelt made it clear before the convention that he was seeking another term, and on the lone presidential ballot of the convention, Roosevelt won the vast majority of delegates, although a minority of Southern Democrats voted for Harry F. Byrd. Party leaders prevailed upon Roosevelt to drop Vice President Wallace from the ticket, believing him to be an electoral liability and a poor potential successor in case of Roosevelt's death. Roosevelt preferred Byrnes as Wallace's replacement but was convinced to support Senator Harry S. Truman of Missouri, who had earned renown for his investigation of war production inefficiency and was acceptable to the various factions of the party. On the second vice presidential ballot of the convention, Truman defeated Wallace to win the nomination.
The Republicans nominated Thomas E. Dewey, the governor of New York, who had a reputation as a liberal in his party. They accused the Roosevelt administration of domestic corruption and bureaucratic inefficiency, but Dewey's most effective gambit was to raise the age issue discreetly. He assailed the President as a "tired old man" with "tired old men" in his cabinet, pointedly suggesting that the President's lack of vigor had produced a less than vigorous economic recovery. Roosevelt, as most observers could see from his weight loss and haggard appearance, was a tired man in 1944. But upon entering the campaign in earnest in late September 1944, Roosevelt displayed enough passion and fight to allay most concerns and to deflect Republican attacks. With the war still raging, he urged voters not to "change horses in mid-stream." Labor unions, which had grown rapidly in the war, fully supported Roosevelt. Roosevelt and Truman won the 1944 election by a comfortable margin, defeating Dewey and his running mate John W. Bricker with 53.4% of the popular vote and 432 out of the 531 electoral votes. The president campaigned in favor of a strong United Nations, so his victory symbolized support for the nation's future participation in the international community.
When Roosevelt returned to the United States from the Yalta Conference, many were shocked to see how old, thin and frail he looked. He spoke while seated in the well of the House, an unprecedented concession to his physical incapacity. During March 1945, he sent strongly worded messages to Stalin accusing him of breaking his Yalta commitments over Poland, Germany, prisoners of war and other issues. When Stalin accused the western Allies of plotting behind his back a separate peace with Hitler, Roosevelt replied: "I cannot avoid a feeling of bitter resentment towards your informers, whoever they are, for such vile misrepresentations of my actions or those of my trusted subordinates." On March 29, 1945, Roosevelt went to the Little White House at Warm Springs, Georgia, to rest before his anticipated appearance at the founding conference of the United Nations.
In the afternoon of April 12, 1945, in Warm Springs, Georgia, while sitting for a portrait by Elizabeth Shoumatoff, Roosevelt said: "I have a terrific headache." He then slumped forward in his chair, unconscious, and was carried into his bedroom. The president's attending cardiologist, Howard Bruenn, diagnosed the medical emergency as a massive intracerebral hemorrhage. At 3:35 p.m. that day, Roosevelt died at the age of 63.
The following morning, Roosevelt's body was placed in a flag-draped coffin and loaded onto the presidential train for the trip back to Washington. Along the route, thousands flocked to the tracks to pay their respects. After a White House funeral on April 14, Roosevelt was transported by train from Washington, D.C., to his place of birth at Hyde Park. On April 15 he was buried, per his wish, in the rose garden of his Springwood estate.
Roosevelt's declining physical health had been kept secret from the public. His death was met with shock and grief across the world. Germany surrendered during the 30-day mourning period, but Harry Truman (who had succeeded Roosevelt as president) ordered flags to remain at half-staff; he also dedicated Victory in Europe Day and its celebrations to Roosevelt's memory. World War II finally ended with the signed surrender of Japan in September.
Coincidentally, on April 12, 1945, a devastating tornado outbreak occurred in the United States, which killed 128 people and injured over a thousand others. The tornado outbreak included the fourth deadliest tornado in Oklahoma history, which leveled a third of the town of Antlers. Roosevelt's death overshadowed the outbreak, which would otherwise have "commanded national media attention" for a while. Tornado expert Thomas P. Grazulis said that "even nearby newspapers had more information on the death of the President than on the tornado".
Roosevelt was viewed as a hero by many African Americans, Catholics, and Jews, and he was highly successful in attracting large majorities of these voters into his New Deal coalition. From his first term until 1939, the Mexican Repatriation started by President Herbert Hoover continued under Roosevelt, which scholars today contend was a form of ethnic cleansing towards Mexican Americans. Roosevelt ended federal involvement in the deportations. After 1934, the number of deportations fell by approximately 50 percent. However, Roosevelt did not attempt to suppress the deportations on a local or state level. Mexican Americans were the only group explicitly excluded from New Deal benefits. The deprival of due process for Mexican Americans is cited as a precedent for Roosevelt's internment of Japanese Americans in concentration camps during World War II. Roosevelt won strong support from Chinese Americans and Filipino Americans, but not Japanese Americans, as he presided over their internment during the war. African Americans and Native Americans fared well in two New Deal relief programs, the Civilian Conservation Corps and the Indian Reorganization Act, respectively. Sitkoff reports that the WPA "provided an economic floor for the whole black community in the 1930s, rivaling both agriculture and domestic service as the chief source" of income.
In contrast to Presidents Harding and Coolidge, Roosevelt stopped short of joining NAACP leaders in pushing for federal anti-lynching legislation. He asserted that such legislation was unlikely to pass and that his support for it would alienate Southern congressmen, though by 1940 even his conservative Texas vice president, Garner, supported federal action against lynching.
In his twelve years as president, Roosevelt did not appoint or nominate a single African American as secretary or assistant secretary in his cabinet. About one hundred African Americans did, however, meet informally to advise the administration on issues related to African Americans. Although this group was sometimes described as a "Black Cabinet," Roosevelt never officially acknowledged it as such, nor did he make "appointments" to it.
First Lady Eleanor Roosevelt vocally supported efforts designed to aid the African American community, including the Fair Labor Standards Act, which helped boost wages for nonwhite workers in the South. In 1941, Roosevelt established the Fair Employment Practices Committee (FEPC) to implement Executive Order 8802, which prohibited racial and religious discrimination in employment among defense contractors. The FEPC was the first national program directed against employment discrimination, and it played a major role in opening up new employment opportunities to non-white workers. During World War II, the proportion of African American men employed in manufacturing positions rose significantly. In response to Roosevelt's policies, African Americans increasingly defected from the Republican Party during the 1930s and 1940s, becoming an important Democratic voting bloc in several Northern states.
The attack on Pearl Harbor raised public concerns about the possibility of sabotage by Japanese Americans. This suspicion was fed by long-standing racism against Japanese immigrants, as well as the findings of the Roberts Commission, which concluded that the attack on Pearl Harbor had been assisted by Japanese spies. On February 19, 1942, President Roosevelt signed Executive Order 9066, which relocated 110,000 Japanese-American citizens and immigrants, most of whom lived on the Pacific Coast. They were forced to liquidate their properties and businesses and were interned in hastily built camps in harsh interior locations. Internment was consistent with the racial views Roosevelt had expressed in his 1920s articles for the Macon Telegraph, which condemned "the mingling of Asiatic blood with European or American blood" and praised California's laws barring Japanese immigrants from owning land, as well as with his confidential suggestion in 1936 that Japanese Americans in Hawaii who greeted Japanese ships or had any connection with their officers be put "on a special list of those who would be the first to be placed in a concentration camp" in the event of war.
Roosevelt delegated the decision for internment to Secretary of War Stimson, who in turn relied on the judgment of Assistant Secretary of War John J. McCloy. The Supreme Court upheld the constitutionality of the executive order in the 1944 case of Korematsu v. United States. A much smaller number of German and Italian citizens were arrested or placed into internment camps; unlike Japanese Americans, however, they were not interned solely on the basis of ancestry.
There is controversy among historians about Roosevelt's attitude to Jews and the Holocaust. Arthur M. Schlesinger Jr. says Roosevelt "did what he could do" to help Jews; David Wyman says Roosevelt's record on Jewish refugees and their rescue is "very poor" and one of the worst failures of his presidency. In 1923, as a member of the Harvard board of directors, Roosevelt decided there were too many Jewish students at Harvard University and helped institute a quota to limit the number of Jews admitted. After Kristallnacht in 1938, Roosevelt had his ambassador to Germany recalled to Washington. He did not loosen immigration quotas but did allow German Jews already in the U.S. on visas to stay indefinitely. According to Rafael Medoff, Roosevelt could have saved 190,000 Jewish lives by telling his State Department to fill immigration quotas to the legal limit, but his administration discouraged and disqualified Jewish refugees with prohibitive requirements that left fewer than 25% of the quotas filled.
Hitler chose to implement the "Final Solution"—the extermination of the European Jewish population—by January 1942, and American officials learned of the scale of the Nazi extermination campaign in the following months. Against the objections of the State Department, Roosevelt convinced the other Allied leaders to jointly issue the Joint Declaration by Members of the United Nations, which condemned the ongoing Holocaust and warned that its perpetrators would be tried as war criminals. In 1943, Roosevelt told U.S. government officials that there should be limits on Jews in various professions to "eliminate the specific and understandable complaints which the Germans bore towards the Jews in Germany." The same year, Roosevelt was personally briefed by Polish Home Army intelligence agent Jan Karski, an eyewitness of the Holocaust; pleading for action, Karski told him that 1.8 million Jews had already been exterminated. Karski recalled that in response, Roosevelt "did not ask one question about the Jews." In January 1944, Roosevelt established the War Refugee Board to aid Jews and other victims of Axis atrocities. Aside from these actions, Roosevelt believed that the best way to help the persecuted populations of Europe was to end the war as quickly as possible. Top military leaders and War Department leaders rejected any campaign to bomb the extermination camps or the rail lines leading to the camps, fearing it would be a diversion from the war effort. According to biographer Jean Edward Smith, there is no evidence that anyone ever proposed such a campaign to Roosevelt.
Roosevelt is widely considered to be one of the most important figures in the history of the United States, as well as one of the most influential figures of the 20th century. Historians and political scientists consistently rank Roosevelt, George Washington, and Abraham Lincoln as the three greatest presidents, although the order varies. Reflecting on Roosevelt's presidency, "which brought the United States through the Great Depression and World War II to a prosperous future", biographer Jean Edward Smith said in 2007, "He lifted himself from a wheelchair to lift the nation from its knees."
His commitment to the working class and unemployed in need of relief in the nation's longest recession made him a favorite of blue-collar workers, labor unions, and ethnic minorities. The rapid expansion of government programs that occurred during Roosevelt's term redefined the role of the government in the United States, and Roosevelt's advocacy of government social programs was instrumental in redefining liberalism for coming generations. Roosevelt firmly established the United States' leadership role on the world stage, with his role in shaping and financing World War II. His isolationist critics faded away, and even the Republicans joined in his overall policies. He also created a new understanding of the presidency, permanently increasing the power of the president at the expense of Congress.
His Second Bill of Rights became, according to historian Joshua Zeitz, "the basis of the Democratic Party's aspirations for the better part of four decades." After his death, his widow, Eleanor, continued to be a forceful presence in U.S. and world politics, serving as a delegate to the conference which established the United Nations and championing civil rights and liberalism generally. Some junior New Dealers played leading roles in the presidencies of Truman, John Kennedy, and Lyndon Johnson. Kennedy came from a Roosevelt-hating family. Historian William Leuchtenburg says that before 1960, "Kennedy showed a conspicuous lack of inclination to identify himself as a New Deal liberal." He adds that, as president, "Kennedy never wholly embraced the Roosevelt tradition and at times he deliberately severed himself from it." By contrast, young Lyndon Johnson had been an enthusiastic New Dealer and a favorite of Roosevelt. Johnson modeled his presidency on Roosevelt's and relied heavily on New Deal lawyer Abe Fortas, as well as James H. Rowe, Anna M. Rosenberg, Thomas Gardiner Corcoran, and Benjamin V. Cohen.
During his presidency, and continuing to a lesser extent afterwards, there has been much criticism of Roosevelt, some of it intense. Critics have questioned not only his policies, positions, and the consolidation of power that occurred due to his responses to the crises of the Depression and World War II but also his breaking with tradition by running for a third term as president. Long after his death, new lines of attack criticized Roosevelt's policies regarding the Jews of Europe, the incarceration of Japanese Americans on the West Coast, and his opposition to anti-lynching legislation.
Roosevelt was criticized by conservatives for his economic policies, especially the shift in tone from individualism to collectivism with the expansion of the welfare state and regulation of the economy. Those criticisms continued decades after his death. One factor in the revisiting of these issues in later decades was the election of Ronald Reagan in 1980, who opposed the New Deal.
Roosevelt's home in Hyde Park is now a National Historic Site and home to his Presidential library. Washington, D.C., hosts two memorials to the former president. The largest, the 7.5-acre (3-hectare) Roosevelt Memorial, is located next to the Jefferson Memorial on the Tidal Basin. A more modest memorial, a block of marble in front of the National Archives building suggested by Roosevelt himself, was erected in 1965. Roosevelt's leadership in the March of Dimes is one reason he is commemorated on the American dime. Roosevelt has also appeared on several U.S. postage stamps. On April 29, 1945, seventeen days after Roosevelt's death, the carrier USS Franklin D. Roosevelt was launched and served from 1945 to 1977. London's Westminster Abbey also has a stone tablet memorial to President Roosevelt that was unveiled by Attlee and Churchill in 1948. Welfare Island was renamed after Roosevelt in September 1973.
"title": "Paralytic illness and political comeback (1921–1928)"
},
{
"paragraph_id": 28,
"text": "In 1925, Smith appointed Roosevelt to the Taconic State Park Commission, and his fellow commissioners chose him as chairman. In this role, he came into conflict with Robert Moses, a Smith protégé, who was the primary force behind the Long Island State Park Commission and the New York State Council of Parks. Roosevelt accused Moses of using the name recognition of prominent individuals including Roosevelt to win political support for state parks, but then diverting funds to the ones Moses favored on Long Island, while Moses worked to block the appointment of Howe to a salaried position as the Taconic commission's secretary. Roosevelt served on the commission until the end of 1928, and his contentious relationship with Moses continued as their careers progressed.",
"title": "Paralytic illness and political comeback (1921–1928)"
},
{
"paragraph_id": 29,
"text": "Peace was the catchword of the 1920s, and in 1923 Edward Bok established the $100,000 American Peace Award for the best plan to bring peace to the world. Roosevelt had leisure time and interest, and he drafted a plan for the contest. He never submitted it because his wife Eleanor Roosevelt was selected as a judge for the prize. His plan called for a new world organization that would replace the League of Nations. Although Roosevelt had been the vice-presidential candidate on the Democratic ticket of 1920 that supported the League of Nations, by 1924 he was ready to scrap it. His draft of a \"Society of Nations\" accepted the reservations proposed by Henry Cabot Lodge in the 1919 Senate debate. The new Society would not become involved in the Western Hemisphere, where the Monroe doctrine held sway. It would not have any control over military forces. Although Roosevelt's plan was never made public, he thought about the problem a great deal and incorporated some of his 1924 ideas into the design for the United Nations in 1944–1945.",
"title": "Paralytic illness and political comeback (1921–1928)"
},
{
"paragraph_id": 30,
"text": "Smith, the Democratic presidential nominee in the 1928 election, asked Roosevelt to run for governor of New York in the 1928 state election. Roosevelt initially resisted, as he was reluctant to leave Warm Springs and feared a Republican landslide in 1928. Party leaders eventually convinced him only he could defeat the Republican gubernatorial nominee, New York Attorney General Albert Ottinger. He won the party's gubernatorial nomination by acclamation and again turned to Howe to lead his campaign. Roosevelt was also joined on the campaign trail by associates Samuel Rosenman, Frances Perkins, and James Farley. While Smith lost the presidency in a landslide, and was defeated in his home state, Roosevelt was elected governor by a one-percent margin, and became a contender in the next presidential election.",
"title": "Governor of New York (1929–1932)"
},
{
"paragraph_id": 31,
"text": "Roosevelt proposed the construction of hydroelectric power plants and addressed the ongoing farm crisis of the 1920s. Relations between Roosevelt and Smith suffered after he chose not to retain key Smith appointees like Moses. He and his wife Eleanor established an understanding for the rest of his career; she would dutifully serve as the governor's wife but would also be free to pursue her own agenda and interests. He also began holding \"fireside chats\", in which he directly addressed his constituents via radio, often pressuring the New York State Legislature to advance his agenda.",
"title": "Governor of New York (1929–1932)"
},
{
"paragraph_id": 32,
"text": "In October 1929, the Wall Street Crash occurred, and with it came the Great Depression in the United States. Roosevelt saw the seriousness of the situation and established a state employment commission. He also became the first governor to publicly endorse the idea of unemployment insurance.",
"title": "Governor of New York (1929–1932)"
},
{
"paragraph_id": 33,
"text": "When Roosevelt began his run for a second term in May 1930, he reiterated his doctrine from the campaign two years before: \"that progressive government by its very terms must be a living and growing thing, that the battle for it is never-ending and that if we let up for one single moment or one single year, not merely do we stand still but we fall back in the march of civilization.\" He ran on a platform that called for aid to farmers, full employment, unemployment insurance, and old-age pensions. He was elected to a second term by a 14% margin.",
"title": "Governor of New York (1929–1932)"
},
{
"paragraph_id": 34,
"text": "Roosevelt proposed an economic relief package and the establishment of the Temporary Emergency Relief Administration to distribute those funds. Led first by Jesse I. Straus and then by Harry Hopkins, the agency assisted well over one-third of New York's population between 1932 and 1938. Roosevelt also began an investigation into corruption in New York City among the judiciary, the police force, and organized crime, prompting the creation of the Seabury Commission. The Seabury investigations exposed an extortion ring, led many public officials to be removed from office, and made the decline of Tammany Hall inevitable.",
"title": "Governor of New York (1929–1932)"
},
{
"paragraph_id": 35,
"text": "Roosevelt supported reforestation with the Hewitt Amendment in 1931, which gave birth to New York's State Forest system.",
"title": "Governor of New York (1929–1932)"
},
{
"paragraph_id": 36,
"text": "As the 1932 presidential election approached, Roosevelt turned his attention to national politics, established a campaign team led by Howe and Farley, and a \"brain trust\" of policy advisers, primarily composed of Columbia University and Harvard University professors. There were some who were not so sanguine about his chances, such as Walter Lippmann, the dean of political commentators, who observed of Roosevelt: \"He is a pleasant man who, without any important qualifications for the office, would very much like to be president.\"",
"title": "1932 presidential election"
},
{
"paragraph_id": 37,
"text": "However, Roosevelt's efforts as governor to address the effects of the depression in his own state established him as the front-runner for the 1932 Democratic presidential nomination. Roosevelt rallied the progressive supporters of the Wilson administration while also appealing to many conservatives, establishing himself as the leading candidate in the South and West. The chief opposition to Roosevelt's candidacy came from Northeastern conservatives, Speaker of the House John Nance Garner of Texas and Al Smith, the 1928 Democratic presidential nominee.",
"title": "1932 presidential election"
},
{
"paragraph_id": 38,
"text": "Roosevelt entered the convention with a delegate lead due to his success in the 1932 Democratic primaries, but most delegates entered the convention unbound to any particular candidate. On the first presidential ballot of the convention, Roosevelt received the votes of more than half but less than two-thirds of the delegates, with Smith finishing in a distant second place. Roosevelt then promised the vice-presidential nomination to Garner, who controlled the votes of Texas and California; Garner threw his support behind Roosevelt after the third ballot, and Roosevelt clinched the nomination on the fourth ballot. Roosevelt flew in from New York to Chicago after learning that he had won the nomination, becoming the first major-party presidential nominee to accept the nomination in person. His appearance was essential, to show himself as vigorous, despite the ravaging disease that disabled him physically.",
"title": "1932 presidential election"
},
{
"paragraph_id": 39,
"text": "In his acceptance speech, Roosevelt declared, \"I pledge you, I pledge myself to a new deal for the American people... This is more than a political campaign. It is a call to arms.\" Roosevelt promised securities regulation, tariff reduction, farm relief, government-funded public works, and other government actions to address the Great Depression. Reflecting changing public opinion, the Democratic platform included a call for the repeal of Prohibition; Roosevelt himself had not taken a public stand on the issue prior to the convention but promised to uphold the party platform. Otherwise, Roosevelt's primary campaign strategy was one of caution, intent upon avoiding mistakes that would distract from Hoover's failings on the economy. His statements attacked the incumbent and included no other specific policies or programs.",
"title": "1932 presidential election"
},
{
"paragraph_id": 40,
"text": "After the convention, Roosevelt won endorsements from several progressive Republicans, including George W. Norris, Hiram Johnson, and Robert La Follette Jr. He also reconciled with the party's conservative wing, and even Al Smith was persuaded to support the Democratic ticket. Hoover's handling of the Bonus Army further damaged the incumbent's popularity, as newspapers across the country criticized the use of force to disperse assembled veterans.",
"title": "1932 presidential election"
},
{
"paragraph_id": 41,
"text": "Roosevelt won 57% of the popular vote and carried all but six states. Historians and political scientists consider the 1932–36 elections to be a political realignment. Roosevelt's victory was enabled by the creation of the New Deal coalition, small farmers, the Southern whites, Catholics, big city political machines, labor unions, northern African Americans (southern ones were still disfranchised), Jews, intellectuals, and political liberals. The creation of the New Deal coalition transformed American politics and started what political scientists call the \"New Deal Party System\" or the Fifth Party System. Between the Civil War and 1929, Democrats had rarely controlled both houses of Congress and had won just four of seventeen presidential elections; from 1932 to 1979, Democrats won eight of twelve presidential elections and generally controlled both houses of Congress.",
"title": "1932 presidential election"
},
{
"paragraph_id": 42,
"text": "Roosevelt was elected in November 1932 but like his predecessors did not take office until the following March. After the election, President Hoover sought to convince Roosevelt to renounce much of his campaign platform and to endorse the Hoover administration's policies. Roosevelt refused Hoover's request to develop a joint program to stop the economic decline, claiming that it would tie his hands and that Hoover had the power to act.",
"title": "1932 presidential election"
},
{
"paragraph_id": 43,
"text": "During the transition, Roosevelt chose Howe as his chief of staff, and Farley as Postmaster General. Frances Perkins, as Secretary of Labor, became the first woman appointed to a cabinet position. William H. Woodin, a Republican industrialist close to Roosevelt, was the choice for Secretary of the Treasury, while Roosevelt chose Senator Cordell Hull of Tennessee as Secretary of State. Harold L. Ickes and Henry A. Wallace, two progressive Republicans, were selected for the roles of Secretary of the Interior and Secretary of Agriculture, respectively.",
"title": "1932 presidential election"
},
{
"paragraph_id": 44,
"text": "In February 1933, Roosevelt escaped an assassination attempt by Giuseppe Zangara, who expressed a \"hate for all rulers.\" As he was attempting to shoot Roosevelt, Zangara was struck by a woman with her purse; he instead mortally wounded Chicago Mayor Anton Cermak, who was sitting alongside Roosevelt.",
"title": "1932 presidential election"
},
{
"paragraph_id": 45,
"text": "As president, Roosevelt appointed powerful men to top positions but made all the major decisions, regardless of delays, inefficiency, or resentment. Analyzing the president's administrative style, Burns concludes:",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 46,
"text": "The president stayed in charge of his administration...by drawing fully on his formal and informal powers as Chief Executive; by raising goals, creating momentum, inspiring a personal loyalty, getting the best out of people...by deliberately fostering among his aides a sense of competition and a clash of wills that led to disarray, heartbreak, and anger but also set off pulses of executive energy and sparks of creativity...by handing out one job to several men and several jobs to one man, thus strengthening his own position as a court of appeals, as a depository of information, and as a tool of co-ordination; by ignoring or bypassing collective decision-making agencies, such as the Cabinet...and always by persuading, flattering, juggling, improvising, reshuffling, harmonizing, conciliating, manipulating.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 47,
"text": "When Roosevelt was inaugurated on March 4, 1933, the U.S. was at the nadir of the worst depression in its history. A quarter of the workforce was unemployed, and farmers were in deep trouble as prices had fallen by 60%. Industrial production had fallen by more than half since 1929. Two million people were homeless. By the evening of March 4, 32 of the 48 states—as well as the District of Columbia—had closed their banks.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 48,
"text": "Historians categorized Roosevelt's program as \"relief, recovery, and reform.\" Relief was urgently needed by tens of millions of unemployed. Recovery meant boosting the economy back to normal, and reform was required of the financial and banking systems. Through Roosevelt's series of 30 \"fireside chats\", he presented his proposals directly to the American public as a series of radio addresses. Energized by his own victory over paralytic illness, he used persistent optimism and activism to renew the national spirit.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 49,
"text": "On his second day in office, Roosevelt declared a four-day national \"bank holiday\", to end the run by depositors seeking to withdraw funds. He called for a special session of Congress on March 9, when Congress passed, almost sight unseen, the Emergency Banking Act. The act, first developed by the Hoover administration and Wall Street bankers, gave the president the power to determine the opening and closing of banks and authorized the Federal Reserve Banks to issue banknotes. The \"first 100 Days\" of the 73rd United States Congress saw an unprecedented amount of legislation and set a benchmark against which future presidents have been compared. When the banks reopened on Monday, March 15, stock prices rose by 15 percent and in the following weeks over $1 billion was returned to bank vaults, ending the bank panic. On March 22, Roosevelt signed the Cullen–Harrison Act, which brought Prohibition to a close.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 50,
"text": "Roosevelt saw the establishment of a number of agencies and measures designed to provide relief for the unemployed and others. The Federal Emergency Relief Administration (FERA), under the leadership of Harry Hopkins, distributed relief to state governments. The Public Works Administration (PWA), under Secretary of the Interior Harold Ickes, oversaw the construction of large-scale public works such as dams, bridges, and schools. The most popular of all New Deal agencies—and Roosevelt's favorite—was the Civilian Conservation Corps (CCC), which hired 250,000 unemployed men to work in rural projects. Roosevelt also expanded Hoover's Reconstruction Finance Corporation, which financed railroads and industry. Congress gave the Federal Trade Commission (FTC) broad regulatory powers and provided mortgage relief to millions of farmers and homeowners. Roosevelt also set up the Agricultural Adjustment Administration (AAA) to increase commodity prices, by paying farmers to leave land uncultivated and cut herds. In many instances, crops were plowed under and livestock killed, while many Americans died of hunger and were ill-clothed; critics labeled such policies \"utterly idiotic.\" On the positive side, nothing did more to rescue the farm family from isolation than the Rural Electrification Administration (REA), which brought electricity for the first time to millions of rural homes and with it such conveniences as radios and washing machines.\"",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 51,
"text": "Reform of the economy was the goal of the National Industrial Recovery Act (NIRA) of 1933. It sought to end cutthroat competition by forcing industries to establish rules such as minimum prices, agreements not to compete, and production restrictions. Industry leaders negotiated the rules with NIRA officials, who suspended antitrust laws in return for better wages. The Supreme Court in May 1935 declared NIRA unconstitutional by a unanimous decision, to Roosevelt's chagrin. He reformed financial regulations with the Glass–Steagall Act, creating the Federal Deposit Insurance Corporation (FDIC) to underwrite savings deposits. The act also limited affiliations between commercial banks and securities firms. In 1934, the Securities and Exchange Commission was created to regulate the trading of securities, while the Federal Communications Commission (FCC) was established to regulate telecommunications.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 52,
"text": "Recovery was sought through federal spending, as the NIRA included $3.3 billion (equivalent to $74.6 billion in 2022) of spending through the Public Works Administration. Roosevelt worked with Senator Norris to create the largest government-owned industrial enterprise in American history—the Tennessee Valley Authority (TVA)—which built dams and power stations, controlled floods, and modernized agriculture and home conditions in the poverty-stricken Tennessee Valley. However, natives criticized the TVA for displacing thousands of people for these projects. The Soil Conservation Service trained farmers in the proper methods of cultivation, and with the TVA, Roosevelt became the father of soil conservation. Executive Order 6102 declared that all privately held gold of American citizens was to be sold to the U.S. Treasury and the price raised from $20 to $35 per ounce. The goal was to counter the deflation which was paralyzing the economy.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 53,
"text": "Roosevelt tried to keep his campaign promise by cutting the federal budget. This included a reduction in military spending from $752 million in 1932 to $531 million in 1934 and a 40% cut in spending on veterans benefits. 500,000 veterans and widows were removed from the pension rolls, and benefits were reduced for the remainder. Federal salaries were cut and spending on research and education was reduced. The veterans were well organized and strongly protested, so most benefits were restored or increased by 1934. Veterans groups such as the American Legion and the Veterans of Foreign Wars won their campaign to transform their benefits from payments due in 1945 to immediate cash when Congress overrode the President's veto and passed the Bonus Act in January 1936. It pumped sums equal to 2% of the GDP into the consumer economy and had a major stimulus effect.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 54,
"text": "Roosevelt expected that his party would lose seats in the 1934 Congressional elections, as the president's party had done in most previous midterm elections. Unexpectedly the Democrats picked up seats in both houses of Congress. Empowered by the public's vote of confidence, the first item on Roosevelt's agenda in the 74th Congress was the creation of a social insurance program. The Social Security Act established Social Security and promised economic security for the elderly, the poor, and the sick. Roosevelt insisted that it should be funded by payroll taxes rather than from the general fund, saying, \"We put those payroll contributions there so as to give the contributors a legal, moral, and political right to collect their pensions and unemployment benefits. With those taxes in there, no damn politician can ever scrap my social security program.\" Compared with the social security systems in western European countries, the Social Security Act of 1935 was rather conservative. But for the first time, the federal government took responsibility for the economic security of the aged, the temporarily unemployed, dependent children, and disabled people. Against Roosevelt's original intention for universal coverage, the act excluded farmers, domestic workers, and other groups, which made up about forty percent of the labor force.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 55,
"text": "Roosevelt consolidated the various relief organizations, though some, like the PWA, continued to exist. After winning Congressional authorization for further funding of relief efforts, he established the Works Progress Administration (WPA). Under the leadership of Harry Hopkins, the WPA employed over three million people in its first year of operations. It undertook numerous massive construction projects in cooperation with local governments. It also set up the National Youth Administration and arts organizations.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 56,
"text": "The National Labor Relations Act guaranteed workers the right to collective bargaining through unions of their own choice. The act also established the National Labor Relations Board (NLRB) to facilitate wage agreements and suppress repeated labor disturbances. The act did not compel employers to reach an agreement with their employees, but it opened possibilities for American labor. The result was a tremendous growth of membership in the labor unions, especially in the mass-production sector. When the Flint sit-down strike threatened the production of General Motors, Roosevelt broke with the precedent set by many former presidents and refused to intervene; the strike ultimately led to the unionization of both General Motors and its rivals in the American automobile industry.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 57,
"text": "While the First New Deal of 1933 had broad support from most sectors, the Second New Deal challenged the business community. Conservative Democrats, led by Al Smith, fought back with the American Liberty League, savagely attacking Roosevelt and equating him with socialism. But Smith overplayed his hand, and his boisterous rhetoric let Roosevelt isolate his opponents and identify them with the wealthy vested interests that opposed the New Deal, strengthening Roosevelt for the 1936 landslide. By contrast, labor unions, energized by labor legislation, signed up millions of new members and became a major backer of Roosevelt's re-elections in 1936, 1940, and 1944.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 58,
"text": "Burns suggests that Roosevelt's policy decisions were guided more by pragmatism than ideology and that he \"was like the general of a guerrilla army whose columns, fighting blindly in the mountains through dense ravines and thickets, suddenly converge, half by plan and half by coincidence, and debouch into the plain below.\" Roosevelt argued that such apparently haphazard methodology was necessary. \"The country needs and, unless I mistake its temper, the country demands bold, persistent experimentation,\" he wrote. \"It is common sense to take a method and try it; if it fails, admit it frankly and try another. But above all, try something.\"",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 59,
"text": "Eight million workers remained unemployed in 1936, and though economic conditions had improved since 1932, they remained sluggish. By 1936, Roosevelt had lost the backing he once held in the business community because of his support for the NLRB and the Social Security Act. The Republicans had few alternative candidates and nominated Kansas Governor Alf Landon, a little-known bland candidate whose chances were damaged by the public re-emergence of the still-unpopular Herbert Hoover. While Roosevelt campaigned on his New Deal programs and continued to attack Hoover, Landon sought to win voters who approved of the goals of the New Deal but disagreed with its implementation.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 60,
"text": "An attempt by Louisiana Senator Huey Long to organize a left-wing third party collapsed after Long's assassination in 1935. The remnants, helped by Father Charles Coughlin, supported William Lemke of the newly formed Union Party. Roosevelt won re-nomination with little opposition at the 1936 Democratic National Convention, while his allies overcame Southern resistance to permanently abolish the long-established rule that had required Democratic presidential candidates to win the votes of two-thirds of the delegates rather than a simple majority.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 61,
"text": "In the election against Landon and a third-party candidate, Roosevelt won 60.8% of the vote and carried every state except Maine and Vermont. The Democratic ticket won the highest proportion of the popular vote. Democrats also expanded their majorities in Congress, winning control of over three-quarters of the seats in each house. The election also saw the consolidation of the New Deal coalition; while the Democrats lost some of their traditional allies in big business, they were replaced by groups such as organized labor and African Americans, the latter of whom voted Democratic for the first time since the Civil War. Roosevelt lost high-income voters, especially businessmen and professionals, but made major gains among the poor and minorities. He won 86 percent of the Jewish vote, 81 percent of Catholics, 80 percent of union members, 76 percent of Southerners, 76 percent of blacks in northern cities, and 75 percent of people on relief. Roosevelt carried 102 of the country's 106 cities with a population of 100,000 or more.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 62,
"text": "The Supreme Court became Roosevelt's primary domestic focus during his second term after the court overturned many of his programs, including NIRA. The more conservative members of the court upheld the principles of the Lochner era, which saw numerous economic regulations struck down on the basis of freedom of contract. Roosevelt proposed the Judicial Procedures Reform Bill of 1937, which would have allowed him to appoint an additional Justice for each incumbent Justice over the age of 70; in 1937, there were six Supreme Court Justices over the age of 70. The size of the Court had been set at nine since the passage of the Judiciary Act of 1869, and Congress had altered the number of Justices six other times throughout U.S. history. Roosevelt's \"court packing\" plan ran into intense political opposition from his own party, led by Vice President Garner since it upset the separation of powers. A bipartisan coalition of liberals and conservatives of both parties opposed the bill, and Chief Justice Charles Evans Hughes broke with precedent by publicly advocating the defeat of the bill. Any chance of passing the bill ended with the death of Senate Majority Leader Joseph Taylor Robinson in July 1937.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 63,
"text": "Starting with the 1937 case of West Coast Hotel Co. v. Parrish, the court began to take a more favorable view of economic regulations. Historians have described this as, \"the switch in time that saved nine.\" That same year, Roosevelt appointed a Supreme Court Justice for the first time, and by 1941, seven of the nine Justices had been appointed by Roosevelt. After Parrish, the Court shifted its focus from judicial review of economic regulations to the protection of civil liberties. Four of Roosevelt's Supreme Court appointees, Felix Frankfurter, Robert H. Jackson, Hugo Black, and William O. Douglas, were particularly influential in reshaping the jurisprudence of the Court.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 64,
"text": "With Roosevelt's influence on the wane following the failure of the Judicial Procedures Reform Bill of 1937, conservative Democrats joined with Republicans to block the implementation of further New Deal programs. Roosevelt did manage to pass some legislation, including the Housing Act of 1937, a second Agricultural Adjustment Act, and the Fair Labor Standards Act (FLSA) of 1938, which was the last major piece of New Deal legislation. The FLSA outlawed child labor, established a federal minimum wage, and required overtime pay for certain employees who work in excess of forty-hours per week. He also won passage of the Reorganization Act of 1939 and subsequently created the Executive Office of the President, making it \"the nerve center of the federal administrative system.\" When the economy began to deteriorate again in mid-1937, during the onset of the recession of 1937–1938, Roosevelt launched a rhetorical campaign against big business and monopoly power in the United States, alleging that the recession was the result of a capital strike and even ordering the Federal Bureau of Investigation to look for a criminal conspiracy (of which they found none). He then asked Congress for $5 billion (equivalent to $101.78 billion in 2022) in relief and public works funding. This managed to eventually create as many as 3.3 million WPA jobs by 1938. Projects accomplished under the WPA ranged from new federal courthouses and post offices to facilities and infrastructure for national parks, bridges, and other infrastructure across the country, and architectural surveys and archaeological excavations—investments to construct facilities and preserve important resources. Beyond this, however, Roosevelt recommended to a special congressional session only a permanent national farm act, administrative reorganization, and regional planning measures, all of which were leftovers from a regular session. According to Burns, this attempt illustrated Roosevelt's inability to settle on a basic economic program.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 65,
"text": "Determined to overcome the opposition of conservative Democrats in Congress, Roosevelt became involved in the 1938 Democratic primaries, actively campaigning for challengers who were more supportive of New Deal reform. Roosevelt failed badly, managing to defeat only one of the ten targeted, a conservative Democrat from New York City. In the November 1938 elections, Democrats lost six Senate seats and 71 House seats, with losses concentrated among pro-New Deal Democrats. When Congress reconvened in 1939, Republicans under Senator Robert Taft formed a Conservative coalition with Southern Democrats, virtually ending Roosevelt's ability to enact his domestic proposals. Despite their opposition to Roosevelt's domestic policies, many of these conservative Congressmen would provide crucial support for Roosevelt's foreign policy before and during World War II.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 66,
"text": "Roosevelt had a lifelong interest in the environment and conservation starting with his youthful interest in forestry on his family estate. Although he was never an outdoorsman or sportsman on Theodore Roosevelt's scale, his growth of the national systems was comparable. When Franklin was Governor of New York, the Temporary Emergency Relief Administration was essentially a state-level predecessor of the federal Civilian Conservation Corps, with 10,000 or more men building fire trails, combating soil erosion and planting tree seedlings in marginal farmland in the state of New York. As President, Roosevelt was active in expanding, funding, and promoting the National Park and National Forest systems. Their popularity soared, from three million visitors a year at the start of the decade to 15.5 million in 1939. The Civilian Conservation Corps enrolled 3.4 million young men and built 13,000 miles (21,000 kilometres) of trails, planted two billion trees, and upgraded 125,000 miles (201,000 kilometres) of dirt roads. Every state had its own state parks, and Roosevelt made sure that WPA and CCC projects were set up to upgrade them as well as the national systems.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 67,
"text": "Government spending increased from 8.0% of the gross national product (GNP) under Hoover in 1932 to 10.2% in 1936. The national debt as a percentage of the GNP had more than doubled under Hoover from 16% to 40% of the GNP in early 1933. It held steady at close to 40% as late as fall 1941, then grew rapidly during the war. The GNP was 34% higher in 1936 than in 1932 and 58% higher in 1940 on the eve of war. That is, the economy grew 58% from 1932 to 1940 in eight years of peacetime, and then grew 56% from 1940 to 1945 in five years of wartime. Unemployment fell dramatically during Roosevelt's first term. It increased in 1938 (\"a depression within a depression\") but continually declined after 1938. Total employment during Roosevelt's term expanded by 18.31 million jobs, with an average annual increase in jobs during his administration of 5.3%.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 68,
"text": "The main foreign policy initiative of Roosevelt's first term was the Good Neighbor Policy, which was a re-evaluation of U.S. policy toward Latin America. The United States frequently intervened in Latin America following the promulgation of the Monroe Doctrine in 1823, and the United States occupied several Latin American nations in the Banana Wars that occurred following the Spanish–American War of 1898. After Roosevelt took office, he withdrew U.S. forces from Haiti and reached new treaties with Cuba and Panama, ending their status as U.S. protectorates. In December 1933, Roosevelt signed the Montevideo Convention on the Rights and Duties of States, renouncing the right to intervene unilaterally in the affairs of Latin American countries. Roosevelt also normalized relations with the Soviet Union, which the United States had refused to recognize since the 1920s. He hoped to renegotiate the Russian debt from World War I and open trade relations, but no progress was made on either issue and \"both nations were soon disillusioned by the accord.\"",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 69,
"text": "The rejection of the Treaty of Versailles in 1919–1920 marked the dominance of isolationism in American foreign policy. Despite Roosevelt's Wilsonian background, he and Secretary of State Cordell Hull acted with great care not to provoke isolationist sentiment. The isolationist movement was bolstered in the early to mid-1930s by Senator Gerald Nye and others who succeeded in their effort to stop the \"merchants of death\" in the U.S. from selling arms abroad. This effort took the form of the Neutrality Acts; the president was refused a provision he requested giving him the discretion to allow the sale of arms to victims of aggression. He largely acquiesced to Congress's non-interventionist policies in the early-to-mid 1930s. In the interim, Fascist Italy under Benito Mussolini proceeded to overcome Ethiopia, and the Italians joined Nazi Germany under Adolf Hitler in supporting General Francisco Franco and the Nationalist cause in the Spanish Civil War. As that conflict drew to a close in early 1939, Roosevelt expressed regret in not aiding the Spanish Republicans. When Japan invaded China in 1937, isolationism limited Roosevelt's ability to aid China, despite atrocities like the Nanking Massacre and the USS Panay incident.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 70,
"text": "Germany annexed Austria in 1938, and soon turned its attention to its eastern neighbors. Roosevelt made it clear that, in the event of German aggression against Czechoslovakia, the U.S. would remain neutral. After completion of the Munich Agreement and the execution of Kristallnacht, American public opinion turned against Germany, and Roosevelt began preparing for a possible war with Germany. Relying on an interventionist political coalition of Southern Democrats and business-oriented Republicans, Roosevelt oversaw the expansion of U.S. airpower and war production capacity.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 71,
"text": "When World War II began in September 1939 with Germany's invasion of Poland and Britain and France's subsequent declaration of war upon Germany, Roosevelt sought ways to assist Britain and France militarily. Isolationist leaders like Charles Lindbergh and Senator William Borah successfully mobilized opposition to Roosevelt's proposed repeal of the Neutrality Act, but Roosevelt won Congressional approval of the sale of arms on a cash-and-carry basis. He also began a regular secret correspondence with Britain's First Lord of the Admiralty, Winston Churchill, in September 1939—the first of 1,700 letters and telegrams between them. Roosevelt forged a close personal relationship with Churchill, who became Prime Minister of the United Kingdom in May 1940.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 72,
"text": "The Fall of France in June 1940 shocked the American public, and isolationist sentiment declined. In July 1940, Roosevelt appointed two interventionist Republican leaders, Henry L. Stimson and Frank Knox, as Secretaries of War and the Navy, respectively. Both parties gave support to his plans for a rapid build-up of the American military, but the isolationists warned that Roosevelt would get the nation into an unnecessary war with Germany. In July 1940, a group of Congressmen introduced a bill that would authorize the nation's first peacetime draft, and with the support of the Roosevelt administration, the Selective Training and Service Act of 1940 passed in September. The size of the army increased from 189,000 men at the end of 1939 to 1.4 million men in mid-1941. In September 1940, Roosevelt openly defied the Neutrality Acts by reaching the Destroyers for Bases Agreement, which, in exchange for military base rights in the British Caribbean Islands, gave 50 WWI American destroyers to Britain.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 73,
"text": "While working under President Wilson, Roosevelt had perpetuated ideas of American racial superiority by believing that the people of Latin American were uncapable of self-government. However, by 1928 he had switched his point of view, becoming an advocate for cooperation. In an effort to denounce past U.S. interventionism and subdue any subsequent fears of Latin Americans, Roosevelt announced on March 4, 1933, during his inaugural address, \"In the field of World policy, I would dedicate this nation to the policy of the good neighbor, the neighbor who resolutely respects himself and, because he does so, respects the rights of others, the neighbor who respects his obligations and respects the sanctity of his agreements in and with a World of neighbors.\"",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 74,
"text": "In order to create a friendly relationship between the United States and Central as well as South American countries, Roosevelt sought to abstain from asserting military force in the region. This position was affirmed by Cordell Hull, Roosevelt's Secretary of State at a conference of American states in Montevideo in December 1933. Hull said: \"No country has the right to intervene in the internal or external affairs of another.\" Roosevelt then confirmed the policy in December of the same year: \"The definite policy of the United States from now on is one opposed to armed intervention.\" The fact that the policy was even put into place meant that the U.S. now recognised the maturity of Latin American countries and as a result were now more open to working together, especially when it comes to maintaining the peace. The policy, in the end, was yet another way for the U.S. to assert its own superiority.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 75,
"text": "In the months prior to the July 1940 Democratic National Convention, there was much speculation as to whether Roosevelt would run for an unprecedented third term. The president was silent, and even his closest advisors were in the dark. The two-term tradition, although not yet enshrined in the Constitution, had been established by George Washington when he refused to run for a third term in the 1796 presidential election. Roosevelt refused to give a definitive statement as to his willingness to be a candidate again, and he even indicated to some ambitious Democrats, such as James Farley, that he would not run for a third term and that they could seek the Democratic nomination. Farley and Vice President John Garner were not pleased with Roosevelt when he ultimately made the decision to break from Washington's precedent. As Germany swept through Western Europe and menaced Britain in mid-1940, Roosevelt decided that only he had the necessary experience and skills to see the nation safely through the Nazi threat. He was aided by the party's political bosses, who feared that no Democrat except Roosevelt could defeat Wendell Willkie, the popular Republican nominee.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 76,
"text": "At the July 1940 Democratic Convention in Chicago, Roosevelt easily swept aside challenges from Farley and Vice President Garner, who had turned against Roosevelt in his second term because of his liberal economic and social policies. To replace Garner on the ticket, Roosevelt turned to Secretary of Agriculture Henry Wallace of Iowa, a former Republican who strongly supported the New Deal and was popular in farm states. The choice was strenuously opposed by many of the party's conservatives, who felt Wallace was too radical and \"eccentric\" in his private life to be an effective running mate. But Roosevelt insisted that without Wallace on the ticket he would decline re-nomination, and Wallace won the vice-presidential nomination, defeating Speaker of the House William B. Bankhead and other candidates.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 77,
"text": "A late August poll taken by Gallup found the race to be essentially tied, but Roosevelt's popularity surged in September following the announcement of the Destroyers for Bases Agreement. Willkie supported much of the New Deal as well as rearmament and aid to Britain but warned that Roosevelt would drag the country into another European war. Responding to Willkie's attacks, Roosevelt promised to keep the country out of the war. Over its last month, the campaign degenerated into a series of outrageous accusations and mud-slinging, if not by the two candidates themselves then by their respective parties. Roosevelt won the 1940 election with 55% of the popular vote, 38 of the 48 states, and almost 85% of the electoral vote.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 78,
"text": "World War II dominated Roosevelt's attention, with far more time devoted to world affairs than ever before. Domestic politics and relations with Congress were largely shaped by his efforts to achieve total mobilization of the nation's economic, financial, and institutional resources for the war effort. Even relationships with Latin America and Canada were structured by wartime demands. Roosevelt maintained close personal control of all major diplomatic and military decisions, working closely with his generals and admirals, the war and Navy departments, the British, and even the Soviet Union. His key advisors on diplomacy were Harry Hopkins (who was based in the White House), Sumner Welles (based in the State Department), and Henry Morgenthau Jr. at Treasury. In military affairs, Roosevelt worked most closely with Secretary Henry L. Stimson at the War Department, Army Chief of Staff George Marshall, and Admiral William D. Leahy.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 79,
"text": "By late 1940, re-armament was in high gear, partly to expand and re-equip the Army and Navy and partly to become the \"Arsenal of Democracy\" for Britain and other countries. With his Four Freedoms speech in January 1941, Roosevelt laid out the case for an Allied battle for basic rights throughout the world. Assisted by Willkie, Roosevelt won Congressional approval of the Lend-Lease program, which directed massive military and economic aid to Britain, and China. In sharp contrast to the loans of World War I, there would be no repayment after the war. As Roosevelt took a firmer stance against Japan, Germany, and Italy, American isolationists such as Charles Lindbergh and the America First Committee vehemently attacked Roosevelt as an irresponsible warmonger. When Germany invaded the Soviet Union in June 1941, Roosevelt agreed to extend Lend-Lease to the Soviets. Thus, Roosevelt had committed the U.S. to the Allied side with a policy of \"all aid short of war.\" By July 1941, Roosevelt authorized the creation of the Office of the Coordinator of Inter-American Affairs (OCIAA) to counter perceived propaganda efforts in Latin America by Germany and Italy.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 80,
"text": "In August 1941, Roosevelt and Churchill conducted a highly secret bilateral meeting in which they drafted the Atlantic Charter, conceptually outlining global wartime and postwar goals. This would be the first of several wartime conferences; Churchill and Roosevelt would meet ten more times in person. Though Churchill pressed for an American declaration of war against Germany, Roosevelt believed that Congress would reject any attempt to bring the United States into the war. In September, a German submarine fired on the U.S. destroyer Greer, and Roosevelt declared that the U.S. Navy would assume an escort role for Allied convoys in the Atlantic as far east as Great Britain and would fire upon German ships or submarines (U-boats) of the Kriegsmarine if they entered the U.S. Navy zone. According to historian George Donelson Moss, Roosevelt \"misled\" Americans by reporting the Greer incident as if it would have been an unprovoked German attack on a peaceful American ship. This \"shoot on sight\" policy effectively declared naval war on Germany and was favored by Americans by a margin of 2-to-1.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 81,
"text": "After the German invasion of Poland, the primary concern of both Roosevelt and his top military staff was on the war in Europe, but Japan also presented foreign policy challenges. Relations with Japan had continually deteriorated since its invasion of Manchuria in 1931, and they had further worsened with Roosevelt's support of China. With the war in Europe occupying the attention of the major colonial powers, Japanese leaders eyed vulnerable colonies such as the Dutch East Indies, French Indochina, and British Malaya. After Roosevelt announced a $100 million loan (equivalent to $2.1 billion in 2022) to China in reaction to Japan's occupation of northern French Indochina, Japan signed the Tripartite Pact with Germany and Italy. The pact bound each country to defend the others against attack, and Germany, Japan, and Italy became known as the Axis powers. Overcoming those who favored invading the Soviet Union, the Japanese Army high command successfully advocated for the conquest of Southeast Asia to ensure continued access to raw materials. In July 1941, after Japan occupied the remainder of French Indochina, Roosevelt cut off the sale of oil to Japan, depriving Japan of more than 95 percent of its oil supply. He also placed the Philippine military under American command and reinstated General Douglas MacArthur into active duty to command U.S. forces in the Philippines.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 82,
"text": "The Japanese were incensed by the embargo and Japanese leaders became determined to attack the United States unless it lifted the embargo. The Roosevelt administration was unwilling to reverse the policy, and Secretary of State Hull blocked a potential summit between Roosevelt and Prime Minister Fumimaro Konoe. After diplomatic efforts to end the embargo failed, the Privy Council of Japan authorized a strike against the United States. The Japanese believed that the destruction of the United States Asiatic Fleet (stationed in the Philippines) and the United States Pacific Fleet (stationed at Pearl Harbor in Hawaii) was vital to the conquest of Southeast Asia. On the morning of December 7, 1941, the Japanese struck the U.S. naval base at Pearl Harbor with a surprise attack, knocking out the main American battleship fleet and killing 2,403 American servicemen and civilians. At the same time, separate Japanese task forces attacked Thailand, British Hong Kong, the Philippines, and other targets. Roosevelt called for war in his \"Infamy Speech\" to Congress, in which he said: \"Yesterday, December 7, 1941—a date which will live in infamy—the United States of America was suddenly and deliberately attacked by naval and air forces of the Empire of Japan.\" In a nearly unanimous vote, Congress declared war on Japan. After the Japanese attack at Pearl Harbor, antiwar sentiment in the United States largely evaporated overnight. On December 11, 1941, Hitler and Mussolini declared war on the United States, which responded in kind.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 83,
"text": "A majority of scholars have rejected the conspiracy theories that Roosevelt, or any other high government officials, knew in advance about the Japanese attack on Pearl Harbor. The Japanese had kept their secrets closely guarded. Senior American officials were aware that war was imminent, but they did not expect an attack on Pearl Harbor. Roosevelt had expected that the Japanese would attack either the Dutch East Indies or Thailand.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 84,
"text": "In late December 1941, Churchill and Roosevelt met at the Arcadia Conference, which established a joint strategy between the U.S. and Britain. Both agreed on a Europe first strategy that prioritized the defeat of Germany before Japan. The U.S. and Britain established the Combined Chiefs of Staff to coordinate military policy and the Combined Munitions Assignments Board to coordinate the allocation of supplies. An agreement was also reached to establish a centralized command in the Pacific theater called ABDA, named for the American, British, Dutch, and Australian forces in the theater. On January 1, 1942, the United States, Britain, China, the Soviet Union, and twenty-two other countries (the Allied Powers) issued the Declaration by United Nations, in which each nation pledged to defeat the Axis powers.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 85,
"text": "In 1942, Roosevelt formed a new body, the Joint Chiefs of Staff, which made the final decisions on American military strategy. Admiral Ernest J. King as Chief of Naval Operations commanded the Navy and Marines, while General George C. Marshall led the Army and was in nominal control of the Air Force, which in practice was commanded by General Hap Arnold. The Joint Chiefs were chaired by Admiral William D. Leahy, the most senior officer in the military. Roosevelt avoided micromanaging the war and let his top military officers make most decisions. Roosevelt's civilian appointees handled the draft and procurement of men and equipment, but no civilians—not even the secretaries of War or Navy—had a voice in strategy. Roosevelt avoided the State Department and conducted high-level diplomacy through his aides, especially Harry Hopkins, whose influence was bolstered by his control of the Lend Lease funds.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 86,
"text": "In August 1939, Leo Szilard and Albert Einstein sent the Einstein–Szilárd letter to Roosevelt, warning of the possibility of a German project to develop nuclear weapons. Szilard realized that the recently discovered process of nuclear fission could be used to create a nuclear chain reaction that could be used as a weapon of mass destruction. Roosevelt feared the consequences of allowing Germany to have sole possession of the technology and authorized preliminary research into nuclear weapons. After the attack on Pearl Harbor, the Roosevelt administration secured the funds needed to continue research and selected General Leslie Groves to oversee the Manhattan Project, which was charged with developing the first nuclear weapons. Roosevelt and Churchill agreed to jointly pursue the project, and Roosevelt helped ensure that American scientists cooperated with their British counterparts.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 87,
"text": "Roosevelt coined the term \"Four Policemen\" to refer to the \"Big Four\" Allied powers of World War II, the United States, the United Kingdom, the Soviet Union, and China. The \"Big Three\" of Roosevelt, Winston Churchill, and Soviet leader Joseph Stalin, together with Chinese Generalissimo Chiang Kai-shek, cooperated informally on a plan in which American and British troops concentrated in the West; Soviet troops fought on the Eastern front; and Chinese, British and American troops fought in Asia and the Pacific. The United States also continued to send aid via the Lend-Lease program to the Soviet Union and other countries. The Allies formulated strategy in a series of high-profile conferences as well as by contact through diplomatic and military channels. Beginning in May 1942, the Soviets urged an Anglo-American invasion of German-occupied France in order to divert troops from the Eastern front. Concerned that their forces were not yet ready for an invasion of France, Churchill and Roosevelt decided to delay such an invasion until at least 1943 and instead focus on a landing in North Africa, known as Operation Torch.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 88,
"text": "In November 1943, Roosevelt, Churchill, and Stalin met to discuss strategy and post-war plans at the Tehran Conference, where Roosevelt met Stalin for the first time. At the conference, Britain and the United States committed to opening a second front against Germany in 1944, while Stalin committed to entering the war against Japan at an unspecified date. Subsequent conferences at Bretton Woods and Dumbarton Oaks established the framework for the post-war international monetary system and the United Nations, an intergovernmental organization similar to the failed League of Nations. Taking up the Wilsonian mantle, Roosevelt pushed as his highest postwar priority the establishment of the United Nations. Roosevelt expected it would be controlled by Washington, Moscow, London and Beijing, and would resolve all major world problems.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 89,
"text": "Roosevelt, Churchill, and Stalin met for a second time at the February 1945 Yalta Conference in Crimea. With the end of the war in Europe approaching, Roosevelt's primary focus was on convincing Stalin to enter the war against Japan; the Joint Chiefs had estimated that an American invasion of Japan would cause as many as one million American casualties. In return for the Soviet Union's entrance into the war against Japan, the Soviet Union was promised control of Asian territories such as Sakhalin Island. The three leaders agreed to hold a conference in 1945 to establish the United Nations, and they also agreed on the structure of the United Nations Security Council, which would be charged with ensuring international peace and security. Roosevelt did not push for the immediate evacuation of Soviet soldiers from Poland, but he won the issuance of the Declaration on Liberated Europe, which promised free elections in countries that had been occupied by Germany. Germany itself would not be dismembered but would be jointly occupied by the United States, France, Britain, and the Soviet Union. Against Soviet pressure, Roosevelt and Churchill refused to consent to impose huge reparations and deindustrialization on Germany after the war. Roosevelt's role in the Yalta Conference has been controversial; critics charge that he naively trusted the Soviet Union to allow free elections in Eastern Europe, while supporters argue that there was little more that Roosevelt could have done for the Eastern European countries given the Soviet occupation and the need for cooperation with the Soviet Union during and after the war.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 90,
"text": "The Allies invaded French North Africa in November 1942, securing the surrender of Vichy French forces within days of landing. At the January 1943 Casablanca Conference, the Allies agreed to defeat Axis forces in North Africa and then launch an invasion of Sicily, with an attack on France to take place in 1944. At the conference, Roosevelt also announced that he would only accept the unconditional surrender of Germany, Japan, and Italy. In February 1943, the Soviet Union won a major victory at the Battle of Stalingrad, and in May 1943, the Allies secured the surrender of over 250,000 German and Italian soldiers in North Africa, ending the North African Campaign. The Allies launched an invasion of Sicily in July 1943, capturing the island by the end of the following month. In September 1943, the Allies secured an armistice from Italian Prime Minister Pietro Badoglio, but Germany quickly restored Mussolini to power. The Allied invasion of mainland Italy commenced in September 1943, but the Italian Campaign continued until 1945 as German and Italian troops resisted the Allied advance.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 91,
"text": "To command the invasion of France, Roosevelt chose General Dwight D. Eisenhower, who had successfully commanded a multinational coalition in North Africa and Sicily. Eisenhower chose to launch Operation Overlord on June 6, 1944. Supported by 12,000 aircraft and the largest naval force ever assembled, the Allies successfully established a beachhead in Normandy and then advanced further into France. Though reluctant to back an unelected government, Roosevelt recognized Charles de Gaulle's Provisional Government of the French Republic as the de facto government of France in July 1944. After most of France had been liberated from German occupation, Roosevelt granted formal recognition to de Gaulle's government in October 1944. Over the following months, the Allies liberated more territory from Nazi occupation and began the invasion of Germany. By April 1945, Nazi resistance was crumbling in the face of advances by both the Western Allies and the Soviet Union.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 92,
"text": "In the opening weeks of the war, Japan conquered the Philippines and the British and Dutch colonies in Southeast Asia. The Japanese advance reached its maximum extent by June 1942, when the U.S. Navy scored a decisive victory at the Battle of Midway. American and Australian forces then began a slow and costly strategy called island hopping or leapfrogging through the Pacific Islands, with the objective of gaining bases from which strategic airpower could be brought to bear on Japan and from which Japan could ultimately be invaded. In contrast to Hitler, Roosevelt took no direct part in the tactical naval operations, though he approved strategic decisions. Roosevelt gave way in part to insistent demands from the public and Congress that more effort be devoted against Japan, but he always insisted on Germany first. The strength of the Japanese navy was decimated in the Battle of Leyte Gulf, and by April 1945 the Allies had re-captured much of their lost territory in the Pacific.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 93,
"text": "The home front was subject to dynamic social changes throughout the war, though domestic issues were no longer Roosevelt's most urgent policy concern. The military buildup spurred economic growth. Unemployment fell in half from 7.7 million in spring 1940 to 3.4 million in fall 1941 and fell in half again to 1.5 million in fall 1942, out of a labor force of 54 million. There was a growing labor shortage, accelerating the second wave of the Great Migration of African Americans, farmers and rural populations to manufacturing centers. African Americans from the South went to California and other West Coast states for new jobs in the defense industry. To pay for increased government spending, in 1941 Roosevelt proposed that Congress enact an income tax rate of 99.5% on all income over $100,000; when the proposal failed, he issued an executive order imposing an income tax of 100% on income over $25,000, which Congress rescinded. The Revenue Act of 1942 instituted top tax rates as high as 94% (after accounting for the excess profits tax), greatly increased the tax base, and instituted the first federal withholding tax. In 1944, Roosevelt requested that Congress enact legislation which would tax all \"unreasonable\" profits, both corporate and individual, and thereby support his declared need for over $10 billion in revenue for the war and other government measures. Congress overrode Roosevelt's veto to pass a smaller revenue bill raising $2 billion.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 94,
"text": "In 1942, with the United States now in the conflict, war production increased dramatically but fell short of the goals established by the president, due in part to manpower shortages. The effort was also hindered by numerous strikes, especially among union workers in the coal mining and railroad industries, which lasted well into 1944. Nonetheless, between 1941 and 1945, the United States produced 2.4 million trucks, 300,000 military aircraft, 88,400 tanks, and 40 billion rounds of ammunition. The production capacity of the United States dwarfed that of other countries; for example, in 1944, the United States produced more military aircraft than the combined production of Germany, Japan, Britain, and the Soviet Union. The White House became the ultimate site for labor mediation, conciliation or arbitration. One particular battle royale occurred between Vice President Wallace, who headed the Board of Economic Warfare, and Jesse H. Jones, in charge of the Reconstruction Finance Corporation; both agencies assumed responsibility for the acquisition of rubber supplies and came to loggerheads over funding. Roosevelt resolved the dispute by dissolving both agencies. In 1943, Roosevelt established the Office of War Mobilization to oversee the home front; the agency was led by James F. Byrnes, who came to be known as the \"assistant president\" due to his influence.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 95,
"text": "Roosevelt's 1944 State of the Union Address advocated that Americans should think of basic economic rights as a Second Bill of Rights. He stated that all Americans should have the right to \"adequate medical care\", \"a good education\", \"a decent home\", and a \"useful and remunerative job\". In the most ambitious domestic proposal of his third term, Roosevelt proposed the G.I. Bill, which would create a massive benefits program for returning soldiers. Benefits included post-secondary education, medical care, unemployment insurance, job counseling, and low-cost loans for homes and businesses. The G.I. Bill passed unanimously in both houses of Congress and was signed into law in June 1944. Of the fifteen million Americans who served in World War II, more than half benefitted from the educational opportunities provided for in the G.I. Bill.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 96,
"text": "Roosevelt, a chain-smoker throughout his entire adult life, had been in declining physical health since at least 1940. In March 1944, shortly after his 62nd birthday, he underwent testing at Bethesda Hospital and was found to have high blood pressure, atherosclerosis, coronary artery disease causing angina pectoris, and congestive heart failure.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 97,
"text": "Hospital physicians and two outside specialists ordered Roosevelt to rest. His personal physician, Admiral Ross McIntire, created a daily schedule that banned business guests for lunch and incorporated two hours of rest each day. During the 1944 re-election campaign, McIntire denied several times that Roosevelt's health was poor; on October 12, for example, he announced that \"The President's health is perfectly OK. There are absolutely no organic difficulties at all.\" Roosevelt realized that his declining health could eventually make it impossible for him to continue as president, and in 1945 he told a confidant that he might resign from the presidency following the end of the war.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 98,
"text": "While some Democrats had opposed Roosevelt's nomination in 1940, the president faced little difficulty in securing his re-nomination at the 1944 Democratic National Convention. Roosevelt made it clear before the convention that he was seeking another term, and on the lone presidential ballot of the convention, Roosevelt won the vast majority of delegates, although a minority of Southern Democrats voted for Harry F. Byrd. Party leaders prevailed upon Roosevelt to drop Vice President Wallace from the ticket, believing him to be an electoral liability and a poor potential successor in case of Roosevelt's death. Roosevelt preferred Byrnes as Wallace's replacement but was convinced to support Senator Harry S. Truman of Missouri, who had earned renown for his investigation of war production inefficiency and was acceptable to the various factions of the party. On the second vice presidential ballot of the convention, Truman defeated Wallace to win the nomination.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 99,
"text": "The Republicans nominated Thomas E. Dewey, the governor of New York, who had a reputation as a liberal in his party. They accused the Roosevelt administration of domestic corruption and bureaucratic inefficiency, but Dewey's most effective gambit was to raise discreetly the age issue. He assailed the President as a \"tired old man\" with \"tired old men\" in his cabinet, pointedly suggesting that the President's lack of vigor had produced a less than vigorous economic recovery. Roosevelt, as most observers could see from his weight loss and haggard appearance, was a tired man in 1944. But upon entering the campaign in earnest in late September 1944, Roosevelt displayed enough passion and fight to allay most concerns and to deflect Republican attacks. With the war still raging, he urged voters not to \"change horses in mid-stream.\" Labor unions, which had grown rapidly in the war, fully supported Roosevelt. Roosevelt and Truman won the 1944 election by a comfortable margin, defeating Dewey and his running mate John W. Bricker with 53.4% of the popular vote and 432 out of the 531 electoral votes. The president campaigned in favor of a strong United Nations, so his victory symbolized support for the nation's future participation in the international community.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 100,
"text": "When Roosevelt returned to the United States from the Yalta Conference, many were shocked to see how old, thin and frail he looked. He spoke while seated in the well of the House, an unprecedented concession to his physical incapacity. During March 1945, he sent strongly worded messages to Stalin accusing him of breaking his Yalta commitments over Poland, Germany, prisoners of war and other issues. When Stalin accused the western Allies of plotting behind his back a separate peace with Hitler, Roosevelt replied: \"I cannot avoid a feeling of bitter resentment towards your informers, whoever they are, for such vile misrepresentations of my actions or those of my trusted subordinates.\" On March 29, 1945, Roosevelt went to the Little White House at Warm Springs, Georgia, to rest before his anticipated appearance at the founding conference of the United Nations.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 101,
"text": "In the afternoon of April 12, 1945, in Warm Springs, Georgia, while sitting for a portrait by Elizabeth Shoumatoff, Roosevelt said: \"I have a terrific headache.\" He then slumped forward in his chair, unconscious, and was carried into his bedroom. The president's attending cardiologist, Howard Bruenn, diagnosed the medical emergency as a massive intracerebral hemorrhage. At 3:35 p.m. that day, Roosevelt died at the age of 63.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 102,
"text": "The following morning, Roosevelt's body was placed in a flag-draped coffin and loaded onto the presidential train for the trip back to Washington. Along the route, thousands flocked to the tracks to pay their respects. After a White House funeral on April 14, Roosevelt was transported by train from Washington, D.C., to his place of birth at Hyde Park. On April 15 he was buried, per his wish, in the rose garden of his Springwood estate.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 103,
"text": "Roosevelt's declining physical health had been kept secret from the public. His death was met with shock and grief across the world. Germany surrendered during the 30-day mourning period, but Harry Truman (who had succeeded Roosevelt as president) ordered flags to remain at half-staff; he also dedicated Victory in Europe Day and its celebrations to Roosevelt's memory. World War II finally ended with the signed surrender of Japan in September.",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 104,
"text": "Coincidentally, on April 12, 1945, a devastating tornado outbreak occurred in the United States, which killed 128 people and injured over a thousand others. The tornado outbreak included the fourth deadliest tornado in Oklahoma history, which leveled a third of the town of Antlers. Roosevelt's death overshadowed what would have \"commanded national media attention\" for a while. Tornado expert Thomas P. Grazulis said that, \"even nearby newspapers had more information on the death of the President than on the tornado\".",
"title": "Presidency (1933–1945)"
},
{
"paragraph_id": 105,
"text": "Roosevelt was viewed as a hero by many African Americans, Catholics, and Jews, and he was highly successful in attracting large majorities of these voters into his New Deal coalition. From his first term until 1939, the Mexican Repatriation started by President Herbert Hoover continued under Roosevelt, which scholars today contend was a form of ethnic cleansing towards Mexican Americans. Roosevelt ended federal involvement in the deportations. After 1934, the number of deportations fell by approximately 50 percent. However, Roosevelt did not attempt to suppress the deportations on a local or state level. Mexican Americans were the only group explicitly excluded from New Deal benefits. The deprival of due process for Mexican Americans is cited as a precedent for Roosevelt's internment of Japanese Americans in concentration camps during World War II. Roosevelt won strong support from Chinese Americans and Filipino Americans, but not Japanese Americans, as he presided over their internment during the war. African Americans and Native Americans fared well in two New Deal relief programs, the Civilian Conservation Corps and the Indian Reorganization Act, respectively. Sitkoff reports that the WPA \"provided an economic floor for the whole black community in the 1930s, rivaling both agriculture and domestic service as the chief source\" of income.",
"title": "Civil rights, repatriation, internment, and the Jews"
},
{
"paragraph_id": 106,
"text": "In contrast to Presidents Harding and Coolidge, Roosevelt stopped short of joining NAACP leaders in pushing for federal anti-lynching legislation. He asserted that such legislation was unlikely to pass and that his support for it would alienate Southern congressmen though by 1940 even his conservative Texas vice-president, Garner, supported federal action against lynching.",
"title": "Civil rights, repatriation, internment, and the Jews"
},
{
"paragraph_id": 107,
"text": "In his twelve years as president, Roosevelt did not appoint or nominate a single African American as secretary or assistant secretary to his cabinet. About one-hundred African Americans met with each other informally, however, to provide the administration with advice on issues related to African Americans. Although sometimes described as an \"Black Cabinet,\" Roosevelt never officially acknowledged it as such nor did he make \"appointments\" to it",
"title": "Civil rights, repatriation, internment, and the Jews"
},
{
"paragraph_id": 108,
"text": "First Lady Eleanor Roosevelt vocally supported efforts designed to aid the African American community, including the Fair Labor Standards Act, which helped boost wages for nonwhite workers in the South. In 1941, Roosevelt established the Fair Employment Practices Committee (FEPC) to implement Executive Order 8802, which prohibited racial and religious discrimination in employment among defense contractors. The FEPC was the first national program directed against employment discrimination, and it played a major role in opening up new employment opportunities to non-white workers. During World War II, the proportion of African American men employed in manufacturing positions rose significantly. In response to Roosevelt's policies, African Americans increasingly defected from the Republican Party during the 1930s and 1940s, becoming an important Democratic voting bloc in several Northern states.",
"title": "Civil rights, repatriation, internment, and the Jews"
},
{
"paragraph_id": 109,
"text": "The attack on Pearl Harbor raised concerns in the public regarding the possibility of sabotage by Japanese Americans. This suspicion was fed by long-standing racism against Japanese immigrants, as well as the findings of the Roberts Commission, which concluded that the attack on Pearl Harbor had been assisted by Japanese spies. On February 19, 1942, President Roosevelt signed Executive Order 9066, which relocated 110,000 Japanese-American citizens and immigrants, most of whom lived on the Pacific Coast. They were forced to liquidate their properties and businesses and interned in hastily built camps in interior, harsh locations. Internment was consistent with the racial views expressed in Roosevelt's articles during the 1920s for the Macon Telegraph condemning the “the mingling of Asiatic blood with European or American blood” and praising California’s laws to bar Japanese immigrants from owning land as well his confidential suggestion in 1936 that Japanese Americans in Hawaii greeting Japanese ships or having any connection with their officers be put \"on a special list of those who would be the first to be placed in a concentration camp\" in the event of war.",
"title": "Civil rights, repatriation, internment, and the Jews"
},
{
"paragraph_id": 110,
"text": "Roosevelt delegated the decision for internment to Secretary of War Stimson, who in turn relied on the judgment of Assistant Secretary of War John J. McCloy. The Supreme Court upheld the constitutionality of the executive order in the 1944 case of Korematsu v. United States. A much smaller number of German and Italian citizens were arrested or placed into internment camps. Unlike the concentration camps for Japanese Americans, however, they were not sent to them on the sole basis of racial ancestry.",
"title": "Civil rights, repatriation, internment, and the Jews"
},
{
"paragraph_id": 111,
"text": "",
"title": "Civil rights, repatriation, internment, and the Jews"
},
{
"paragraph_id": 112,
"text": "There is controversy among historians about Roosevelt's attitude to Jews and the Holocaust. Arthur M. Schlesinger Jr. says Roosevelt \"did what he could do\" to help Jews; David Wyman says Roosevelt's record on Jewish refugees and their rescue is \"very poor\" and one of the worst failures of his presidency. In 1923, as a member of the Harvard board of directors, Roosevelt decided there were too many Jewish students at Harvard University and helped institute a quota to limit the number of Jews admitted to Harvard. After Kristallnacht in 1938, Roosevelt had his ambassador to Germany recalled back to Washington. He did not loosen immigration quotas but did allow German Jews already in the U.S. on visas to stay indefinitely. According to Rafael Medoff, the U.S. president could have saved 190,000 Jewish lives by telling his State Department to fill immigration quotas to the legal limit, but his administration discouraged and disqualified Jewish refugees based on its prohibitive requirements that left less than 25% of the quotas filled.",
"title": "Civil rights, repatriation, internment, and the Jews"
},
{
"paragraph_id": 113,
"text": "Hitler chose to implement the \"Final Solution\"—the extermination of the European Jewish population—by January 1942, and American officials learned of the scale of the Nazi extermination campaign in the following months. Against the objections of the State Department, Roosevelt convinced the other Allied leaders to jointly issue the Joint Declaration by Members of the United Nations, which condemned the ongoing Holocaust and warned to try its perpetrators as war criminals. In 1943, Roosevelt told U.S. government officials that there should be limits on Jews in various professions to \"eliminate the specific and understandable complaints which the Germans bore towards the Jews in Germany.\" The same year, Roosevelt was personally briefed by Polish Home Army intelligence agent Jan Karski who was an eyewitness of the Holocaust; pleading for action, Karski told him that 1.8 million Jews had already been exterminated. Karski recalled that in response, Roosevelt \"did not ask one question about the Jews.\" In January 1944, Roosevelt established the War Refugee Board to aid Jews and other victims of Axis atrocities. Aside from these actions, Roosevelt believed that the best way to help the persecuted populations of Europe was to end the war as quickly as possible. Top military leaders and War Department leaders rejected any campaign to bomb the extermination camps or the rail lines leading to the camps, fearing it would be a diversion from the war effort. According to biographer Jean Edward Smith, there is no evidence that anyone ever proposed such a campaign to Roosevelt.",
"title": "Civil rights, repatriation, internment, and the Jews"
},
{
"paragraph_id": 114,
"text": "Roosevelt is widely considered to be one of the most important figures in the history of the United States, as well as one of the most influential figures of the 20th century. Historians and political scientists consistently rank Roosevelt, George Washington, and Abraham Lincoln as the three greatest presidents, although the order varies. Reflecting on Roosevelt's presidency, \"which brought the United States through the Great Depression and World War II to a prosperous future\", biographer Jean Edward Smith said in 2007, \"He lifted himself from a wheelchair to lift the nation from its knees.\"",
"title": "Legacy"
},
{
"paragraph_id": 115,
"text": "His commitment to the working class and unemployed in need of relief in the nation's longest recession made him a favorite of the blue collar workers, labor unions, and ethnic minorities. The rapid expansion of government programs that occurred during Roosevelt's term redefined the role of the government in the United States, and Roosevelt's advocacy of government social programs was instrumental in redefining liberalism for coming generations. Roosevelt firmly established the United States' leadership role on the world stage, with his role in shaping and financing World War II. His isolationist critics faded away, and even the Republicans joined in his overall policies. He also created a new understanding of the presidency, permanently increasing the power of the president at the expense of Congress.",
"title": "Legacy"
},
{
"paragraph_id": 116,
"text": "His Second Bill of Rights became, according to historian Joshua Zeitz, \"the basis of the Democratic Party's aspirations for the better part of four decades.\" After his death, his widow, Eleanor, continued to be a forceful presence in U.S. and world politics, serving as delegate to the conference which established the United Nations and championing civil rights and liberalism generally. Some junior New Dealers played leading roles in the presidencies of Truman, John Kennedy, and Lyndon Johnson. Kennedy came from a Roosevelt-hating family. Historian William Leuchtenburg says that before 1960, \"Kennedy showed a conspicuous lack of inclination to identify himself as a New Deal liberal.\" He adds, as president, \"Kennedy never wholly embraced the Roosevelt tradition and at times he deliberately severed himself from it.\" By contrast, young Lyndon Johnson had been an enthusiastic New Dealer and a favorite of Roosevelt. Johnson modelled his presidency on Roosevelt's and relied heavily on New Deal lawyer Abe Fortas, as well as James H. Rowe, Anna M. Rosenberg, Thomas Gardiner Corcoran, and Benjamin V. Cohen.",
"title": "Legacy"
},
{
"paragraph_id": 117,
"text": "During his presidency, and continuing to a lesser extent afterwards, there has been much criticism of Roosevelt, some of it intense. Critics have questioned not only his policies, positions, and the consolidation of power that occurred due to his responses to the crises of the Depression and World War II but also his breaking with tradition by running for a third term as president. Long after his death, new lines of attack criticized Roosevelt's policies regarding helping the Jews of Europe, incarcerating the Japanese on the West Coast, and opposing anti-lynching legislation.",
"title": "Legacy"
},
{
"paragraph_id": 118,
"text": "Roosevelt was criticized by conservatives for his economic policies, especially the shift in tone from individualism to collectivism with the expansion of the welfare state and regulation of the economy. Those criticisms continued decades after his death. One factor in the revisiting of these issues in later decades was the election of Ronald Reagan in 1980, who opposed the New Deal.",
"title": "Legacy"
},
{
"paragraph_id": 119,
"text": "Roosevelt's home in Hyde Park is now a National Historic Site and home to his Presidential library. Washington, D.C., hosts two memorials to the former president. The largest, the 7+1⁄2-acre (3-hectare) Roosevelt Memorial, is located next to the Jefferson Memorial on the Tidal Basin. A more modest memorial, a block of marble in front of the National Archives building suggested by Roosevelt himself, was erected in 1965. Roosevelt's leadership in the March of Dimes is one reason he is commemorated on the American dime. Roosevelt has also appeared on several U.S. Postage stamps. On April 29, 1945, seventeen days after Roosevelt's death, the carrier USS Franklin D. Roosevelt was launched and served from 1945 to 1977. London's Westminster Abbey also has a stone tablet memorial to President Roosevelt that was unveiled by Attlee and Churchill in 1948. Welfare Island was renamed after Roosevelt in September 1973.",
"title": "Legacy"
}
]
| Franklin Delano Roosevelt, commonly known by his initials FDR, was an American politician and statesman who served as the 32nd president of the United States from 1933 until his death in 1945. He was a member of the Democratic Party and is the only U.S. president to have served more than two terms in office. During his third and fourth terms he was preoccupied with World War II. A member of the prominent Roosevelt family, after attending university, Roosevelt began to practice law in New York City. He was elected a member of the New York State Senate from 1911 to 1913 and was then the assistant secretary of the Navy under President Woodrow Wilson during World War I. Roosevelt was James M. Cox's running mate on the Democratic Party's ticket in the 1920 U.S. presidential election, but Cox lost to Republican nominee Warren G. Harding. In 1921, Roosevelt contracted a paralytic illness that permanently paralyzed his legs. Partly through the encouragement of his wife, Eleanor Roosevelt, he returned to public office as governor of New York from 1929 to 1933, during which he promoted programs to combat the Great Depression besetting the U.S. In the 1932 presidential election, Roosevelt defeated Republican president Herbert Hoover in a landslide. During his first 100 days as president, Roosevelt spearheaded unprecedented federal legislation and directed the federal government during most of the Great Depression, implementing the New Deal in response to the most significant economic crisis in American history. He also built the New Deal coalition, realigning American politics into the Fifth Party System and defining American liberalism throughout the middle third of the 20th century. He created numerous programs to provide relief to the unemployed and farmers while seeking economic recovery with the National Recovery Administration and other programs. He also instituted major regulatory reforms related to finance, communications, and labor, and presided over the end of Prohibition. In 1936, Roosevelt won a landslide reelection with the economy having improved from 1933, but the economy relapsed into a deep recession in 1937 and 1938. He was unable to expand the Supreme Court in 1937, the same year the conservative coalition was formed to block the implementation of further New Deal programs and reforms. Major surviving programs and legislation implemented under Roosevelt include the Securities and Exchange Commission, the National Labor Relations Act, the Federal Deposit Insurance Corporation, and Social Security. In 1940, he ran successfully for reelection, becoming the only American president to serve for more than two terms. With World War II looming after 1938 in addition to the Japanese invasion of China and the aggression of Nazi Germany, Roosevelt gave strong diplomatic and financial support to China as well as the United Kingdom and the Soviet Union while the U.S. remained officially neutral. Following the Japanese attack on Pearl Harbor on December 7, 1941, he obtained a declaration of war on Japan the next day and on Germany and Italy a few days later. He worked closely with other national leaders in leading the Allies against the Axis powers. Roosevelt supervised the mobilization of the American economy to support the war effort and implemented a Europe first strategy. 
He also initiated the development of the world's first atomic bomb and worked with the other Allied leaders to lay the groundwork for the United Nations and other post-war institutions, even coining the term "United Nations". Roosevelt won reelection in 1944 but died in 1945 after his physical health seriously and steadily declined during the war years. Since then, several of his actions have come under substantial criticism, including his ordering of the internment of Japanese Americans in concentration camps. Nonetheless, historical rankings consistently place him as one of the greatest American presidents. | 2001-09-20T21:40:44Z | 2023-12-25T03:16:08Z | [
"Template:Pp",
"Template:Spaced ndash",
"Template:Citation needed",
"Template:Cite news",
"Template:Navboxes",
"Template:Sfn",
"Template:Inflation",
"Template:See also",
"Template:Reflist",
"Template:Cite web",
"Template:IMDb name",
"Template:Pp-move",
"Template:Cite magazine",
"Template:Cite encyclopedia",
"Template:C-SPAN",
"Template:Good article",
"Template:Notelist",
"Template:Cite Power Broker",
"Template:Library resources box",
"Template:Redirect",
"Template:Convert",
"Template:Circa",
"Template:LCCN",
"Template:Refend",
"Template:Sister project links",
"Template:Blockquote",
"Template:Listen",
"Template:Clear",
"Template:Cite journal",
"Template:ISBN",
"Template:Franklin D. Roosevelt",
"Template:Refbegin",
"Template:Gutenberg author",
"Template:Use American English",
"Template:Infobox officeholder",
"Template:Main",
"Template:Cite book",
"Template:Cite video",
"Template:Citation",
"Template:Librivox author",
"Template:Internet Archive author",
"Template:New York Times topic",
"Template:Authority control",
"Template:Short description",
"Template:Use mdy dates",
"Template:Efn",
"Template:Multiple image",
"Template:Further",
"Template:Inflation-year"
]
| https://en.wikipedia.org/wiki/Franklin_D._Roosevelt |