id (stringlengths 40–40) | source (stringclasses, 9 values) | title (stringlengths 2–345) | clean_text (stringlengths 35–1.63M) | raw_text (stringlengths 4–1.63M) | url (stringlengths 4–498) | overview (stringlengths 0–10k) |
---|---|---|---|---|---|---|
d374e827fb36787d279ee6e631dca49a148ec2b6 | wikidoc | Coffin | Coffin
A coffin (also known as a casket in North American English) is a funerary box used in the display and containment of deceased remains – either for burial or cremation.
# Practices
Any box used to bury the dead is a coffin. Use of the word "casket" in this sense began as a euphemism introduced by the undertaker's trade in North America; a "casket" was originally a box for jewelry. Some Americans draw a distinction between "coffins" and "caskets": for these people, a coffin is a tapered hexagonal or octagonal (anthropoidal) box used for burial, while a rectangular burial box with a split lid used for viewing the deceased is called a "casket".
Receptacles for cremated human ashes (sometimes called cremains) are called urns.
A coffin may be buried in the ground directly, placed in a burial vault or cremated.
Above-ground burial takes place in a mausoleum, which may be a large cement building at a cemetery housing hundreds of bodies or a small personal crypt.
Some countries practice one form almost exclusively; in others, it depends on the individual cemetery. The handles and other ornaments (such as doves, stipple crosses, crucifix, masonic symbols etc.) that go on the outside of a coffin are called fittings (sometimes called 'coffin furniture', not to be confused with furniture that is coffin shaped) while organising the inside of the coffin with drapery of some kind is known as "trimming the coffin".
Cultures that practice burial have widely different styles of coffin. In some varieties of Orthodox Judaism, the coffin must be plain, made of wood, and contain no metal parts or adornments; wooden pegs are used instead of nails. In China and Japan, coffins made from the scented, decay-resistant wood of cypress, sugi, thuja and incense-cedar are in high demand. In Africa, elaborate coffins are built in the shapes of various mundane objects, such as automobiles or aeroplanes.
Sometimes coffins are constructed to display the dead body, as in the case of the glass-covered coffin of Haraldskær Woman on display in the Church of Saint Nicolai in Vejle, Denmark.
When a coffin or casket is used to transport a deceased person, it can also be called a pall, a term that also refers to the cloth used to cover the coffin.
# Modern coffins
Today manufacturers offer features that they claim will protect the body. For example, some may offer a protective casket that uses a gasket to seal the casket shut after it is closed for the final time. Many manufacturers offer a warranty on the structural integrity of the coffin. However, no coffin will preserve the body, regardless of whether it is wooden or metal, sealed or unsealed, or whether the deceased was embalmed beforehand. In some cases, a sealed coffin may actually speed up rather than slow down the process of decomposition. An airtight coffin, for example, fosters decomposition by anaerobic bacteria, which results in putrefaction and liquefaction of the body; all putrefied tissue remains inside the container, only to be exposed in the event of an exhumation. A container that allows air to pass in and out, such as a simple wooden box, allows for aerobic decomposition, which results in much less noxious odor and clean skeletonization.
Coffins are made of many materials, including steel, various types of wood, and other materials such as fiberglass. There is now emerging interest in eco-friendly coffins made of purely natural materials such as bamboo.
Coffins are sometimes personalized with college insignia or different head panels to better reflect the deceased's life.
# Cremation coffins
With the resurgence of cremation in the Western world, manufacturers have begun providing options for those who choose cremation. For a direct cremation a cardboard box is sometimes used. Those who wish to have a funeral visitation (sometimes called a viewing) or traditional funeral service will use a coffin of some sort.
Some choose to use a coffin made of wood or other materials like particle board. Others will rent a regular casket for the duration of the services. These caskets have a removable bed and liner which is replaced after each use. There are also rental caskets with an outer shell that looks like a traditional coffin and a cardboard box that fits inside the shell. At the end of the services the inner box is removed and the deceased is cremated inside this box.
# Casket industry
In the United States, a number of companies produce caskets. Some manufacturers do not sell directly to the public, and only work with licensed funeral homes. In that case, the funeral home usually sells the casket to a family for a deceased person as part of the funeral services offered, and in that case the price of the casket is included in the total bill for services rendered, which is often not completely itemized.
Often funeral homes will have a small showroom to present families with the available caskets that could be used for a deceased family member. In many modern funeral homes the showroom will consist of sample pieces that show the end pieces of each type of coffin that can be used. They also include samples of the lining and other materials. This allows funeral homes to showcase a larger number of coffin styles without the need for a larger showroom. Examples of such showrooms can be seen on the A&E show Family Plots, and the HBO drama Six Feet Under.
Under a U.S. federal law, 16 CFR Part 453 (known as the Funeral Rule), if a family provides a casket they purchased elsewhere, the establishment is required to accept the casket and use it in the services. If the casket is delivered direct to the funeral home from the manufacturer or store, they are required to accept delivery of the casket. The funeral home may not add any extra charges or fees to the overall bill if a family decides to purchase a casket elsewhere.
# Unusual coffins
Custom coffins are occasionally created and some companies also make set ranges with non-traditional designs.
These include paintings of peaceful tropical scenes, sea-shells, sunsets and cherubs. Some manufacturers have designed them to look like gym carry bags, guitar cases, cigar humidors, and even yellow dumpster bins. Other coffins are left deliberately blank so that friends and family can inscribe final wishes and thoughts to the deceased upon them. The rock band Kiss has made a coffin called the Kiss Kasket for their most diehard fans; Dimebag Darrell, guitarist of both Pantera and Damageplan, was buried in one.
In Taiwan, coffins made of crushed oyster shells were used in the 18th and 19th centuries.
In medieval Japan, round coffins that resembled barrels in shape were used; they were usually made by coopers. In the 1961 Kurosawa film Yojimbo, the protagonist, anticipating a shortage of coffins due to an impending battle that he himself has planned, persuades several coopers to start making more coffins.
# Coffins built by inmates
Before 1995, some prisons still used cardboard coffins to bury inmates who died while incarcerated. In 1995, inmates at the Louisiana State Penitentiary asked to start building coffins for their dead. They also built a horse-drawn carriage to pull the coffin to the cemetery. | Coffin
A coffin (also known as a casket in North American English) is a funerary box used in the display and containment of deceased remains – either for burial or cremation.
# Practices
Any box used to bury the dead in is a coffin. Use of the word "casket" in this sense began as a euphemism introduced by the undertaker's trade in North America; a "casket" was originally a box for jewelry.[1] Some AmericansTemplate:Who draw a distinction between "coffins" and "caskets"; for these people, a coffin is a tapered hexagonal or octagonal (also considered to be anthropodial in shape) box used for a burial. A rectangular burial box with a split lid used for viewing the deceased is called a "casket" as seen in the picture above.
Receptacles for cremated human ashes (sometimes called cremains) are called urns.
A coffin may be buried in the ground directly, placed in a burial vault or cremated.
The above ground burial is in a mausoleum. Often it is a large cement building at a cemetery, housing hundreds of bodies, or a small personal crypt.
Some countries practice one form almost exclusively;[citation needed] in others, it depends on the individual cemetery. The handles and other ornaments (such as doves, stipple crosses, crucifix, masonic symbols etc.) that go on the outside of a coffin are called fittings (sometimes called 'coffin furniture', not to be confused with furniture that is coffin shaped) while organising the inside of the coffin with drapery of some kind is known as "trimming the coffin".
Cultures that practice burial have widely different styles of coffin. In some varieties of orthodox JudaismTemplate:Specify, the coffin must be plain, made of wood, and contain no metal parts nor adornments. These coffins use wooden pegs instead of nails. In China and Japan, coffins made from the scented, decay-resistant wood of cypress, sugi, thuja and incense-cedar are in high demand.[citation needed] In Africa, elaborate coffins are built in the shapes of various mundane objects, like automobiles or aeroplanes.[citation needed]
Sometimes coffins are constructed to display the dead body, as in the case of the glass-covered coffin of Haraldskær Woman on display in the Church of Saint Nicolai in Vejle, Denmark.
When a coffin or casket is used to transport a deceased person, it can also be called a pall, a term that also refers to the cloth used to cover the coffin.
# Modern coffins
Today manufacturers offer features that they claim will protect the body. For example, some may offer a protective casket that uses a gasket to seal the casket shut after the coffin is closed for the final time. Many manufacturers offer a warranty on the structural integrity of the coffin. However, no coffin will preserve the body, regardless of whether it is a wooden or metal coffin, a sealed casket, or if the deceased was embalmed beforehand. In some cases, a sealed coffin may actually speed up rather than slow down the process of decomposition. An airtight coffin, for example, fosters decomposition by anaerobic bacteria, which results in a putrefied liquification of the body, and all putrefied tissue remains inside the container, only to be exposed in the event of an exhumation. A container that allows air molecules to pass in and out, such as a simple wooden box, allows for aerobic decomposition that results in much less noxious odor and clean skeletonization.
Coffins are made of many materials, including steel, various types of wood, and other materials such as fiberglass. There is now emerging interest in eco-friendly coffins made of purely natural materials such as bamboo.[2]
Coffins are sometimes personalized to offer college insignia or different head panels to better reflect the deceased's life choices.
# Cremation coffins
With the resurgence of cremation in the Western world, manufacturers have begun providing options for those who choose cremation. For a direct cremation a cardboard box is sometimes used. Those who wish to have a funeral visitation (sometimes called a viewing) or traditional funeral service will use a coffin of some sort.
Some choose to use a coffin made of wood or other materials like particle board. Others will rent a regular casket for the duration of the services. These caskets have a removable bed and liner which is replaced after each use. There are also rental caskets with an outer shell that looks like a traditional coffin and a cardboard box that fits inside the shell. At the end of the services the inner box is removed and the deceased is cremated inside this box.
# Casket industry
In the United States, a number of companies produce caskets. Some manufacturers do not sell directly to the public, and only work with licensed funeral homes. In that case, the funeral home usually sells the casket to a family for a deceased person as part of the funeral services offered, and in that case the price of the casket is included in the total bill for services rendered, which is often not completely itemized.
Often funeral homes will have a small showroom to present families with the available caskets that could be used for a deceased family member. In many modern funeral homes the showroom will consist of sample pieces that show the end pieces of each type of coffin that can be used. They also include samples of the lining and other materials. This allows funeral homes to showcase a larger number of coffin styles without the need for a larger showroom. Examples of such showrooms can be seen on the A&E show Family Plots, and the HBO drama Six Feet Under.
Under a U.S. federal law, 16 CFR Part 453 (known as the Funeral Rule), if a family provides a casket they purchased elsewhere, the establishment is required to accept the casket and use it in the services. If the casket is delivered direct to the funeral home from the manufacturer or store, they are required to accept delivery of the casket. The funeral home may not add any extra charges or fees to the overall bill if a family decides to purchase a casket elsewhere.
# Unusual coffins
Custom coffins are occasionally created and some companies also make set ranges with non-traditional designs.
These include paintings of peaceful tropical scenes, sea-shells, sunsets and cherubs. Some manufacturers have designed them to look like gym carry bags, guitar cases, cigar humidors, and even yellow dumpster bins. Other coffins are left deliberately blank so that friends and family can inscribe final wishes and thoughts to the deceased upon them. The rock band Kiss has made a coffin called the Kiss Kasket for their most diehard fans; Dimebag Darrell, guitarist of both Pantera and Damageplan, was buried in one.
In Taiwan, coffins made of crushed oyster shells were used in the 18th and 19th centuries.
In medieval Japan, round coffins that resembled barrels in shape were used; they were usually made by coopers. In the 1961 Kurosawa film Yojimbo, the protagonist, anticipating a shortage of coffins due to an impending battle that he himself has planned, persuades several coopers to start making more coffins.
# Coffins built by Inmates
Before 1995 some prisons still used cardboard coffins to bury inmates who died while incarcerated. At the Louisiana State Penitentiary in 1995 inmates asked to start building coffins for their dead. They also built a horse drawn carriage to pull the coffin to the cemetery.[3] | https://www.wikidoc.org/index.php/Coffin | |
61d6dc8469ac20e02daf2645204e43df304f5762 | wikidoc | Coilin | Coilin
Coilin is a protein that in humans is encoded by the COIL gene. Coilin got its name from the coiled shape of the Cajal bodies (CBs) in which it is found. It was first identified using human autoimmune serum.
# Function
Coilin protein is one of the main molecular components of Cajal bodies (CBs). Cajal bodies are nuclear suborganelles of varying number and composition that are involved in the post-transcriptional modification of small nuclear and small nucleolar RNAs. In addition to its structural role, coilin acts as glue to connect the CB to the nucleolus. The N-terminus of the coilin protein directs its self-oligomerization, while the C-terminus influences the number of nuclear bodies assembled per cell. Differential methylation and phosphorylation of coilin likely influence its localization among nuclear bodies and the composition and assembly of Cajal bodies. This gene has pseudogenes on chromosome 4 and chromosome 14.
To study CBs, coilin can be fused with GFP (green fluorescent protein) to form a coilin-GFP hybrid protein. The hybrid protein can then be used to locate CBs under a microscope, usually near the nucleolus of the cell. Other proteins that make up the CB include snRNPs and nucleolar snoRNPs.
Coilin has been shown to interact with ataxin 1, nucleolar phosphoprotein p130, SMN, and SNRPB. | Coilin
Coilin is a protein that in humans is encoded by the COIL gene.[1][2] Coilin got its name from the coiled shape of the CB in which it is found. It was first identified using human autoimmune serum.
# Function
Coilin protein is one of the main molecular components of Cajal bodies (CBs). Cajal bodies are nuclear suborganelles of varying number and composition that are involved in the post-transcriptional modification of small nuclear and small nucleolar RNAs. In addition to its structural role, coilin acts as glue to connect the CB to the nucleolus. The N-terminus of the coilin protein directs its self-oligomerization while the C-terminus influences the number of nuclear bodies assembled per cell. Differential methylation and phosphorylation of coilin likely influences its localization among nuclear bodies and the composition and assembly of Cajal bodies. This gene has pseudogenes on chromosome 4 and chromosome 14.[2]
To study CBs, coilin can be combined with GFP (Green Fluorescent Protein) to form Coilin-GFP hybrid protein. The hybrid protein can then be used to locate CBs underneath a microscope, usually near the nucleolus of the cell. Other proteins that make up the CB include snRNPs and nucleolar snoRNPs.
Coilin has been shown to interact with ataxin 1,[3][4] nucleolar phosphoprotein p130,[5] SMN,[6][7] and SNRPB.[7] | https://www.wikidoc.org/index.php/Coilin | |
18ce9176d5c6bb6e8998dbd9d05a52bc3e3cd076 | wikidoc | Colugo | Colugo
Colugos are arboreal gliding mammals found in South-east Asia. There are just two extant species, each in its own genus, which make up the entire family Cynocephalidae and order Dermoptera. Though they are the most capable of all mammal gliders, they cannot actually fly. They are also known as cobegos or flying lemurs (misleadingly, since they are not lemurs and cannot fly).
# Characteristics
Colugos are fairly large for a tree-dwelling mammal: at about 35 to 40 cm (14 to 16 in) in length and 1 or 2 kilograms (2 or 4 pounds) in weight, they are comparable to a medium-sized possum or a very large squirrel. They have moderately long, slender limbs of equal length front and rear, a medium-length tail, and a relatively light build. The head is small, with large, front-focused eyes for excellent binocular vision, and small, rounded ears. When born, a colugo weighs only about 35 g (1.2 oz) and does not reach adult size for 2–3 years.
Their most distinctive feature, however, is the membrane of skin that extends between their limbs and gives them the ability to glide long distances between trees. Of all the gliding mammals, the colugos have the most extensive adaptation to flight. Their gliding membrane, or patagium, is as large as is geometrically possible: it runs from the shoulder blades to the fore-paw, from the tip of the rear-most finger to the tip of the toes, and from the hind legs to the tip of the tail; unlike in other known gliding mammals, even the spaces between the fingers and toes are webbed to increase the total surface area, as in the wings of bats. As a result, colugos were traditionally considered to be close to the ancestors of bats, but are now usually seen as the closest living relatives of primates.
They are surprisingly clumsy climbers. Lacking opposable thumbs and not being especially strong, they proceed upwards in a series of slow hops, gripping onto the bark of trees with their small, sharp claws. They are as comfortable hanging underneath a branch as sitting on top of it. In the air, however, they are very capable, and can glide as far as 70 metres (230 feet) from one tree to another with minimal loss of height.
Colugos are shy, nocturnal, and restricted to the tropical rainforests of Southeast Asia. In consequence, remarkably little is known about their habits, although they are believed to be generally solitary, except for mothers nursing young. They are certainly herbivores, and are thought to eat mostly leaves, shoots, flowers and sap, and probably fruit as well. They have well-developed stomachs and long intestines, capable of extracting nutriment from leaves.
The incisor teeth of colugos are highly distinctive; they are comb-like in shape, with up to twenty tines on each tooth. The second upper incisors have two roots, another unique feature among mammals. The function of these adaptations is not currently known.
Although they are placental mammals, colugos are marsupial-like in their breeding habits. The young are born after just 60 days of gestation in a tiny and undeveloped form, and spend their first six months or so of life clinging to the mother's belly. To protect them and transport them she curls her tail up to fold the gliding membrane into a warm, secure quasi-pouch. Breeding is fairly slow as the young do not reach full size until they are two or three years old.
# Status
Both species are threatened by habitat destruction, and the Philippine Flying Lemur is classified by the IUCN as vulnerable. In addition to the ongoing clearing of its rainforest habitat, it is hunted for its meat and fur. It is also hunted by the gravely endangered Philippine Eagle: some studies suggest that colugos account for 90% of the eagle's diet. It is not known how the diurnal eagles catch so many of the nocturnal colugos, which are thought to spend the greater part of the day curled up in tree hollows or hanging inconspicuously underneath a branch.
# Classification
- ORDER DERMOPTERA
  - Family Cynocephalidae
    - Cynocephalus
      - Philippine Flying Lemur, Cynocephalus volans
    - Galeopterus
      - Sunda Flying Lemur, Galeopterus variegatus
    - †Dermotherium
      - †Dermotherium major
      - †Dermotherium chimaera
The Mixodectidae appear to be fossil Dermoptera. However, although other Paleogene mammals have been interpreted as related to dermopterans, the evidence for this is uncertain and many of them are no longer interpreted as gliding mammals. At present, the fossil record of definitive dermopterans is limited to two species of the Eocene and Oligocene cynocephalid genus Dermotherium.
Recent molecular phylogenetic studies have demonstrated that colugos belong to the clade Euarchonta along with the treeshrews (order Scandentia) and the primates. In this taxonomy, the Euarchonta are sister to the Glires (lagomorphs and rodents), and the two groups are combined into the clade Euarchontoglires.
## Synonyms
The names Colugidae, Galeopithecidae and Galeopteridae are synonyms for Cynocephalidae. Colugo, Dermopterus, Galeolemur, Galeopithecus, Galeopus, and Pleuropterus are synonyms for Cynocephalus. | Colugo
Colugos are arboreal gliding mammals found in South-east Asia. There are just two extant species, each in its own genus,[1] which make up the entire family Cynocephalidae and order Dermoptera. Though they are the most capable of all mammal gliders, they cannot actually fly. They are also known as cobegos or flying lemurs (misleadingly, since they are not lemurs and cannot fly).
# Characteristics
Colugos are fairly large for a tree-dwelling mammal: at about 35 to 40 cm (14 to 16 in) in length and 1 or 2 kilograms (2 or 4 pounds) in weight, they are comparable to a medium-sized possum or a very large squirrel. They have moderately long, slender limbs of equal length front and rear, a medium-length tail, and a relatively light build. The head is small, with large, front-focused eyes for excellent binocular vision, and small, rounded ears. When born, a colugo weighs only about 35 g (1.2 oz) and does not reach adult size for 2–3 years.[2]
Their most distinctive feature, however, is the membrane of skin that extends between their limbs and gives them the ability to glide long distances between trees. Of all the gliding mammals, the colugos have the most extensive adaptation to flight. Their gliding membrane, or patagium, is as large as is geometrically possible: it runs from the shoulder blades to the fore-paw, from the tip of the rear-most finger to the tip of the toes, and from the hind legs to the tip of the tail;[3] unlike in other known gliding mammals, even the spaces between the fingers and toes are webbed to increase the total surface area, as in the wings of bats. As a result, colugos were traditionally considered to be close to the ancestors of bats, but are now usually seen as the closest living relatives of primates.
They are surprisingly clumsy climbers. Lacking opposable thumbs and not being especially strong, they proceed upwards in a series of slow hops, gripping onto the bark of trees with their small, sharp claws. They are as comfortable hanging underneath a branch as sitting on top of it. In the air, however, they are very capable, and can glide as far as 70 metres (230 feet) from one tree to another with minimal loss of height.
Colugos are shy, nocturnal, and restricted to the tropical rainforests of Southeast Asia. In consequence, remarkably little is known about their habits, although they are believed to be generally solitary, except for mothers nursing young. They are certainly herbivores, and are thought to eat mostly leaves, shoots, flowers and sap, and probably fruit as well. They have well-developed stomachs and long intestines, capable of extracting nutriment from leaves.
The incisor teeth of colugos are highly distinctive; they are comb-like in shape, with up to twenty tines on each tooth. The second upper incisors have two roots, another unique feature among mammals[3]. The function of these adaptations is not currently known. The dental formula of colugos is:Template:Dentition2
Although they are placental mammals, colugos are marsupial-like in their breeding habits. The young are born after just 60 days of gestation in a tiny and undeveloped form, and spend their first six months or so of life clinging to the mother's belly. To protect them and transport them she curls her tail up to fold the gliding membrane into a warm, secure quasi-pouch. Breeding is fairly slow as the young do not reach full size until they are two or three years old[3].
# Status
Both species are threatened by habitat destruction, and the Philippine Flying Lemur is classified by the IUCN as vulnerable. In addition to the ongoing clearing of its rainforest habitat, it is hunted for its meat and fur. It is also hunted by the gravely endangered Philippine Eagle: some studies suggest that colugos account for 90% of the eagle's diet. It is not known how the diurnal eagles catch so many of the nocturnal colugos, which are thought to spend the greater part of the day curled up in tree hollows or hanging inconspicuously underneath a branch.
# Classification
- ORDER DERMOPTERA
  - Family Cynocephalidae
    - Cynocephalus
      - Philippine Flying Lemur, Cynocephalus volans
    - Galeopterus
      - Sunda Flying Lemur, Galeopterus variegatus
    - †Dermotherium
      - †Dermotherium major
      - †Dermotherium chimaera
The Mixodectidae appear to be fossil Dermoptera. However, although other Paleogene mammals have been interpreted as related to dermopterans, the evidence for this is uncertain and many of them are no longer interpreted as gliding mammals. At present, the fossil record of definitive dermopterans is limited to two species of the Eocene and Oligocene cynocephalid genus Dermotherium.[4]
Recent molecular phylogenetic studies have demonstrated that colugos belong to the clade Euarchonta along with the treeshrews (order Scandentia) and the primates. In this taxonomy, the Euarchonta are sister to the Glires (lagomorphs and rodents), and the two groups are combined into the clade Euarchontoglires.[5]
Template:Clade
## Synonyms
The names Colugidae, Galeopithecidae and Galeopteridae are synonyms for Cynocephalidae. Colugo, Dermopterus, Galeolemur, Galeopithecus, Galeopus, and Pleuropterus are synonyms for Cynocephalus. | https://www.wikidoc.org/index.php/Colugo | |
66bd4f84f3e4a8d009b0d240e554c1e2d1dd8168 | wikidoc | Condom | Condom
# Overview
A condom is a device most commonly used during sexual intercourse. It is put on a man's erect penis and physically blocks ejaculated semen from entering the body of a sexual partner. Condoms are used to prevent pregnancy and transmission of sexually transmitted diseases (STDs—such as gonorrhea, syphilis, and HIV). Because condoms are waterproof, elastic, and durable, they are also used in a variety of secondary applications. These range from creating waterproof microphones to protecting rifle barrels from clogging.
Most condoms are made from latex, but some are made from other materials. A female condom is also available. As a method of contraception, male condoms have the advantage of being inexpensive, easy to use, having few side-effects, and of offering protection against sexually transmitted diseases. With proper knowledge and application technique—and use at every act of intercourse—users of male condoms experience a 2% per-year pregnancy rate.
Condoms have been used for over 500 years. In the early twentieth century, with the invention of disposable latex condoms, they became one of the most popular methods of contraception. While widely accepted in modern times, condoms have generated some controversy. Improper disposal of condoms contributes to litter problems, and the Roman Catholic Church generally opposes condom use.
# History
## Antiquity through Middle Ages
Whether condoms were used in ancient civilizations is debated by archaeologists and historians. The oldest claimed representation of condom use is a painting in the French cave Grotte des Combarrelles; the paintings in this cave are 12,000–15,000 years old. Societies in the ancient civilizations of Egypt, Greece, and Rome preferred small families and are known to have practiced a variety of birth control methods. However, these societies viewed birth control as a woman's responsibility, and the only well-documented contraception methods were female-controlled devices (both possibly effective, such as pessaries, and ineffective, such as amulets). The writings of these societies contain "veiled references" to male-controlled contraceptive methods that might have been condoms, but most historians interpret them as referring to coitus interruptus or anal sex.
The loincloths worn by Egyptian and Greek laborers were very spare, sometimes consisting of little more than a covering for the glans of the penis. Records of these types of loincloths being worn by men in higher classes have made some historians speculate they were worn during intercourse; others, however, are doubtful of such interpretations. Historians may also cite one legend of Minos, related by Antoninus Liberalis in 150 AD, as suggestive of condom use in ancient societies. This legend describes a curse that caused Minos' semen to contain serpents and scorpions. To protect his sexual partner from these animals, Minos used a goat's bladder as a female condom.
Contraceptives fell out of use in Europe after the decline of the Roman Empire in the 400s; the use of contraceptive pessaries, for example, is not documented again until the fifteenth century. If condoms were used during the Roman Empire, knowledge of them may have been lost during its decline. A contributing factor to the loss of contraceptive knowledge was the rise of the Christian religion, which considered all forms of birth control to be sins. In the writings of Muslims and Jews during the Middle Ages, there are some references to attempts at male-controlled contraception, including suggestions to cover the penis in tar or soak it in onion juice. Some of these writings might describe condom use, but they are "oblique", "veiled", and "vague".
## Renaissance
In 16th century Italy, Gabriele Falloppio authored the first-known published description of condom use for disease prevention. He recommended soaking cloth sheaths in a chemical solution and allowing them to dry prior to use. He claimed to have performed an experimental trial of the linen sheath on 1100 men. His report of the experiment, published two years after his death, indicated protection against syphilis.
The oldest condoms found (rather than just pictures or descriptions) are from 1640, discovered in Dudley Castle in England. They were made of animal intestine, and it is believed they were used for STD prevention. In 19th century Japan, both leather condoms and condoms made of tortoise shells or horns were available. Similar devices made from oiled silk paper have also been described in China.
The often-reported invention of the condom by "Dr. Condom" or the "Earl of Condom" is believed to be fallacious (see the etymology section below). However, in the 18th century there are numerous literary references to condom use and sales, including in the memoirs of Giacomo Casanova.
## 19th century to present
The rubber vulcanization process was patented by Charles Goodyear in 1844, and the first rubber condom was produced in 1855. These early rubber condoms were 1–2 mm thick and had seams down the sides. Although they were reusable, they were also expensive.
Distribution of condoms in the United States was limited by passage of the Comstock Act in 1873. This law prohibited transport through the postal service of any instructional material or devices intended to prevent pregnancy. Condoms were available by prescription, although legally they were only supposed to be prescribed to prevent disease rather than pregnancy. The Comstock Act remained in force until it was largely overturned by the U.S. Supreme Court in 1936.
In 1912, a German named Julius Fromm developed a new manufacturing technique for condoms: dipping glass molds into the raw rubber solution. This enabled the production of thinner condoms with no seams. Fromm's Act was the first branded line of condoms, and Fromms is still a popular line of condoms in Germany today. By the 1930s, the manufacturing process had improved to produce single-use condoms almost as thin and inexpensive as those currently available.
Condoms were not made available to U.S. soldiers in World War I, and a significant number of returning soldiers carried sexually transmitted infections. During World War II, however, condoms were heavily promoted to soldiers, with one film exhorting "Don't forget — put it on before you put it in." In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to be utilized to this day.
## Etymology of the term
Etymological theories for the word "condom" abound. It has been claimed to be from the Latin word condon, meaning receptacle. One author argues that "condom" is derived from the Latin word condamina, meaning house. It has also been speculated to be from the Italian word guantone, derived from guanto, meaning glove.
Folk etymology claims that the word "condom" is derived from a purported "Dr. Condom" or "Quondam", who made the devices for King Charles II of England. There is no verifiable evidence that any such "Dr. Condom" existed. It is also hypothesized that a British army officer named Cundum popularized the device between 1680 and 1717.
William E. Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology." Modern dictionaries may also list the etymology as "unknown".
Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters. Additionally, condoms may be referred to using the manufacturer's name.
# Varieties
Most condoms have a reservoir tip or teat end, making it easier to accommodate the man's ejaculate. Condoms come in different sizes, from oversized to snug and they also come in a variety of surfaces intended to stimulate the user's partner. Condoms are usually supplied with a lubricant coating to facilitate penetration, while flavoured condoms are principally used for oral sex. As mentioned above, most condoms are made of latex, but polyurethane and lambskin condoms are also widely available.
## Latex
Latex has outstanding elastic properties: Its tensile strength exceeds 30 MPa, and latex condoms may be stretched in excess of 800% before breaking. In 1990 the ISO set standards for condom production (ISO 4074, Natural latex rubber condoms), and the EU followed suit with its CEN standard (Directive 93/42/EEC concerning medical devices). Every latex condom is tested for holes with an electrical current. If the condom passes, it is rolled and packaged. In addition, a portion of each batch of condoms is subject to water leak and air burst testing.
Latex condoms used with oil-based lubricants (e.g. vaseline) are likely to slip off due to loss of elasticity caused by the oils.
Some latex condoms are lubricated by the manufacturer with a small amount of nonoxynol-9, a spermicidal chemical. According to Consumer Reports, spermicidally lubricated condoms have no additional benefit in preventing pregnancy, have a shorter shelf life, and may cause urinary-tract infections in women. In contrast, application of separately packaged spermicide is believed to increase the contraceptive efficacy of condoms.
Nonoxynol-9 was once believed to offer additional protection against STDs (including HIV) but recent studies have shown that, with frequent use, nonoxynol-9 may increase the risk of HIV transmission. The World Health Organization says that spermicidally lubricated condoms should no longer be promoted. However, they recommend using a nonoxynol-9 lubricated condom over no condom at all. As of 2005, nine condom manufacturers have stopped manufacturing condoms with nonoxynol-9, Planned Parenthood has discontinued the distribution of condoms so lubricated, and the Food and Drug Administration has proposed a warning regarding this issue.
## Polyurethane
Polyurethane condoms tend to be the same width and thickness as latex condoms, with most polyurethane condoms between 0.04 mm and 0.07 mm thick. Polyurethane is also the material of many female condoms.
Polyurethane can be considered better than latex in several ways: it conducts heat better than latex, is not as sensitive to temperature and ultraviolet light (and so has less rigid storage requirements and a longer shelf life), can be used with oil-based lubricants, is less allergenic than latex, and does not have an odor. Polyurethane condoms have gained FDA approval for sale in the United States as an effective method of contraception and HIV prevention, and under laboratory conditions have been shown to be just as effective as latex for these purposes.
However, polyurethane condoms are less elastic than latex ones, may be more likely to slip or break, and are more expensive.
## Lambskin
Condoms made from one of the oldest condom materials, labeled "lambskin" (made from lamb intestines), are still available. They transmit body warmth and tactile sensation better than synthetic condoms and are less allergenic than latex. However, there is an increased risk of transmitting STDs compared to latex because of pores in the material, which are thought to be large enough to allow infectious agents to pass through while still blocking the passage of sperm.
## Experimental
The Invisible Condom, developed at Université Laval in Québec, Canada, is a gel that hardens upon increased temperature after insertion into the vagina or rectum. In the lab, it has been shown to effectively block HIV and herpes simplex virus. The barrier breaks down and liquefies after several hours. The invisible condom is in the clinical trial phase, and has not yet been approved for use.
As reported on Swiss television news Schweizer Fernsehen on November 29, 2006, the German scientist Jan Vinzenz Krause of the Institut für Kondom-Beratung ("Institute for Condom Consultation") in Germany recently developed a spray-on condom and is test-marketing it. Krause says that one of the advantages to his spray-on condom, which is reported to dry in about 5 seconds, is that it is perfectly formed to each penis.
## Collection condom
A collection condom is used to collect semen for fertility treatments or sperm analysis. These condoms are designed to maximize sperm life.
# Effectiveness
## In preventing pregnancy
The effectiveness of condoms, as of most forms of contraception, can be assessed in two ways. Perfect use, or method effectiveness, rates include only people who use condoms properly and consistently. Actual use, or typical use, effectiveness rates cover all condom users, including those who use condoms improperly, inconsistently, or both. Rates are generally presented for the first year of use. Most commonly the Pearl Index is used to calculate effectiveness rates, but some studies use decrement tables.
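As a rough illustration of how such rates are computed, the sketch below implements the standard Pearl Index formula (unintended pregnancies per 100 woman-years of exposure). It is not part of the original article; the function name and the sample figures are purely illustrative.

```python
# Illustrative sketch: the Pearl Index expresses contraceptive failure
# as the number of unintended pregnancies per 100 woman-years of exposure.
def pearl_index(pregnancies: int, women: int, months_of_use: float) -> float:
    """Return pregnancies per 100 woman-years: (pregnancies * 1200) / woman-months."""
    woman_months = women * months_of_use
    return pregnancies * 1200 / woman_months

# Example: 2 pregnancies among 100 couples over 12 months of consistent use
# gives a Pearl Index of 2.0, matching the ~2% perfect-use rate cited in this article.
print(pearl_index(pregnancies=2, women=100, months_of_use=12))  # -> 2.0
```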
The typical use pregnancy rate among condom users varies depending on the population being studied, ranging from 10–18% per year. The perfect use pregnancy rate of condoms is 2% per year. Condoms may be combined with other forms of contraception (such as spermicide) for greater protection.
Several factors account for typical use effectiveness being lower than perfect use effectiveness:
- mistakes on the part of those providing instructions on how to use the method
- mistakes on the part of the user
- conscious user non-compliance with instructions.
For instance, someone might be given incorrect information on what lubricants are safe to use with condoms, mistakenly put the condom on improperly, or simply not bother to use a condom.
## In preventing STDs
Condoms are widely recommended for the prevention of sexually transmitted diseases (STDs). They have been shown to be effective in reducing infection rates in both men and women. While not perfect, the condom is effective at reducing the transmission of HIV, genital herpes, genital warts, syphilis, chlamydia, gonorrhea, and other diseases.
According to a 2000 report by the National Institutes of Health, correct and consistent use of latex condoms reduces the risk of HIV/AIDS transmission by approximately 85% relative to risk when unprotected, putting the seroconversion rate (infection rate) at 0.9 per 100 person-years with condom, down from 6.7 per 100 person-years. The same review also found condom use significantly reduces the risk of gonorrhea for men.
A 2006 study reports that proper condom use decreases the risk of transmission of human papillomavirus by approximately 70%. Another study in the same year found consistent condom use was effective at reducing transmission of herpes simplex virus-2, also known as genital herpes, in both men and women.
Although a condom is effective in limiting exposure, some disease transmission may occur even with a condom. Infectious areas of the genitals, especially when symptoms are present, may not be covered by a condom, and as a result, some diseases can be transmitted by direct contact. The primary effectiveness issue with using condoms to prevent STDs, however, is inconsistent use.
## Causes of failure
Condom users may experience slipping off the penis after ejaculation, breakage due to faulty methods of application or physical damage (such as tears caused when opening the package), or breakage or slippage due to latex degradation (typically from being past the expiration date or being stored improperly). The rate of breakage is between 0.4% and 2.3%, while the rate of slippage is between 0.6% and 1.3%. Even if no breakage or slippage is observed, 1–2% of women will test positive for semen residue after intercourse with a condom.
Different modes of condom failure result in different levels of semen exposure. If a failure occurs during application, the damaged condom may be disposed of and a new condom applied before intercourse begins - such failures generally pose no risk to the user. One study found that semen exposure from a broken condom was about half that of unprotected intercourse; semen exposure from a slipped condom was about one-fifth that of unprotected intercourse.
Standard condoms will fit almost any penis, although many condom manufacturers offer "snug" or "magnum" sizes. Some studies have associated larger penises and smaller condoms with increased breakage and decreased slippage rates (and vice versa), but other studies have been inconclusive.
Experienced condom users are significantly less likely to have a condom slip or break compared to first-time users, although users who experience one slippage or breakage are at increased risk of a second such failure. An article in Population Reports suggests that education on condom use reduces behaviors that increase the risk of breakage and slippage. A Family Health International publication also offers the view that education can reduce the risk of breakage and slippage, but emphasizes that more research needs to be done to determine all of the causes of breakage and slippage.
Among couples that intend condoms to be their form of birth control, pregnancy may occur when the couple does not use a condom. The couple may have run out of condoms, or be traveling and not have a condom with them, or simply dislike the feel of condoms and decide to "take a chance." This type of behavior is the primary cause of typical use failure (as opposed to method or perfect use failure).
Another possible cause of condom failure is sabotage. One motive is to have a child against a partner's wishes or consent. Some commercial sex workers report clients sabotaging condoms in retaliation for being coerced into condom use. Placing pinholes in the tip of the condom is believed to significantly reduce its effectiveness.
# Female condoms
"Female condoms" or "femidoms" are also available. They are larger and wider than male condoms but equivalent in length. They have a flexible ring-shaped opening, and are designed to be inserted into the vagina. They also contain an inner ring which aids insertion and helps keep the condom from sliding out of the vagina during coitus. One line of female condoms is made from polyurethane or nitrile polymer. A competing manufacturer makes a line of female condoms out of latex. The latex female condom has been available for several years in Africa, Asia, and South America, although one more clinical trial is required before it can be submitted for FDA approval in the United States.
# Use
Male condoms are usually packaged inside a foil wrapper, in a rolled-up form, and are designed to be applied to the tip of the penis and then unrolled over the erect penis. After use, it is recommended the condom be wrapped in tissue or tied in a knot, then disposed of in a trash receptacle.
Some couples find that putting on a condom interrupts sex, although others incorporate condom application as part of their foreplay. Some men and women find the physical barrier of a condom dulls sensation. Advantages of dulled sensation can include prolonged erection and delayed ejaculation; disadvantages might include a loss of some sexual excitement.
## Prevalence
The prevalence of condom use varies greatly between countries. Japan has the highest rate of condom usage in the world, with condoms accounting for almost 80% of contraceptive use. In the average developed country, 22% of contraceptive users rely on condoms as their primary method of birth control. In the average less-developed country, only 5-6% of contraceptive users choose condoms. In a few countries, such as Somalia, condoms are illegal.
## Role in sex education
Condoms are often used in sexual education programs, because they have the capability to reduce the chances of pregnancy and the spread of some sexually transmitted diseases when used correctly. A recent American Psychological Association (APA) press release supported the inclusion of information about condoms in sex education, saying "comprehensive sexuality education programs... discuss the appropriate use of condoms", and "promote condom use for those who are sexually active."
In the United States, teaching about condoms in public schools is opposed by some religious organizations. Planned Parenthood, which advocates family planning and sexual education, argues that no studies have shown abstinence-only programs to result in delayed intercourse, and cites surveys showing that 75% of American parents want their children to receive comprehensive sexuality education including condom use.
## Infertility treatment
Common procedures in infertility treatment such as semen analysis and intrauterine insemination (IUI) require collection of semen samples. These are most commonly obtained through masturbation, but an alternative to masturbation is use of a special collection condom to collect semen during sexual intercourse.
Collection condoms are made from silicone or polyurethane, as latex is somewhat harmful to sperm. Many men prefer collection condoms to masturbation, and some religions prohibit masturbation entirely. Also, compared with samples obtained from masturbation, semen samples from collection condoms have higher total sperm counts, sperm motility, and percentage of sperm with normal morphology. For this reason, they are believed to give more accurate results when used for semen analysis, and to improve the chances of pregnancy when used in procedures such as intracervical or intrauterine insemination. Adherents of religions that prohibit contraception, such as Catholicism, may use collection condoms with holes pricked in them.
Condom therapy is sometimes prescribed to infertile couples when the female has high levels of antisperm antibodies. The theory is that preventing exposure to her partner's semen will lower her level of antisperm antibodies, and thus increase her chances of pregnancy when condom therapy is discontinued. However, condom therapy has not been shown to increase subsequent pregnancy rates.
## Other uses
Condoms excel as multipurpose containers because they are waterproof, elastic, durable, and will not arouse suspicion if found. Ongoing military utilization begun during World War II includes:
- Tying a non-lubricated condom around the muzzle of the rifle barrel in order to prevent barrel fouling by keeping out detritus.
- The OSS used condoms for a plethora of applications, from storing corrosive fuel additives and wire garrotes (with the T-handles removed) to holding the acid component of a self-destructing film canister, to finding use in improvised explosives.
- Navy SEALs have used doubled condoms, sealed with neoprene cement, to protect non-electric firing assemblies for underwater demolitions—leading to the term "Dual Waterproof Firing Assemblies."
Other uses of condoms include:
- Condoms can be used to hold water in emergency survival situations.
- Condoms have also been used in many cases to smuggle cocaine and other drugs across borders and into prisons by filling the condom with drugs, tying it in a knot and then either swallowing it or inserting it into the rectum. These methods are very dangerous; if the condom breaks, the drugs inside can cause an overdose.
- In Soviet gulags, condoms were used to smuggle alcohol into the camps by prisoners who worked outside during daylight. While outside, the prisoner would ingest an empty condom attached to a thin piece of rubber tubing, the end of which was wedged between his teeth. The smuggler would then use a syringe to fill the tubing and condom with up to three litres of raw alcohol, which the prisoner would then smuggle back into the camp. When back in the barracks, the other prisoners would suspend him upside down until all the spirit had been drained out. Aleksandr Solzhenitsyn records that the three litres of raw fluid would be diluted to make seven litres of crude vodka, and that although such prisoners risked an extremely painful and unpleasant death if the condom burst inside them, the rewards granted them by other prisoners encouraged them to run the risk.
- In his book entitled Last Chance to See, Douglas Adams reported having used a condom to protect a microphone he used to make an underwater recording. According to one of his travelling companions, this is standard BBC practice when a waterproof microphone is needed but cannot be procured.
- Condoms are used by engineers to keep soil samples dry during soil tests.
- Condoms are used in the field by engineers to protect the sensing equipment embedded in the steel or aluminium nose-cones of CPT (cone penetration test) probes as they enter the ground during soil resistance tests to determine the bearing strength of soil.
- Condoms are used as a one way valve by paramedics when performing a chest decompression in the field. The decompression needle is inserted through the condom, and inserted into the chest. The condom folds over the hub allowing air to exit the chest, but preventing it from entering.
# Debate and criticism
## Disposal and environmental impact
Experts recommend condoms be disposed of in a trash receptacle. Flushing down the toilet may clog plumbing or cause other problems.
While biodegradable, latex condoms damage the environment when disposed of improperly. According to the Ocean Conservancy, condoms, along with certain other types of trash, cover the coral reefs and smother sea grass and other bottom dwellers. The United States Environmental Protection Agency also has expressed concerns that many animals might mistake the litter for food.
Condoms made of polyurethane, a plastic material, do not break down at all. The plastic and foil wrappers condoms are packaged in are also not biodegradable. However, the benefits condoms offer are widely considered to offset their small landfill mass. Frequent condom or wrapper disposal in public areas such as parks has been seen as a persistent litter problem.
## Position of the Roman Catholic Church
The Roman Catholic Church directly condemns artificial birth control and any sexual acts aside from intercourse between married heterosexual partners. However, the use of condoms to combat STDs is not specifically addressed by Catholic doctrine, and is currently a topic of debate among theologians and high-ranking Catholic authorities. A few, such as Belgian Cardinal Godfried Danneels, believe the Catholic Church should actively support condoms used to prevent disease, especially serious diseases such as AIDS. However, to date, statements from the Vatican have argued that condom-promotion programs encourage promiscuity, thereby actually increasing STD transmission. Papal study of the issue is ongoing, and in 2006 a study on the use of condoms to combat AIDS was prepared for review by Pope Benedict XVI.
The Roman Catholic Church is the largest organized body of any world religion. This church has hundreds of programs dedicated to fighting the AIDS epidemic in Africa, but its opposition to condom use in these programs has been highly controversial.
## Health issues
Dry dusting powders are applied to latex condoms before packaging to prevent the condom from sticking to itself when rolled up. Previously, talc was used by most manufacturers, but cornstarch is currently the most popular dusting powder. Talc is known to be toxic if it enters the abdominal cavity (i.e., via the vagina). Cornstarch is generally believed to be safe, although some researchers have raised concerns over its use.
Nitrosamines, which are potentially carcinogenic in humans, are believed to be present in a substance used to improve elasticity in latex condoms. A 2001 review stated that humans regularly receive 1,000 to 10,000 times greater nitrosamine exposure from food and tobacco than from condom use and concluded that the risk of cancer from condom use is very low. However, a 2004 study in Germany detected nitrosamines in 29 out of 32 condom brands tested, and concluded that exposure from condoms might exceed the exposure from food by 1.5- to 3-fold. | Condom
Template:BirthControl infobox
# Overview
A condom is a device most commonly used during sexual intercourse. It is put on a man's erect penis and physically blocks ejaculated semen from entering the body of a sexual partner. Condoms are used to prevent pregnancy and transmission of sexually transmitted diseases (STDs—such as gonorrhea, syphilis, and HIV). Because condoms are waterproof, elastic, and durable, they are also used in a variety of secondary applications. These range from creating waterproof microphones to protecting rifle barrels from clogging.
Most condoms are made from latex, but some are made from other materials. A female condom is also available. As a method of contraception, male condoms have the advantage of being inexpensive, easy to use, having few side-effects, and of offering protection against sexually transmitted diseases.[1][2] With proper knowledge and application technique—and use at every act of intercourse—users of male condoms experience a 2% per-year pregnancy rate.[3]
Condoms have been used for over 500 years.[4][5] In the early twentieth century, with the invention of disposable latex condoms, they became one of the most popular methods of contraception. While widely accepted in modern times, condoms have generated some controversy. Improper disposal of condoms contributes to litter problems, and the Roman Catholic Church generally opposes condom use.
# History
## Antiquity through Middle Ages
Whether condoms were used in ancient civilizations is debated by archaeologists and historians.[6] The oldest claimed representation of condom use is a painting in the French cave Grotte des Combarrelles;[6] the paintings in this cave are 12,000–15,000 years old.[4] Societies in the ancient civilizations of Egypt, Greece, and Rome preferred small families and are known to have practiced a variety of birth control methods.[7] However, these societies viewed birth control as a woman's responsibility, and the only well-documented contraception methods were female-controlled devices (both possibly effective, such as pessaries, and ineffective, such as amulets).[8] The writings of these societies contain "veiled references" to male-controlled contraceptive methods that might have been condoms, but most historians interpret them as referring to coitus interruptus or anal sex.[9]
The loincloths worn by Egyptian and Greek laborers were very spare, sometimes consisting of little more than a covering for the glans of the penis. Records of these types of loincloths being worn by men in higher classes have made some historians speculate they were worn during intercourse;[10] others, however, are doubtful of such interpretations.[11] Historians may also cite one legend of Minos, related by Antoninus Liberalis in 150 AD, as suggestive of condom use in ancient societies. This legend describes a curse that caused Minos' semen to contain serpents and scorpions. To protect his sexual partner from these animals, Minos used a goat's bladder as a female condom.[12][11]
Contraceptives fell out of use in Europe after the decline of the Roman Empire in the 400s; the use of contraceptive pessaries, for example, is not documented again until the fifteenth century. If condoms were used during the Roman Empire, knowledge of them may have been lost during its decline.[13] A contributing factor to the loss of contraceptive knowledge was the rise of the Christian religion, which considered all forms of birth control to be sins.[14] In the writings of Muslims and Jews during the Middle Ages, there are some references to attempts at male-controlled contraception, including suggestions to cover the penis in tar or soak it in onion juice. Some of these writings might describe condom use, but they are "oblique", "veiled", and "vague".[15]
## Renaissance
In 16th century Italy, Gabriele Falloppio authored the first-known published description of condom use for disease prevention. He recommended soaking cloth sheaths in a chemical solution and allowing them to dry prior to use.[5] He claimed to have performed an experimental trial of the linen sheath on 1100 men. His report of the experiment, published two years after his death, indicated protection against syphilis.[11]
The oldest condoms found (rather than just pictures or descriptions) are from 1640, discovered in Dudley Castle in England. They were made of animal intestine, and it is believed they were used for STD prevention.[4] In 19th century Japan, both leather condoms and condoms made of tortoise shells or horns were available.[5] Similar devices made from oiled silk paper have also been described in China.[11]
The often-reported invention of the condom by "Dr. Condom" or the "Earl of Condom" is believed to be fallacious (see etymology section below). By the 18th century, however, there were numerous literary references to condom use and sales, including in the memoirs of Giacomo Casanova.[11]
## 19th century to present
The rubber vulcanization process was patented by Charles Goodyear in 1844, and the first rubber condom was produced in 1855.[16] These early rubber condoms were 1–2 mm thick and had seams down the sides.[5] Although reusable, they were also expensive.
Distribution of condoms in the United States was limited by passage of the Comstock Act in 1873. This law prohibited transport through the postal service of any instructional material or devices intended to prevent pregnancy. Condoms were available by prescription, although legally they were only supposed to be prescribed to prevent disease rather than pregnancy.[4] The Comstock Act remained in force until it was largely overturned by a federal appeals court ruling in 1936.
In 1912, a German named Julius Fromm developed a new manufacturing technique for condoms: dipping glass molds into the raw rubber solution. This enabled the production of thinner condoms with no seams. Fromm's Act was the first branded line of condoms, and Fromms is still a popular line of condoms in Germany today.[16] By the 1930s, the manufacturing process had improved to produce single-use condoms almost as thin and inexpensive as those currently available.[5]
Condoms were not made available to U.S. soldiers in World War I, and a significant number of returning soldiers carried sexually transmitted infections. During World War II, however, condoms were heavily promoted to soldiers, with one film exhorting "Don't forget — put it on before you put it in."[4] In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to be utilized to this day.
## Etymology of the term
Etymological theories for the word "condom" abound. It has been claimed to be from the Latin word condon, meaning receptacle.[4] One author argues that "condom" is derived from the Latin word condamina, meaning house.[17] It has also been speculated to be from the Italian word guantone, derived from guanto, meaning glove.[18]
Folk etymology claims that the word "condom" is derived from a purported "Dr. Condom" or "Quondam", who made the devices for King Charles II of England. There is no verifiable evidence that any such "Dr. Condom" existed.[18] It is also hypothesized that a British army officer named Cundum popularized the device between 1680 and 1717.[19]
William E. Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology."[20] Modern dictionaries may also list the etymology as "unknown".[21]
Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters.[22] Additionally, condoms may be referred to using the manufacturer's name.
# Varieties
Most condoms have a reservoir tip or teat end, making it easier to accommodate the man's ejaculate. Condoms come in different sizes, from oversized to snug, and in a variety of surfaces intended to stimulate the user's partner. Condoms are usually supplied with a lubricant coating to facilitate penetration, while flavoured condoms are principally used for oral sex. As mentioned above, most condoms are made of latex, but polyurethane and lambskin condoms are also widely available.
## Latex
Latex has outstanding elastic properties: Its tensile strength exceeds 30 MPa, and latex condoms may be stretched in excess of 800% before breaking.[23] In 1990 the ISO set standards for condom production (ISO 4074, Natural latex rubber condoms), and the EU followed suit with its CEN standard (Directive 93/42/EEC concerning medical devices). Every latex condom is tested for holes with an electrical current. If the condom passes, it is rolled and packaged. In addition, a portion of each batch of condoms is subject to water leak and air burst testing.[24]
Latex condoms used with oil-based lubricants (e.g. vaseline) are likely to slip off due to loss of elasticity caused by the oils.[25]
Some latex condoms are lubricated by the manufacturer with a small amount of nonoxynol-9, a spermicidal chemical. According to Consumer Reports, spermicidally lubricated condoms have no additional benefit in preventing pregnancy, have a shorter shelf life, and may cause urinary-tract infections in women.[26] In contrast, application of separately packaged spermicide is believed to increase the contraceptive efficacy of condoms.[27]
Nonoxynol-9 was once believed to offer additional protection against STDs (including HIV) but recent studies have shown that, with frequent use, nonoxynol-9 may increase the risk of HIV transmission.[28] The World Health Organization says that spermicidally lubricated condoms should no longer be promoted. However, they recommend using a nonoxynol-9 lubricated condom over no condom at all.[29] As of 2005, nine condom manufacturers have stopped manufacturing condoms with nonoxynol-9, Planned Parenthood has discontinued the distribution of condoms so lubricated,[30] and the Food and Drug Administration has proposed a warning regarding this issue.[31]
## Polyurethane
Template:Seealso
Polyurethane condoms tend to be the same width and thickness as latex condoms, with most polyurethane condoms between 0.04 mm and 0.07 mm thick.[32] Polyurethane is also the material of many female condoms.
Polyurethane can be considered better than latex in several ways: it conducts heat better than latex, is not as sensitive to temperature and ultraviolet light (and so has less rigid storage requirements and a longer shelf life), can be used with oil-based lubricants, is less allergenic than latex, and does not have an odor.[33] Polyurethane condoms have gained FDA approval for sale in the United States as an effective method of contraception and HIV prevention, and under laboratory conditions have been shown to be just as effective as latex for these purposes.[34]
However, polyurethane condoms are less elastic than latex ones, and may be more likely to slip or break than latex,[33][35] and are more expensive.
## Lambskin
Condoms made from one of the oldest condom materials, labeled "lambskin" (made from lamb intestines), are still available. They have a greater ability to transmit body warmth and tactile sensation, when compared to synthetic condoms, and are less allergenic than latex. However, there is an increased risk of transmitting STDs compared to latex because of pores in the material, which are thought to be large enough to allow infectious agents to pass through while still blocking the passage of sperm.[36]
## Experimental
The Invisible Condom, developed at Université Laval in Québec, Canada, is a gel that hardens upon increased temperature after insertion into the vagina or rectum. In the lab, it has been shown to effectively block HIV and herpes simplex virus. The barrier breaks down and liquefies after several hours. The invisible condom is in the clinical trial phase, and has not yet been approved for use.[37]
As reported on Swiss television news Schweizer Fernsehen on November 29, 2006, the German scientist Jan Vinzenz Krause of the Institut für Kondom-Beratung ("Institute for Condom Consultation") in Germany recently developed a spray-on condom and is test-marketing it. Krause says that one of the advantages to his spray-on condom, which is reported to dry in about 5 seconds, is that it is perfectly formed to each penis.[38][39]
## Collection condom
A collection condom is used to collect semen for fertility treatments or sperm analysis. These condoms are designed to maximize sperm life.
# Effectiveness
## In preventing pregnancy
The effectiveness of condoms, as of most forms of contraception, can be assessed two ways. Perfect use or method effectiveness rates only include people who use condoms properly and consistently. Actual use or typical use effectiveness rates cover all condom users, including those who use condoms improperly, inconsistently, or both. Rates are generally presented for the first year of use.[3] Most commonly the Pearl Index is used to calculate effectiveness rates, but some studies use decrement tables.[40]
The typical use pregnancy rate among condom users varies depending on the population being studied, ranging from 10–18% per year.[41] The perfect use pregnancy rate of condoms is 2% per year.[3] Condoms may be combined with other forms of contraception (such as spermicide) for greater protection.[27]
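To make the arithmetic behind these figures concrete, here is a minimal sketch of a Pearl Index calculation in Python; the function name and the cohort numbers are invented for the illustration (chosen so the result lands on the 2% perfect-use figure quoted above) and are not taken from any cited study.

```python
def pearl_index(pregnancies: int, women: int, months_observed: float) -> float:
    """Pearl Index: unintended pregnancies per 100 woman-years of exposure."""
    woman_years = women * months_observed / 12
    return 100 * pregnancies / woman_years

# Hypothetical cohort: 500 couples followed for 12 months, with 10 pregnancies observed
print(pearl_index(pregnancies=10, women=500, months_observed=12))  # 2.0
```

Decrement-table studies, by contrast, track failures and discontinuation month by month rather than pooling all exposure into a single rate.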
Several factors account for typical use effectiveness being lower than perfect use effectiveness:
- mistakes on the part of those providing instructions on how to use the method
- mistakes on the part of the user
- conscious user non-compliance with instructions.
For instance, someone might be given incorrect information on what lubricants are safe to use with condoms, mistakenly put the condom on improperly, or simply not bother to use a condom.
## In preventing STDs
Template:Seealso
Condoms are widely recommended for the prevention of sexually transmitted diseases (STDs). They have been shown to be effective in reducing infection rates in both men and women. While not perfect, the condom is effective at reducing the transmission of HIV, genital herpes, genital warts, syphilis, chlamydia, gonorrhea, and other diseases.[42]
According to a 2000 report by the National Institutes of Health, correct and consistent use of latex condoms reduces the risk of HIV/AIDS transmission by approximately 85% relative to risk when unprotected, putting the seroconversion rate (infection rate) at 0.9 per 100 person-years with condom, down from 6.7 per 100 person-years. The same review also found condom use significantly reduces the risk of gonorrhea for men.[43]
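For readers who want to see how the quoted seroconversion rates relate to the headline figure, the implied relative risk reduction is:

\[
1 - \frac{0.9}{6.7} \approx 1 - 0.13 \approx 0.87
\]

that is, about 87%, consistent with the approximately 85% reduction reported by the review.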
A 2006 study reports that proper condom use decreases the risk of transmission for human papilloma virus by approximately 70%.[44] Another study in the same year found consistent condom use was effective at reducing transmission of herpes simplex virus-2 also known as genital herpes, in both men and women.[45]
Although a condom is effective in limiting exposure, some disease transmission may occur even with a condom. Infectious areas of the genitals, especially when symptoms are present, may not be covered by a condom, and as a result, some diseases can be transmitted by direct contact.[46] The primary effectiveness issue with using condoms to prevent STDs, however, is inconsistent use.[24]
## Causes of failure
Condoms may slip off the penis after ejaculation,[47] break due to faulty application or physical damage (such as tears caused when opening the package), or break or slip due to latex degradation (typically from being past the expiration date or being stored improperly). The rate of breakage is between 0.4% and 2.3%, while the rate of slippage is between 0.6% and 1.3%.[43] Even if no breakage or slippage is observed, 1–2% of women will test positive for semen residue after intercourse with a condom.[48][49]
Different modes of condom failure result in different levels of semen exposure. If a failure occurs during application, the damaged condom may be disposed of and a new condom applied before intercourse begins - such failures generally pose no risk to the user.[50] One study found that semen exposure from a broken condom was about half that of unprotected intercourse; semen exposure from a slipped condom was about one-fifth that of unprotected intercourse.[51]
Standard condoms will fit almost any penis, although many condom manufacturers offer "snug" or "magnum" sizes. Some studies have associated larger penises and smaller condoms with increased breakage and decreased slippage rates (and vice versa), but other studies have been inconclusive.[25]
Experienced condom users are significantly less likely to have a condom slip or break compared to first-time users, although users who experience one slippage or breakage are at increased risk of a second such failure.[52] An article in Population Reports suggests that education on condom use reduces behaviors that increase the risk of breakage and slippage.[53] A Family Health International publication also offers the view that education can reduce the risk of breakage and slippage, but emphasizes that more research needs to be done to determine all of the causes of breakage and slippage.[25]
Among couples that intend condoms to be their form of birth control, pregnancy may occur when the couple does not use a condom. The couple may have run out of condoms, or be traveling and not have a condom with them, or simply dislike the feel of condoms and decide to "take a chance." This type of behavior is the primary cause of typical use failure (as opposed to method or perfect use failure).[54]
Another possible cause of condom failure is sabotage. One motive is to have a child against a partner's wishes or consent.[55] Some commercial sex workers report clients sabotaging condoms in retaliation for being coerced into condom use.[56] Placing pinholes in the tip of the condom is believed to significantly reduce its effectiveness.[49][57]
# Female condoms
"Female condoms" or "femidoms" are also available. They are larger and wider than male condoms but equivalent in length. They have a flexible ring-shaped opening, and are designed to be inserted into the vagina. They also contain an inner ring which aids insertion and helps keep the condom from sliding out of the vagina during coitus. One line of female condoms is made from polyurethane or nitrile polymer. A competing manufacturer makes a line of female condoms out of latex. The latex female condom has been available for several years in Africa, Asia, and South America, although one more clinical trial is required before it can be submitted for FDA approval in the United States.[58]
# Use
Male condoms are usually packaged inside a foil wrapper, in a rolled-up form, and are designed to be applied to the tip of the penis and then unrolled over the erect penis. After use, it is recommended the condom be wrapped in tissue or tied in a knot, then disposed of in a trash receptacle.[59]
Some couples find that putting on a condom interrupts sex, although others incorporate condom application as part of their foreplay. Some men and women find the physical barrier of a condom dulls sensation. Advantages of dulled sensation can include prolonged erection and delayed ejaculation; disadvantages might include a loss of some sexual excitement.[2]
## Prevalence
The prevalence of condom use varies greatly between countries. Japan has the highest rate of condom usage in the world, with condoms accounting for almost 80% of contraceptive use. In the average developed country, 22% of contraceptive users rely on condoms as their primary method of birth control. In the average less-developed country, only 5-6% of contraceptive users choose condoms.[60] In a few countries, such as Somalia, condoms are illegal.[61]
## Role in sex education
Condoms are often used in sexual education programs, because they have the capability to reduce the chances of pregnancy and the spread of some sexually transmitted diseases when used correctly. A recent American Psychological Association (APA) press release supported the inclusion of information about condoms in sex education, saying "comprehensive sexuality education programs... discuss the appropriate use of condoms", and "promote condom use for those who are sexually active."[62]
In the United States, teaching about condoms in public schools is opposed by some religious organizations.[63] Planned Parenthood, which advocates family planning and sexual education, argues that no studies have shown abstinence-only programs to result in delayed intercourse, and cites surveys showing that 75% of American parents want their children to receive comprehensive sexuality education including condom use.[64]
## Infertility treatment
Common procedures in infertility treatment such as semen analysis and intrauterine insemination (IUI) require collection of semen samples. These are most commonly obtained through masturbation, but an alternative to masturbation is use of a special collection condom to collect semen during sexual intercourse.
Collection condoms are made from silicone or polyurethane, as latex is somewhat harmful to sperm. Many men prefer collection condoms to masturbation, and some religions prohibit masturbation entirely. Also, compared with samples obtained from masturbation, semen samples from collection condoms have higher total sperm counts, sperm motility, and percentage of sperm with normal morphology. For this reason, they are believed to give more accurate results when used for semen analysis, and to improve the chances of pregnancy when used in procedures such as intracervical or intrauterine insemination.[65] Adherents of religions that prohibit contraception, such as Catholicism, may use collection condoms with holes pricked in them.[57]
Condom therapy is sometimes prescribed to infertile couples when the female has high levels of antisperm antibodies. The theory is that preventing exposure to her partner's semen will lower her level of antisperm antibodies, and thus increase her chances of pregnancy when condom therapy is discontinued. However, condom therapy has not been shown to increase subsequent pregnancy rates.[66]
## Other uses
Condoms excel as multipurpose containers because they are waterproof, elastic, durable, and will not arouse suspicion if found. Military uses that began during World War II and continue today include:
- Tying a non-lubricated condom around the muzzle of the rifle barrel in order to prevent barrel fouling by keeping out detritus.[67]
- The OSS used condoms for a plethora of applications, from storing corrosive fuel additives and wire garrotes (with the T-handles removed) to holding the acid component of a self-destructing film canister, to finding use in improvised explosives.[68]
- Navy SEALs have used doubled condoms, sealed with neoprene cement, to protect non-electric firing assemblies for underwater demolitions—leading to the term "Dual Waterproof Firing Assemblies."[69]
Other uses of condoms include:
- Condoms can be used to hold water in emergency survival situations.[70]
- Condoms have also been used in many cases to smuggle cocaine and other drugs across borders and into prisons by filling the condom with drugs, tying it in a knot and then either swallowing it or inserting it into the rectum. These methods are very dangerous; if the condom breaks, the drugs inside can cause an overdose.[71]
- In Soviet gulags, condoms were used to smuggle alcohol into the camps by prisoners who worked outside during daylight. While outside, the prisoner would ingest an empty condom attached to a thin piece of rubber tubing, the end of which was wedged between his teeth. The smuggler would then use a syringe to fill the tubing and condom with up to three litres of raw alcohol, which the prisoner would then smuggle back into the camp. When back in the barracks, the other prisoners would suspend him upside down until all the spirit had been drained out. Aleksandr Solzhenitsyn records that the three litres of raw fluid would be diluted to make seven litres of crude vodka, and that although such prisoners risked an extremely painful and unpleasant death if the condom burst inside them, the rewards granted them by other prisoners encouraged them to run the risk.[72]
- In his book entitled Last Chance to See, Douglas Adams reported having used a condom to protect a microphone he used to make an underwater recording. According to one of his travelling companions, this is standard BBC practice when a waterproof microphone is needed but cannot be procured.
- Condoms are used by engineers to keep soil samples dry during soil tests.[73]
- Condoms are used in the field by engineers to initially protect sensoring equipment embedded in the steel or aluminium nose-cones of CPT (Cone Penetration Test) probes when entering the surface to conduct soil resistance tests to determine the bearing strength of soil.[74]
- Condoms are used as a one way valve by paramedics when performing a chest decompression in the field. The decompression needle is inserted through the condom, and inserted into the chest. The condom folds over the hub allowing air to exit the chest, but preventing it from entering.[75]
# Debate and criticism
## Disposal and environmental impact
Experts recommend condoms be disposed of in a trash receptacle. Flushing down the toilet may clog plumbing or cause other problems.[59]
While biodegradable,[59] latex condoms damage the environment when disposed of improperly. According to the Ocean Conservancy, condoms, along with certain other types of trash, cover the coral reefs and smother sea grass and other bottom dwellers. The United States Environmental Protection Agency also has expressed concerns that many animals might mistake the litter for food.[76]
Condoms made of polyurethane, a plastic material, do not break down at all. The plastic and foil wrappers condoms are packaged in are also not biodegradable. However, the benefits condoms offer are widely considered to offset their small landfill mass.[59] Frequent condom or wrapper disposal in public areas such as parks has been seen as a persistent litter problem.[77]
## Position of the Roman Catholic Church
The Roman Catholic Church condemns all artificial birth control, as well as any sexual acts other than intercourse between married heterosexual partners. However, the use of condoms to combat STDs is not specifically addressed by Catholic doctrine, and is currently a topic of debate among theologians and high-ranking Catholic authorities. A few, such as Belgian Cardinal Godfried Danneels, believe the Catholic Church should actively support condoms used to prevent disease, especially serious diseases such as AIDS. However, to date statements from the Vatican have argued that condom-promotion programs encourage promiscuity, thereby actually increasing STD transmission.[78] Papal study of the issue is ongoing, and in 2006 a study on the use of condoms to combat AIDS was prepared for review by Pope Benedict XVI.[79]
The Roman Catholic Church is the largest organized body of any world religion.[80] This church has hundreds of programs dedicated to fighting the AIDS epidemic in Africa,[81] but its opposition to condom use in these programs has been highly controversial.[82]
## Health issues
Dry dusting powders are applied to latex condoms before packaging to prevent the condom from sticking to itself when rolled up. Talc was previously used by most manufacturers; however, cornstarch is currently the most popular dusting powder.[83] Talc is known to be toxic if it enters the abdominal cavity (i.e. via the vagina). Cornstarch is generally believed to be safe; however, some researchers have raised concerns over its use.[83][84]
Nitrosamines, which are potentially carcinogenic in humans,[85] are believed to be present in a substance used to improve elasticity in latex condoms.[86] A 2001 review stated that humans regularly receive 1,000 to 10,000 times greater nitrosamine exposure from food and tobacco than from condom use and concluded that the risk of cancer from condom use is very low.[87] However, a 2004 study in Germany detected nitrosamines in 29 out of 32 condom brands tested, and concluded that exposure from condoms might exceed the exposure from food by 1.5- to 3-fold.[86][88]
Copper
Template:Infobox copper
# Overview
Copper (Template:PronEng) is a chemical element with the symbol Cu (Template:Lang-la) and atomic number 29.
It is a ductile metal with excellent electrical conductivity, and finds extensive use as an electrical conductor, heat conductor, as a building material, and as a component of various alloys.
Copper is an essential trace nutrient to all higher plants and animals. In animals, including humans, it is found primarily in the bloodstream, as a co-factor in various enzymes, and in copper-based pigments. However, in excessive amounts, copper can be poisonous and even fatal to organisms.
Copper has played a significant part in the history of mankind, which has used the easily accessible uncompounded metal for thousands of years. Civilizations in places such as Iraq, China, Egypt, Greece, India and the Sumerian cities all have early evidence of using copper. During the Roman Empire, copper was principally mined on Cyprus, hence the origin of the name of the metal as Cyprium, "metal of Cyprus", later shortened to Cuprum. A number of countries, such as Chile and the United States, still have sizable reserves of the metal which are extracted through large open pit mines. High demand relative to supply has caused a price spike in the 2000s.
# History
Copper, as native copper, is one of the few metals to naturally occur as an uncompounded mineral. Copper was known to some of the oldest civilizations on record, and has a history of use that is at least 10,000 years old. A copper pendant was found in what is now northern Iraq that dates to 8700 BC. By 5000 BC, there are signs of copper smelting, the refining of copper from simple copper compounds such as malachite or azurite. Among archaeological sites in Anatolia, Çatal Höyük (~6000 BC) features native copper artifacts and smelted lead beads, but no smelted copper. But Can Hasan (~5000 BC) had access to smelted copper; this site has yielded the oldest known cast copper artifact, a copper mace head.
Copper smelting appears to have been developed independently in several parts of the world. In addition to its development in Anatolia by 5000 BC, it was developed in China before 2800 BC, in the Andes around 2000 BC, in Central America around 600 AD, and in West Africa around 900 AD. Copper is found extensively in the Indus Valley Civilization by the 3rd millennium BC. In Europe, Ötzi the Iceman, a well-preserved male dated to 3200 BC, was found with a copper-tipped axe whose metal was 99.7% pure. High levels of arsenic in his hair suggest he was involved in copper smelting. There are copper and bronze artifacts from Sumerian cities that date to 3000 BC, and Egyptian artifacts of copper and copper-tin alloys nearly as old. In one pyramid, a copper plumbing system was found that is 5000 years old. The Egyptians found that adding a small amount of tin made the metal easier to cast, so bronze alloys were found in Egypt almost as soon as copper was found. In the Americas, production in the Old Copper Complex, located in present-day Michigan and Wisconsin, has been dated to between 6000 and 3000 BC.
By 2000 BC, Europe was using copper-tin alloys or bronze. The use of bronze became so pervasive in a certain era of civilization (approximately 2500 BC to 600 BC in Europe) that it has been named the Bronze Age. The transitional period in certain regions between the preceding Neolithic period and the Bronze Age is termed the Chalcolithic ("copper-stone"), with some high-purity copper tools being used alongside stone tools. Brass was known to the Greeks, but only became a significant supplement to bronze during the Roman empire.
In Greek the metal was known by the name chalkos (χαλκός). Copper was a very important resource for the Romans, Greeks and other ancient peoples. In Roman times, it became known as aes Cyprium (aes being the generic Latin term for copper alloys such as bronze and other metals, and Cyprium because so much of it was mined in Cyprus). From this, the phrase was simplified to cuprum and then eventually Anglicized into the English copper. Copper was associated with the goddess Aphrodite/Venus in mythology and alchemy, owing to its lustrous beauty, its ancient use in producing mirrors, and its association with Cyprus, which was sacred to the goddess.
## Britain and Ireland
During the Bronze Age, copper was mined in Britain and Ireland mainly in the following locations:
- South West County Cork
- West Wales (e.g. Cwmwystwyth)
- North Wales (e.g. Great Orme)
- Anglesey (Parys Mountain)
- Cheshire (Alderley Edge)
- The Staffordshire Moorlands (e.g. Ecton Mine)
- Isle of Man, which is between England and Northern Ireland
At Great Orme in North Wales, such working extended for a depth of 70 metres. At Alderley Edge in Cheshire, carbon dates have established mining at around 2280 to 1890 BC (at 95% probability).
## United States
Copper mining in the United States began with marginal workings by Native Americans and some development by early Spaniards. Native copper is known to have been extracted from sites on Isle Royale with primitive stone tools between 800 and 1600 CE. Europeans were mining copper in Connecticut as early as 1709. Perhaps the oldest operating large-scale copper mine was the historic Elizabeth Mine in Vermont. Dating to the 1700s, "the Liz" produced copper until it was closed in 1958. Westward movement also brought an expansion of copper exploitation with developments of significant deposits in Michigan and Arizona during the 1850s and then in Montana during the 1860s.
Native copper was mined extensively in Michigan's Keweenaw Peninsula with the heart of extraction at the productive Quincy Mine. Arizona had many notable deposits including the Copper Queen in Bisbee and the United Verde in Jerome. The Anaconda in Butte, Montana became the nation's chief copper supplier by 1886.
Copper is mined in many other areas of the United States, including Utah, Nevada and Tennessee. Copper is the state mineral for Utah.
# Isotopes
There are two stable isotopes, 63Cu and 65Cu, along with a couple dozen radioisotopes. The vast majority of radioisotopes have half lives on the order of minutes or less; the longest lived, 67Cu, has a half life of 61.8 hours. See also isotopes of copper.
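As a rough illustration of what a 61.8-hour half-life means in practice, the sketch below applies the standard exponential decay law; the function name and the one-week time span are arbitrary choices made for the example.

```python
def remaining_fraction(elapsed_hours: float, half_life_hours: float = 61.8) -> float:
    """Fraction of a radioisotope remaining after elapsed_hours, given its half-life."""
    return 0.5 ** (elapsed_hours / half_life_hours)

# After one week (168 hours) only about 15% of a 67Cu sample remains
print(round(remaining_fraction(168.0), 3))  # 0.152
```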
# Notable characteristics
Copper has a high electrical and thermal conductivity, second only to silver among pure metals at room temperature.
Copper is a reddish-coloured metal; it has its characteristic colour because of its band structure. In its liquefied state, a pure copper surface without ambient light appears somewhat greenish, a characteristic shared with gold. When liquid copper is in bright ambient light, it retains some of its pinkish luster.
Copper occupies the same family of the periodic table as silver and gold, since they each have one s-orbital electron on top of a filled electron shell. This similarity in electron structure makes them similar in many characteristics. All have very high thermal and electrical conductivity, and all are malleable metals.
## Corrosion
Pure water and air
Copper does not react with water (H2O), but the oxygen of the air reacts with it slowly at room temperature to form a layer of copper oxide on the metal surface.
In contrast to the oxidation of iron by wet air, this layer has a protective effect against further corrosion. On old copper roofs a green layer of copper carbonate can often be seen.
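The chemistry described above can be summarised with simplified overall equations (a sketch only; real roof patinas are mixtures that can also contain basic sulfates and chlorides):

\[
4\,\mathrm{Cu} + \mathrm{O_2} \rightarrow 2\,\mathrm{Cu_2O}
\]
\[
2\,\mathrm{Cu} + \mathrm{O_2} \rightarrow 2\,\mathrm{CuO}
\]
\[
2\,\mathrm{CuO} + \mathrm{CO_2} + \mathrm{H_2O} \rightarrow \mathrm{Cu_2CO_3(OH)_2}
\]

The last product, basic copper(II) carbonate, is the green layer seen on old roofs.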
Sulfide media
Copper metal does react with hydrogen sulfide and sulfide containing solutions. A series of different copper sulfides can form on the surface of the copper metal.
The copper sulfide region of the corresponding Pourbaix diagram is very complex because of the existence of many different sulfides. In sulfide media, copper is able to corrode even without oxygen, since it becomes less noble than hydrogen. This can be observed in everyday life when copper surfaces tarnish after exposure to air that contains sulfur compounds.
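A minimal sketch of the anaerobic tarnishing reaction implied here, with copper displacing hydrogen from hydrogen sulfide:

\[
2\,\mathrm{Cu} + \mathrm{H_2S} \rightarrow \mathrm{Cu_2S} + \mathrm{H_2}
\]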
Ammonia media
Copper does react with oxygen-containing ammonia solutions because the ammonia forms water-soluble copper complexes. The formation of these complexes causes the corrosion to become more thermodynamically favored than the corrosion of copper in an identical solution that does not contain the ammonia.
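A hedged sketch of the overall reaction in aerated ammonia solution, written in terms of the tetraammine complex commonly given for this system:

\[
2\,\mathrm{Cu} + 8\,\mathrm{NH_3} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 2\,[\mathrm{Cu(NH_3)_4}]^{2+} + 4\,\mathrm{OH^-}
\]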
Chloride media
Copper does react with a combination of oxygen and hydrochloric acid to form a series of copper chlorides. If copper(II) chloride (green/blue) is boiled with copper metal in the presence of little or no oxygen, white copper(I) chloride is formed.
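The copper(I) chloride observation corresponds to a comproportionation of copper(II) with copper metal:

\[
\mathrm{CuCl_2} + \mathrm{Cu} \rightarrow 2\,\mathrm{CuCl}
\]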
## Mechanical properties
Single-crystal copper contains no grain boundaries. In this form the yield stress is high, and the crystal undergoes a large amount of elastic deformation before entering the plastic deformation region; behaviour in the plastic region is less predictable, and the stress level decreases significantly once necking begins.
Polycrystalline copper consists of many small crystals (grains) of different orientations, typically a few microns across. Its plastic deformation is similar to that of mild steel: copper has high ductility and continues to elongate as stress is applied, which makes it very useful in copper wire drawing.
Numerous copper alloys exist, many with important historical and contemporary uses. Speculum metal and bronze are alloys of copper and tin. Brass is an alloy of copper and zinc. Monel metal, also called cupronickel, is an alloy of copper and nickel. While the metal "bronze" usually refers to copper-tin alloys, it also is a generic term for any alloy of copper, such as aluminium bronze, silicon bronze, and manganese bronze.
## Germicidal effect
Copper is germicidal, via the oligodynamic effect. For example, brass doorknobs disinfect themselves of many bacteria within a period of eight hours. This effect is useful in many applications.
# Occurrence and modern industry
In 2005, Chile was the top mine producer of copper with at least one-third world share followed by the USA, Indonesia and Peru, reports the British Geological Survey.
Copper can be found as native copper in mineral form. Minerals such as the sulfides: chalcopyrite (CuFeS2), bornite (Cu5FeS4), covellite (CuS), chalcocite (Cu2S) are sources of copper, as are the carbonates: azurite (Cu3(CO3)2(OH)2) and malachite (Cu2CO3(OH)2) and the oxide: cuprite (Cu2O).
Most copper ore is mined or extracted as copper sulfides from large open pit mines in porphyry copper deposits that contain 0.4 to 1.0 percent copper. Examples include: Chuquicamata in Chile and El Chino Mine in New Mexico. The average abundance of copper found within crustal rocks is approximately 68 ppm by mass, and 22 ppm by atoms.
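The two abundance figures are mutually consistent. Converting a mass fraction to an atom fraction means scaling by the ratio of the mean atomic weight of crustal rock (taken here as roughly 21 g/mol, an assumed round value) to that of copper:

\[
68\ \text{ppm (by mass)} \times \frac{21}{63.5} \approx 22\ \text{ppm (by atoms)}
\]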
The Intergovernmental Council of Copper Exporting Countries (CIPEC), defunct since 1992, once tried to play a similar role for copper as OPEC does for oil, but never achieved the same influence, not least because the second-largest producer, the United States, was never a member. Formed in 1967, its principal members were Chile, Peru, Zaire, and Zambia.
The copper price rose more than fivefold from its 60-year low of US$0.60 per pound in June 1999 to US$3.75 per pound in May 2006; it then dropped to US$2.40 in February 2007 before rebounding to US$3.50 in April 2007.
The Earth has an estimated 61 years of copper reserves remaining. The environmental analyst Lester Brown, however, has suggested copper might run out within 25 years based on an extrapolation of 2% annual growth in consumption.
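The gap between the two estimates comes mainly from differing assumptions about reserves and demand. As an illustration only, and not a reconstruction of Brown's own figures, the sketch below applies the standard exhaustion-time formula for a fixed reserve consumed with exponentially growing demand; feeding in the static 61-year figure and 2% annual growth gives roughly 40 years, so estimates as short as 25 years imply a smaller assumed recoverable reserve base or other adjustments.

```python
import math

def exhaustion_years(static_reserve_years: float, annual_growth: float) -> float:
    """Years until a fixed reserve runs out if consumption grows by annual_growth per year.

    Solves C0 * ((1 + g)**T - 1) / g = static_reserve_years * C0 for T.
    """
    return math.log(1 + annual_growth * static_reserve_years) / math.log(1 + annual_growth)

# Illustrative inputs only: a static 61-year reserve with 2% annual consumption growth
print(round(exhaustion_years(61, 0.02), 1))  # about 40.3
```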
# Compounds
Common oxidation states of copper include the less stable copper(I) state, Cu+; and the more stable copper(II) state, Cu2+, which forms blue or blue-green salts and solutions. Under unusual conditions, a 3+ state and even an extremely rare 4+ state can be obtained. Using old nomenclature for the naming of salts, copper(I) is called cuprous, and copper(II) is cupric. Copper oxides are mildly basic.
Copper(II) carbonate is green, which accounts for the distinctive appearance of weathered copper-clad roofs and domes on some buildings. Copper(II) sulfate forms a blue crystalline pentahydrate, which is perhaps the most familiar copper compound in the laboratory. It is used as a fungicide, for example in Bordeaux mixture.
There are two stable copper oxides, copper(II) oxide (CuO) and copper(I) oxide (Cu2O). Copper oxides are used to make yttrium barium copper oxide (YBa2Cu3O7-δ) or YBCO which forms the basis of many unconventional superconductors.
- Copper(I) compounds: copper(I) chloride, copper(I) bromide, copper(I) iodide, copper(I) oxide.
- Copper(II) compounds: copper(II) acetate, copper(II) carbonate, copper(II) chloride, copper(II) hydroxide, copper(II) nitrate, copper(II) oxide, copper(II) sulfate, copper(II) sulfide, copper(II) tetrafluoroborate, copper(II) triflate.
- Copper(III) compounds, rare: potassium hexafluorocuprate (K3CuF6)
- Copper(IV) compounds, extremely rare: caesium hexafluorocuprate (Cs2CuF6)
### Tests for copper(II) ion
Add aqueous sodium hydroxide. A blue precipitate of copper(II) hydroxide should form.
Ionic equation: Cu2+(aq) + 2 OH-(aq) → Cu(OH)2(s)
The full equation shows that the reaction is due to hydroxide ions deprotonating the hexaaquacopper(II) complex:
[Cu(H2O)6]2+(aq) + 2 OH-(aq) → [Cu(H2O)4(OH)2](s) + 2 H2O(l)
Adding aqueous ammonia causes the same precipitate to form. It then dissolves upon adding excess ammonia, to form a deep blue ammonia complex, tetraamminecopper(II).
Ionic equation: [Cu(H2O)6]2+(aq) + 4 NH3(aq) ⇌ [Cu(NH3)4(H2O)2]2+(aq) + 4 H2O(l)
A more sensitive test than the ammonia test is potassium ferrocyanide, which gives a brown precipitate with copper salts.
# Applications
Copper is malleable and ductile, a good conductor of heat and, when very pure, a good conductor of electricity.
The purity of copper is expressed as 4N for 99.99% pure or 7N for 99.99999% pure. The numeral gives the number of nines after the decimal point when expressed as a decimal (e.g. 4N means 0.9999, or 99.99%).
It is used extensively, in products such as:
## Piping
- including, but not limited to, water supply.
## Electronics
- Copper wire.
- Electromagnets.
- Printed circuit boards.
- Lead free solder, alloyed with tin.
- Electrical machines, especially electromagnetic motors, generators and transformers.
- Electrical relays, electrical busbars and electrical switches.
- Vacuum tubes, cathode ray tubes, and the magnetrons in microwave ovens.
- Wave guides for microwave radiation.
- Integrated circuits, increasingly replacing aluminium because of its superior electrical conductivity.
- As a material in the manufacture of computer heat sinks, as a result of its superior heat dissipation capacity to aluminium.
## Architecture
- Copper has been used as water-proof roofing material since ancient times, giving many old buildings their greenish roofs and domes. Initially copper oxide forms, replaced by cuprous and cupric sulfide, and finally by copper carbonate. The final carbonate patina is highly resistant to corrosion.
- Statuary: The Statue of Liberty, for example, contains 179,220 pounds (81.3 tonnes) of copper.
- Alloyed with nickel, e.g. cupronickel and Monel, used as corrosion-resistant materials in shipbuilding.
- Watt's steam engine.
- Copper nails were used in making oast cowls.
## Household products
- Copper plumbing fittings and compression tubes.
- Doorknobs and other fixtures in houses.
- Roofing, guttering, and rainspouts on buildings.
- In cookware, such as frying pans.
- Most flatware (knives, forks, spoons) contains some copper (nickel silver).
- Sterling silver, if it is to be used in dinnerware, must contain a few percent copper.
- Copper water heating cylinders
## Coinage
- As a component of coins, often as cupronickel alloy.
- Coins in the following countries all contain copper: European Union (Euro), United States, United Kingdom (sterling), Australia and New Zealand.
- Ironically, U.S. Nickels are 75.0% copper by weight and only 25.0% nickel.
## Biomedical applications
- As a biostatic surface in hospitals, and to line parts of ships to protect against barnacles and mussels, originally used pure, but superseded by Muntz Metal. Bacteria will not grow on a copper surface because it is biostatic. Copper doorknobs are used by hospitals to reduce the transfer of disease, and Legionnaires' disease is suppressed by copper tubing in air-conditioning systems.
- Copper(II) sulfate is used as a fungicide and as algae control in domestic lakes and ponds. It is used in gardening powders and sprays to kill mildew.
- Copper-62-PTSM, a complex containing radioactive copper-62, is used as a Positron emission tomography radiotracer for heart blood flow measurements.
- Copper-64 can be used as a Positron emission tomography radiotracer for medical imaging. When complexed with a chelate it can be used to treat cancer through radiation therapy.
## Chemical applications
- Compounds, such as Fehling's solution, have applications in chemistry.
- As a component in ceramic glazes, and to color glass.
## Other
- Musical instruments, especially brass instruments and cymbals.
- Class D fire extinguishers, used in powder form to extinguish lithium fires by covering the burning metal and acting as a heat sink.
- Textile fibers to create antimicrobial protective fabrics.
# Biological role
Copper is essential in all plants and animals. Copper is carried mostly in the bloodstream on a plasma protein called ceruloplasmin. When copper is first absorbed in the gut it is transported to the liver bound to albumin. Copper is found in a variety of enzymes, including the copper centers of cytochrome c oxidase and the enzyme superoxide dismutase (containing copper and zinc). In addition to its enzymatic roles, copper is used for biological electron transport. The blue copper proteins that participate in electron transport include azurin and plastocyanin. The name "blue copper" comes from their intense blue color arising from a ligand-to-metal charge transfer (LMCT) absorption band around 600 nm.
Most molluscs and some arthropods such as the horseshoe crab use the copper-containing pigment hemocyanin rather than iron-containing hemoglobin for oxygen transport, so their blood is blue when oxygenated rather than red.
It is believed that zinc and copper compete for absorption in the digestive tract so that a diet that is excessive in one of these minerals may result in a deficiency in the other. The RDA for copper in normal healthy adults is 0.9 mg/day. On the other hand, professional research on the subject recommends 3.0 mg/day. Because of its role in facilitating iron uptake, copper deficiency can often produce anemia-like symptoms.
## Toxicity
All copper compounds, unless otherwise known, should be treated as if they were toxic. Thirty grams of copper sulfate is potentially lethal in humans. The suggested safe level of copper in drinking water for humans varies depending on the source, but tends to be pegged at 1.5 to 2 mg/L. The DRI Tolerable Upper Intake Level for adults of dietary copper from all sources is 10 mg/day. In toxicity, copper can inhibit the enzyme dihydrophil hydratase, an enzyme involved in haemopoiesis.Template:Facts
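To relate these limits to one another (a back-of-the-envelope illustration, not dosing or safety guidance): at the upper end of the suggested drinking-water level, reaching the 10 mg/day tolerable upper intake from water alone would require drinking about five litres a day:

\[
\frac{10\ \text{mg/day}}{2\ \text{mg/L}} = 5\ \text{L/day}
\]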
Symptoms of copper poisoning are very similar to those produced by arsenic. Fatal cases are generally terminated by convulsions, palsy, and insensibility.Template:Facts
In cases of suspected copper poisoning, albumin is to be administered in whichever form can be most readily obtained, such as milk or egg whites. Vinegar should not be given. The inflammatory symptoms are to be treated on general principles, and so are the nervous symptoms.Template:Facts
A significant portion of the toxicity of copper comes from its ability to accept and donate single electrons as it changes oxidation state. This catalyzes the production of very reactive radical ions such as hydroxyl radical in a manner similar to Fenton chemistry. This catalytic activity of copper is used by the enzymes that it is associated with and is thus only toxic when unsequestered and unmediated. This increase in unmediated reactive radicals is generally termed oxidative stress and is an active area of research in a variety of diseases where copper may play an important but more subtle role than in acute toxicity.
An inherited condition called Wilson's disease causes the body to retain copper, since it is not excreted by the liver into the bile. This disease, if untreated, can lead to brain and liver damage. In addition, studies have found that people with mental illnesses such as schizophrenia had heightened levels of copper in their systems. However it is unknown at this stage whether the copper contributes to the mental illness, whether the body attempts to store more copper in response to the illness, or whether the high levels of copper are the result of the mental illness.Template:Facts
Too much copper in water has also been found to damage marine life. The observed effect of these higher concentrations on fish and other creatures is damage to gills, liver, kidneys, and the nervous system. It also interferes with the sense of smell in fish, thus preventing them from choosing good mates or finding their way to mating areas.Template:Facts
## Miscellaneous hazards
The metal, when powdered, is a fire hazard. At concentrations higher than 1 mg/L, copper can stain clothes and items washed in water.
Template:Infobox copper
# Overview
Copper (Template:PronEng) is a chemical element with the symbol Cu (Template:Lang-la) and atomic number 29.
It is a ductile metal with excellent electrical conductivity, and finds extensive use as an electrical conductor, heat conductor, as a building material, and as a component of various alloys.
Copper is an essential trace nutrient to all high plants and animals. In animals, including humans, it is found primarily in the bloodstream, as a co-factor in various enzymes, and in copper-based pigments. However, in sufficient amounts, copper can be poisonous and even fatal to organisms.
Copper has played a significant part in the history of mankind, which has used the easily accessible uncompounded metal for thousands of years. Civilizations in places such as Iraq, China, Egypt, Greece, India and the Sumerian cities all have early evidence of using copper. During the Roman Empire, copper was principally mined on Cyprus, hence the origin of the name of the metal as Cyprium, "metal of Cyprus", later shortened to Cuprum. A number of countries, such as Chile and the United States, still have sizable reserves of the metal which are extracted through large open pit mines. High demand relative to supply has caused a price spike in the 2000s.
# History
Copper, as native copper, is one of the few metals to naturally occur as an uncompounded mineral. Copper was known to some of the oldest civilizations on record, and has a history of use that is at least 10,000 years old. A copper pendant was found in what is now northern Iraq that dates to 8700 BC. By 5000 BC, there are signs of copper smelting, the refining of copper from simple copper compounds such as malachite or azurite. Among archaeological sites in Anatolia, Çatal Höyük (~6000 BC) features native copper artifacts and smelted lead beads, but no smelted copper. But Can Hasan (~5000 BC) had access to smelted copper; this site has yielded the oldest known cast copper artifact, a copper mace head.
Copper smelting appears to have been developed independently in several parts of the world. In addition to its development in Anatolia by 5000 BC, it was developed in China before 2800 BC, in the Andes around 2000 BC, in Central America around 600 AD, and in West Africa around 900 AD.[1] Copper is found extensively in the Indus Valley Civilization by the 3rd millennium BC.[2] In Europe, Ötzi the Iceman, a well-preserved male dated to 3200 BC, was found with a copper-tipped axe whose metal was 99.7% pure. High levels of arsenic in his hair suggest he was involved in copper smelting. There are copper and bronze artifacts from Sumerian cities that date to 3000 BC, and Egyptian artifacts of copper and copper-tin alloys nearly as old. In one pyramid, a copper plumbing system was found that is 5000 years old. The Egyptians found that adding a small amount of tin made the metal easier to cast, so bronze alloys were found in Egypt almost as soon as copper was found. In the Americas production in the Old Copper Complex, located in present day Michigan and Wisconsin, was dated back to between 6000 to 3000 BC.[3]
By 2000 BC, Europe was using copper-tin alloys or bronze. The use of bronze became so pervasive in a certain era of civilization (approximately 2500 BC to 600 BC in Europe) that it has been named the Bronze Age. The transitional period in certain regions between the preceding Neolithic period and the Bronze Age is termed the Chalcolithic ("copper-stone"), with some high-purity copper tools being used alongside stone tools. Brass was known to the Greeks, but only became a significant supplement to bronze during the Roman empire.
In Greek the metal was known by the name chalkos (χαλκός). Copper was a very important resource for the Romans, Greeks and other ancient peoples. In Roman times, it became known as aes Cyprium (aes being the generic Latin term for copper alloys such as bronze and other metals, and Cyprium because so much of it was mined in Cyprus). From this, the phrase was simplified to cuprum and then eventually Anglicized into the English copper. Copper was associated with the goddess Aphrodite/Venus in mythology and alchemy, owing to its lustrous beauty, its ancient use in producing mirrors, and its association with Cyprus, which was sacred to the goddess.
## Britain and Ireland
During the Bronze Age, copper was mined in Britain and Ireland mainly in the following locations:
- South West County Cork
- West Wales (e.g. Cwmwystwyth)
- North Wales (e.g. Great Orme)
- Anglesey (Parys Mountain)
- Cheshire (Alderley Edge)
- The Staffordshire Moorlands (e.g. Ecton Mine)
- Isle of Man, which is between England and Northern Ireland
At Great Orme in North Wales, such working extended for a depth of 70 metres.[4] At Alderley Edge in Cheshire, carbon dates have established mining at around 2280 to 1890 BC (at 95% probability).[5]
## United States
Copper mining in the United States began with marginal workings by Native Americans and some development by early Spaniards. Native copper is known to have been extracted from sites on Isle Royale with primitive stone tools between 800 and 1600CE. Europeans were mining copper in Connecticut as early as 1709. Perhaps the oldest operating large-scale copper mine was the historic Elizabeth Mine in Vermont. Dating to the 1700s, "the Liz" produced copper until it was closed in 1958. Westward movement also brought an expansion of copper exploitation with developments of significant deposits in Michigan and Arizona during the 1850s and then in Montana during the 1860s.
Native copper was mined extensively in Michigan's Keweenaw Peninsula with the heart of extraction at the productive Quincy Mine. Arizona had many notable deposits including the Copper Queen in Bisbee and the United Verde in Jerome. The Anaconda in Butte, Montana became the nation's chief copper supplier by 1886.
Copper is mined in many other areas of the United States, including Utah, Nevada and Tennessee. Copper is the state mineral for Utah.
# Isotopes
There are two stable isotopes, 63Cu and 65Cu, along with a couple dozen radioisotopes. The vast majority of radioisotopes have half lives on the order of minutes or less; the longest lived, 67Cu, has a half life of 61.8 hours. See also isotopes of copper.
# Notable characteristics
Copper has a high electrical and thermal conductivity, second only to silver among pure metals at room temperature.[6]
Copper is a reddish-coloured metal; it has its characteristic colour because of its band structure. In its liquefied state, a pure copper surface without ambient light appears somewhat greenish, a characteristic shared with gold. When liquid copper is in bright ambient light, it retains some of its pinkish luster.
Copper occupies the same family of the periodic table as silver and gold, since they each have one s-orbital electron on top of a filled electron shell. This similarity in electron structure makes them similar in many characteristics. All have very high thermal and electrical conductivity, and all are malleable metals.
## Corrosion
### Pure water and air
Copper does not react with water (H2O), but atmospheric oxygen reacts with it slowly at room temperature to form a layer of copper oxide on the metal surface. In contrast to the oxidation of iron by wet air, this oxide layer protects the underlying copper from further corrosion. A green layer of copper carbonate can often be seen on old copper roofs.
### Sulfide media
Copper metal does react with hydrogen sulfide and sulfide-containing solutions, and a series of different copper sulfides can form on the surface of the metal. The copper sulfide system is complex because many different sulfides exist. In sulfide media copper can corrode even without oxygen, since it becomes less noble than hydrogen; this can be observed in everyday life when copper surfaces tarnish after exposure to air containing sulfur compounds.
### Ammonia media
Copper does react with oxygen-containing ammonia solutions because the ammonia forms water-soluble copper complexes. The formation of these complexes causes the corrosion to become more thermodynamically favored than the corrosion of copper in an identical solution that does not contain the ammonia.
### Chloride media
Copper does react with a combination of oxygen and hydrochloric acid to form a series of copper chlorides. If copper(II) chloride (green/blue) is boiled with copper metal in the presence of little or no oxygen, white copper(I) chloride is formed (Cu + CuCl2 → 2 CuCl).
## Mechanical Properties
Copper can be tested in both single-crystal and polycrystalline form. In the single-crystal form the yield stress is high and the crystal undergoes a large amount of elastic deformation before entering the plastic region; behaviour within the plastic region is less predictable, and the stress level decreases significantly once necking begins. Polycrystalline copper consists of many small crystals (grains) of different orientations, and its plastic deformation is similar to that of mild steel. Copper is highly ductile and continues to elongate as stress is applied, which makes it well suited to wire drawing.
Numerous copper alloys exist, many with important historical and contemporary uses. Speculum metal and bronze are alloys of copper and tin. Brass is an alloy of copper and zinc. Cupronickel is an alloy of copper and nickel; Monel metal is a closely related nickel-copper alloy. While "bronze" usually refers to copper-tin alloys, it is also a generic term for any alloy of copper, such as aluminium bronze, silicon bronze, and manganese bronze.
## Germicidal effect
Copper is germicidal, via the oligodynamic effect. For example, brass doorknobs disinfect themselves of many bacteria within a period of eight hours.[8] This effect is useful in many applications.
# Occurrence and modern industry
According to the British Geological Survey, Chile was the top mine producer of copper in 2005, with at least a one-third share of world output, followed by the USA, Indonesia and Peru.
Copper can be found as native copper in mineral form. Minerals such as the sulfides: chalcopyrite (CuFeS2), bornite (Cu5FeS4), covellite (CuS), chalcocite (Cu2S) are sources of copper, as are the carbonates: azurite (Cu3(CO3)2(OH)2) and malachite (Cu2CO3(OH)2) and the oxide: cuprite (Cu2O).
Most copper ore is mined or extracted as copper sulfides from large open pit mines in porphyry copper deposits that contain 0.4 to 1.0 percent copper. Examples include: Chuquicamata in Chile and El Chino Mine in New Mexico. The average abundance of copper found within crustal rocks is approximately 68 ppm by mass, and 22 ppm by atoms.
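The two abundance figures quoted above are mutually consistent: converting a mass fraction to an atom fraction rescales by the ratio of the mean atomic mass of crustal material to that of copper. Taking a mean crustal atomic mass of roughly 21 g/mol (an illustrative assumption, not a figure from the article):

$$c_{\text{atoms}} \approx c_{\text{mass}} \times \frac{\bar{M}_{\text{crust}}}{M_{\text{Cu}}} \approx 68\ \text{ppm} \times \frac{21}{63.5} \approx 22\ \text{ppm}.$$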
The Intergovernmental Council of Copper Exporting Countries (CIPEC), defunct since 1992, once tried to play a similar role for copper as OPEC does for oil, but never achieved the same influence, not least because the second-largest producer, the United States, was never a member. Formed in 1967, its principal members were Chile, Peru, Zaire, and Zambia.
The copper price rose more than fivefold from its 60-year low of US$0.60 per pound in June 1999 to US$3.75 per pound in May 2006, after which it dropped to US$2.40 in February 2007 and then rebounded to US$3.50 in April 2007.[9]
The Earth has an estimated 61 years of copper reserves remaining.[10] The environmental analyst Lester Brown, however, has suggested copper might run out within 25 years, based on a reasonable extrapolation of 2% annual growth in consumption.[11]
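The gap between these two estimates is mostly a matter of the assumptions fed into the calculation. As a generic, illustrative piece of arithmetic (not taken from either cited source): if reserves equal R years of current production and consumption grows continuously at rate g, the reserves are exhausted after T years, where

$$\frac{e^{gT}-1}{g} = R \quad\Longrightarrow\quad T = \frac{\ln(1+gR)}{g}.$$

With R = 61 years and g = 0.02 per year this gives T = ln(2.22)/0.02, about 40 years, already well below the static figure; estimates as low as 25 years follow from different underlying reserve and consumption numbers.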
# Compounds
Common oxidation states of copper include the less stable copper(I) state, Cu+, and the more stable copper(II) state, Cu2+, which forms blue or blue-green salts and solutions. Under unusual conditions, a 3+ state and even an extremely rare 4+ state can be obtained. Using the old nomenclature for the naming of salts, copper(I) is called cuprous and copper(II) is cupric. The oxides of copper are mildly basic.
Copper(II) carbonate is green, which accounts for the distinctive appearance of weathered copper-clad roofs and domes on some buildings. Copper(II) sulfate forms a blue crystalline pentahydrate, perhaps the most familiar copper compound in the laboratory. Mixed with lime, it is used as a fungicide known as Bordeaux mixture.
There are two stable copper oxides, copper(II) oxide (CuO) and copper(I) oxide (Cu2O). Copper oxides are used to make yttrium barium copper oxide (YBa2Cu3O7-δ) or YBCO which forms the basis of many unconventional superconductors.
- Copper(I) compounds: copper(I) chloride, copper(I) bromide, copper(I) iodide, copper(I) oxide.
- Copper(II) compounds: copper(II) acetate, copper(II) carbonate, copper(II) chloride, copper(II) hydroxide, copper(II) nitrate, copper(II) oxide, copper(II) sulfate, copper(II) sulfide, copper(II) tetrafluoroborate, copper(II) triflate.
- Copper(III) compounds, rare: potassium hexafluorocuprate (K3CuF6)
- Copper(IV) compounds, extremely rare: caesium hexafluorocuprate (Cs2CuF6)
### Tests for copper(II) ion
Add aqueous sodium hydroxide. A blue precipitate of copper(II) hydroxide should form.
Ionic equation: Cu2+(aq) + 2OH-(aq) → Cu(OH)2(s)
The full equation shows that the reaction is due to hydroxide ions deprotonating the hexaaquacopper(II) complex: [Cu(H2O)6]2+(aq) + 2OH-(aq) → Cu(H2O)4(OH)2(s) + 2H2O(l)
Adding aqueous ammonia causes the same precipitate to form. It then dissolves upon adding excess ammonia, to form a deep blue ammonia complex, tetraamminecopper(II).
Ionic equation: Cu(OH)2(s) + 4NH3(aq) → [Cu(NH3)4]2+(aq) + 2OH-(aq)
A more sensitive test than the ammonia reaction is potassium ferrocyanide, which gives a brown precipitate with copper salts.
# Applications
Copper is malleable and ductile, a good conductor of heat and, when very pure, a good conductor of electricity.
The purity of copper is expressed as 4N for 99.99% pure or 7N for 99.99999% pure. The numeral gives the number of nines after the decimal point when expressed as a decimal (e.g. 4N means 0.9999, or 99.99%).
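The convention described above can be written compactly as a formula (this is just a restatement of the examples given, not an additional standard):

$$\text{purity}(N\text{N}) = 1 - 10^{-N}, \qquad 1 - 10^{-4} = 0.9999 = 99.99\%, \qquad 1 - 10^{-7} = 99.99999\%.$$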
It is used extensively, in products such as:
## Piping
- including, but not limited to, water supply piping.
## Electronics
- Copper wire.
- Electromagnets.
- Printed circuit boards.
- Lead free solder, alloyed with tin.
- Electrical machines, especially electromagnetic motors, generators and transformers.
- Electrical relays, electrical busbars and electrical switches.
- Vacuum tubes, cathode ray tubes, and the magnetrons in microwave ovens.
- Wave guides for microwave radiation.
- Integrated circuits, where copper is increasingly replacing aluminium because of its superior electrical conductivity.
- As a material in the manufacture of computer heat sinks, as a result of its superior heat dissipation capacity compared with aluminium.
## Architecture
- Copper has been used as water-proof roofing material since ancient times, giving many old buildings their greenish roofs and domes. Initially copper oxide forms, replaced by cuprous and cupric sulfide, and finally by copper carbonate. The final carbonate patina is highly resistant to corrosion.[12]
- Statuary: The Statue of Liberty, for example, contains 179,220 pounds (81.3 tonnes) of copper.
- Alloyed with nickel, e.g. in cupronickel and Monel, used as corrosion-resistant materials in shipbuilding.
- Watt's steam engine.
- Copper nails were used in making oast cowls.
## Household products
- Copper plumbing fittings and compression tubes.
- Doorknobs and other fixtures in houses.
- Roofing, guttering, and rainspouts on buildings.
- In cookware, such as frying pans.
- Most flatware (knives, forks, spoons) contains some copper (nickel silver).
- Sterling silver, if it is to be used in dinnerware, must contain a few percent copper.
- Copper water heating cylinders
## Coinage
- As a component of coins, often as cupronickel alloy.
- Coins in the following countries all contain copper: European Union (Euro),[13] United States,[14] United Kingdom (sterling),[15] Australia[16] and New Zealand.[17]
- Ironically, U.S. Nickels are 75.0% copper by weight and only 25.0% nickel.[14]
## Biomedical applications
- As a biostatic surface in hospitals, and to line parts of ships to protect against barnacles and mussels, originally used pure, but superseded by Muntz Metal. Bacteria will not grow on a copper surface because it is biostatic. Copper doorknobs are used by hospitals to reduce the transfer of disease, and Legionnaires' disease is suppressed by copper tubing in air-conditioning systems.
- Copper(II) sulfate is used as a fungicide and as algae control in domestic lakes and ponds. It is used in gardening powders and sprays to kill mildew.
- Copper-62-PTSM, a complex containing radioactive copper-62, is used as a Positron emission tomography radiotracer for heart blood flow measurements.
- Copper-64 can be used as a Positron emission tomography radiotracer for medical imaging. When complexed with a chelate it can be used to treat cancer through radiation therapy.
## Chemical applications
- Compounds, such as Fehling's solution, have applications in chemistry.
- As a component in ceramic glazes, and to color glass.
## Other
- Musical instruments, especially brass instruments and cymbals.
- Class D fire extinguishers, used in powder form to extinguish lithium fires by covering the burning metal and acting as a heat sink.
- Textile fibers to create antimicrobial protective fabrics.[18]
# Biological role
Copper is essential in all plants and animals. Copper is carried mostly in the bloodstream on a plasma protein called ceruloplasmin. When copper is first absorbed in the gut it is transported to the liver bound to albumin. Copper is found in a variety of enzymes, including the copper centers of cytochrome c oxidase and the enzyme superoxide dismutase (containing copper and zinc). In addition to its enzymatic roles, copper is used for biological electron transport. The blue copper proteins that participate in electron transport include azurin and plastocyanin. The name "blue copper" comes from their intense blue color arising from a ligand-to-metal charge transfer (LMCT) absorption band around 600 nm.
Most molluscs and some arthropods such as the horseshoe crab use the copper-containing pigment hemocyanin rather than iron-containing hemoglobin for oxygen transport, so their blood is blue when oxygenated rather than red.[19]
It is believed that zinc and copper compete for absorption in the digestive tract so that a diet that is excessive in one of these minerals may result in a deficiency in the other. The RDA for copper in normal healthy adults is 0.9 mg/day. On the other hand, professional research on the subject recommends 3.0 mg/day.[20] Because of its role in facilitating iron uptake, copper deficiency can often produce anemia-like symptoms.
## Toxicity
All copper compounds, unless otherwise known, should be treated as if they were toxic. Thirty grams of copper sulfate is potentially lethal in humans. The suggested safe level of copper in drinking water for humans varies depending on the source, but tends to be pegged at 1.5 to 2 mg/L. The DRI Tolerable Upper Intake Level for adults of dietary copper from all sources is 10 mg/day. In toxicity, copper can inhibit the enzyme dihydrophil hydratase, an enzyme involved in haemopoiesis.
Symptoms of copper poisoning are very similar to those produced by arsenic. Fatal cases are generally terminated by convulsions, palsy, and insensibility.
In cases of suspected copper poisoning, albumin is to be administered in whichever form can be most readily obtained, such as milk or egg whites. Vinegar should not be given. The inflammatory symptoms are to be treated on general principles, and so are the nervous symptoms.
A significant portion of the toxicity of copper comes from its ability to accept and donate single electrons as it changes oxidation state. This catalyzes the production of very reactive radical ions such as hydroxyl radical in a manner similar to fenton chemistry.[21] This catalytic activity of copper is used by the enzymes that it is associated with and is thus only toxic when unsequestered and unmediated. This increase in unmediated reactive radicals is generally termed oxidative stress and is an active area of research in a variety of diseases where copper may play an important but more subtle role than in acute toxicity.
An inherited condition called Wilson's disease causes the body to retain copper, since it is not excreted by the liver into the bile. This disease, if untreated, can lead to brain and liver damage. In addition, studies have found that people with mental illnesses such as schizophrenia had heightened levels of copper in their systems. However it is unknown at this stage whether the copper contributes to the mental illness, whether the body attempts to store more copper in response to the illness, or whether the high levels of copper are the result of the mental illness.
Too much copper in water has also been found to damage marine life. The observed effect of these higher concentrations on fish and other creatures is damage to gills, liver, kidneys, and the nervous system. It also interferes with the sense of smell in fish, thus preventing them from choosing good mates or finding their way to mating areas.
## Miscellaneous hazards
The metal, when powdered, is a fire hazard. At concentrations higher than 1 mg/L, copper can stain clothes and items washed in water.
Coptis
Coptis (Gold Thread or Goldenthread) is a genus of between 10–15 species of flowering plants in the family Ranunculaceae, native to Asia and North America.
- Coptis aspleniifolia
- Coptis chinensis
- Coptis deltoidea
- Coptis groenlandica
- Coptis japonica
- Coptis laciniata
- Coptis occidentalis
- Coptis omeiensis
- Coptis quinquefolia
- Coptis quinquesecta
- Coptis teeta
- Coptis trifolia
# Uses
Coptis teeta is used as a medicinal herb in the Himalayan regions of India, where it serves as a bitter tonic for dyspepsia. In Chinese medicine it is also used to help with insomnia.
Made into a paste, salve, powder, or infusion, it is said to improve digestion, restore appetite, and relieve inflammation of the stomach. It is also employed to assist the treatment of alcoholism.
Dermis
The dermis is the layer of skin beneath the epidermis that consists of connective tissue and cushions the body from stress and strain; it contains the skin's main blood vessels. The dermis is tightly connected to the epidermis by a basement membrane, and harbors many nerve endings that provide the sense of touch and heat. It contains the hair follicles, sweat glands, sebaceous glands, apocrine glands, and blood vessels. The blood vessels in the dermis provide nourishment and waste removal to its own cells, as well as to the stratum basale of the epidermis.
# Structure
The dermis is structurally divided into two areas: a superficial area adjacent to the epidermis, called the papillary region, and a deep thicker area known as the reticular region.
## Papillary region
The papillary region is composed of loose areolar connective tissue. It is named for its fingerlike projections called papillae, which extend toward the epidermis. The papillae provide the dermis with a "bumpy" surface that interdigitates with the epidermis, strengthening the connection between the two layers of skin.
In the palms, fingers, soles, and toes, the influence of the papillae projecting into the epidermis forms contours in the skin's surface. These are called friction ridges, because they help the hand or foot to grasp by increasing friction. Friction ridges occur in patterns (see fingerprint) that are genetically determined and are therefore unique to the individual, making it possible to use fingerprints or footprints as a means of identification.
## Reticular region
The reticular region lies deep to the papillary region and is usually much thicker. It is composed of dense irregular connective tissue, and receives its name from the dense concentration of collagenous, elastic, and reticular fibers that weave throughout it. These protein fibers give the dermis its properties of strength, extensibility, and elasticity.
Located within the reticular region are also the hair roots, sebaceous glands, sweat glands, receptors, nails, and blood vessels.
Tattoo ink is injected into the dermis. Stretch marks are also located in the dermis.
Cornea
The cornea is the transparent front part of the eye that covers the iris, pupil, and anterior chamber, providing most of an eye's optical power. Together with the lens, the cornea refracts light and so helps the eye to focus, accounting for roughly two-thirds of the eye's focusing power, with the lens providing most of the remainder. The cornea contributes more to the total refraction than the lens does, but, whereas the curvature of the lens can be adjusted to "tune" the focus depending upon the object's distance, the curvature of the cornea is fixed.
The cornea has unmyelinated nerve endings sensitive to touch, temperature and chemicals; a touch of the cornea causes an involuntary reflex to close the eyelid. Because transparency is of prime importance, the cornea has no blood vessels; it receives nutrients via diffusion from the tear fluid on the outside and the aqueous humour on the inside, and also from neurotrophins supplied by the nerve fibres that innervate it. In humans, the cornea has a diameter of about 11.5 mm and a thickness of 0.5 to 0.6 mm in the center and 0.6 to 0.8 mm at the periphery. Transparency, avascularity, and immunologic privilege make the cornea a highly specialized tissue.
In humans, the refractive power of the cornea is approximately 43 dioptres, roughly two-thirds of the eye's total refractive power.
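For orientation, optical power in dioptres is the reciprocal of focal length in metres, so the figures above imply (an illustrative calculation, not from the original article):

$$f = \frac{1}{P} = \frac{1}{43\ \text{D}} \approx 0.023\ \text{m} = 23\ \text{mm}, \qquad P_{\text{eye}} \approx \frac{43\ \text{D}}{2/3} \approx 65\ \text{D}.$$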
Medical terms related to the cornea often start with the prefix "kerat-".
# Layers
The human cornea, like that of other primates, has five layers. The corneas of cats, dogs, and other carnivores have only four. From the anterior to posterior they are:
- Corneal epithelium: a thin epithelial multicellular layer of fast-growing and easily-regenerated cells, kept moist with tears. Irregularity or edema of the corneal epithelium disrupts the smoothness of the air-tear film interface, the most significant component of the total refractive power of the eye, thereby reducing visual acuity.
- Bowman's layer (also erroneously known as the anterior limiting membrane, when in fact it is not a membrane but a condensed layer of collagen): a tough layer that protects the corneal stroma, consisting of irregularly-arranged collagen fibers. This layer is absent in carnivores.
- Corneal stroma (also substantia propria): a thick, transparent middle layer, consisting of regularly-arranged collagen fibers along with sparsely populated keratocytes. The corneal stroma consists of approximately 200 layers of type I collagen fibrils. There are 2 theories of how transparency in the cornea comes about:
- The lattice arrangement of the collagen fibrils in the stroma: the light scattered by individual fibrils is cancelled by destructive interference from the light scattered by other fibrils. (Maurice)
- The spacing of neighbouring collagen fibrils in the stroma must be less than 200 nm for there to be transparency. (Goldman and Benedek)
- Descemet's membrane (also posterior limiting membrane): a thin acellular layer that serves as the modified basement membrane of the corneal endothelium.
- Corneal endothelium: a simple squamous or low cuboidal monolayer of mitochondria-rich cells responsible for regulating fluid and solute transport between the aqueous and corneal stromal compartments. (The term endothelium is a misnomer here. The corneal endothelium is bathed by aqueous humour, not by blood or lymph, and has a very different origin, function, and appearance from vascular endothelia.)
# Innervation
The cornea is one of the most sensitive tissues of the body; it is densely innervated with sensory nerve fibres via the ophthalmic division of the trigeminal nerve, by way of 70 to 80 long ciliary nerves and short ciliary nerves.
The nerves enter the cornea via three levels, scleral, episcleral and conjunctival. Most of the bundles give rise by subdivision to a network in the stroma, from which fibres supply the different regions. The three networks are midstromal, subepithelial/Bowman's layer, and epithelium. The receptive fields of each nerve ending are very large, and may overlap.
Corneal nerves have been observed to terminate in a logarithmic spiral pattern.
# Diseases and disorders
# Treatment and management of corneal diseases and disorders
## Surgical procedures involving the cornea
Various refractive eye surgery techniques change the shape of the cornea in order to reduce the need for corrective lenses or otherwise improve the refractive state of the eye. In many of the techniques used today, reshaping of the cornea is performed by photoablation using the excimer laser.
If the corneal stroma develops visually significant opacity, irregularity, or edema, a cornea of a deceased donor can be transplanted. Because there are no blood vessels in the cornea, there are also few problems with rejection of the new cornea.
There are also synthetic corneas (keratoprostheses) in development. Most are merely plastic inserts, but some are composed of biocompatible synthetic materials that encourage tissue ingrowth into the synthetic cornea, thereby promoting biointegration.
## Non-surgical procedures involving the cornea
Orthokeratology is a method using specialized hard or rigid gas-permeable contact lenses to transiently reshape the cornea in order to improve the refractive state of the eye or reduce the need for eyeglasses and contact lenses.
Corojo
Corojo is a type of tobacco, primarily used in the making of cigars, originally grown in the Vuelta Abajo region of Cuba.
# Origin
Corojo was originally developed and grown by Diego Rodriguez at his farm or vega, Santa Ines del Corojo and takes its name from the farm. It was used as a wrapper extensively for many years on Cuban cigars, but its susceptibility to various diseases, Blue Mold in particular, caused the Cuban genetic engineers to develop various hybrid forms that would not only be disease-resistant, but would also display excellent wrapper qualities.
# Hybrid or Pure?
Today, both hybrid and pure strains of Corojo are used in the production of cigars. Most of the pure Corojo leaf is currently grown in Honduras' Jamastran Valley, while the hybrid varieties are more widely grown and used.
# Reference
The basis for this summary article is Bernardo, Mark, "A Tale of Two Seeds", Smoke Magazine, Spring 2004 issue (vol. 9, issue 2).
Uterus
# Overview
The uterus or womb is the major female reproductive organ of most mammals, including humans. One end, the cervix, opens into the vagina; the other is connected on both sides to the fallopian tubes. The term uterus is commonly used within the medical and related professions, whilst womb is in more common usage. The plural of uterus is uteri.
# Function
The main function of the uterus is to accept a fertilized ovum, which becomes implanted in the endometrium and derives nourishment from blood vessels that develop exclusively for this purpose. The fertilized ovum becomes an embryo, develops into a fetus and gestates until childbirth. Because of anatomical barriers such as the pelvis, the uterus is pushed partially into the abdomen as it expands during pregnancy. Even in pregnancy the mass of a human uterus amounts to only about a kilogram (2.2 pounds).
# Forms in mammals
In mammals, the four main forms in which it is found are:
- Duplex: two wholly separate uteri, each with its own cervix
- Bipartite: a single cervix with a uterine body that is largely divided in two internally
- Bicornuate: two uterine horns that join a single uterine body and cervix
- Simplex: a single undivided uterus, as in humans and other higher primates
# Anatomy
The uterus is located inside the pelvis immediately dorsal (and usually somewhat rostral) to the urinary bladder and ventral to the rectum. Outside of pregnancy, its size in humans is several centimeters in diameter.
## Regions
From outside to inside, the path to the uterus is as follows:
- Vulva
- Vagina
- Cervix uteri - "neck of uterus"
  - External orifice of the uterus
  - Canal of the cervix
  - Internal orifice of the uterus
- Corpus uteri - "body of uterus"
  - Cavity of the body of the uterus
  - Fundus (uterus)
## Layers
The layers, from innermost to outermost, are as follows:
- Endometrium: the inner mucosal lining, the functional layer of which is shed during menstruation
- Myometrium: the thick middle layer of smooth muscle
- Perimetrium: the outer serosal covering
## Support
The uterus is primarily supported by the pelvic diaphragm and the urogenital diaphragm. Secondarily, it is supported by ligaments and the peritoneum (broad ligament of the uterus).
## Major ligaments
It is held in place by several peritoneal ligaments, of which the following are the most important (there are two of each):
- Uterosacral ligaments, passing posteriorly from the cervix to the sacrum
- Cardinal ligaments, passing laterally from the cervix to the pelvic side walls
- Pubocervical ligaments, passing anteriorly from the cervix toward the pubis
Other named ligaments near the uterus, i.e. the broad ligament, the round ligament, the suspensory ligament of the ovary, the infundibulopelvic ligament, have no role in the support of the uterus.
## Position
Under normal circumstances the uterus is both "anteflexed" and "anteverted." The meanings of these terms are as follows:
- Anteverted: the uterus as a whole is tipped forward, toward the bladder, relative to the axis of the vagina
- Anteflexed: the body of the uterus is bent forward at its junction with the cervix
# Development
The bilateral Müllerian ducts form during early fetal life. In males, MIF secreted from the testes leads to their regression. In females these ducts give rise to the Fallopian tubes and the uterus. In humans the lower segments of the two ducts fuse to form a single uterus; however, in cases of uterine malformation this development may be disturbed. The different uterine forms in various mammals are due to various degrees of fusion of the two Müllerian ducts.
# Pathology
Some pathological states include:
- Prolapse of the uterus
- Carcinoma of the cervix – malignant neoplasm
- Carcinoma of the uterus – malignant neoplasm
- Ectopic pregnancy
- Fibroids – benign neoplasms
- Adenomyosis – ectopic growth of endometrial tissue within the myometrium
- Pyometra
- Uterine malformation
- Uterine Didelphys – split or doubled vagina/uterus
- Retroverted uterus
- Rokitansky syndrome
- Myoma
Corset
A corset is a garment worn to mold and shape the torso into a desired shape for aesthetic or medical purposes (either for the duration of wearing it, or with a more lasting effect).
Both men and women are known to wear corsets.
Many garments sold as "corsets" in recent years are not corsets in the traditional sense: the term has been borrowed by the fashion industry for tops which, to varying degrees, mimic the look of traditional corsets without actually acting as one. Such tops often feature lacing or boning and are fairly tight-fitting, but they have very little if any effect on the shape of the wearer's body; they are frequently seen in stores catering to fans of gothic fashion or emo fashion. Genuine corsets, by contrast, are usually made by a corsetmaker and should ideally be fitted especially for the wearer.
# Etymology
The word corset is a diminutive of the Old French word cors ("body"), which itself derives from the Latin corpus, meaning body.
# Uses
## Fashion
The most common and well-known use of corsets is to slim the body and make it conform to a fashionable silhouette. For women this most frequently emphasizes a curvy figure, by reducing the waist and thereby exaggerating the bust and hips. However, in some periods, corsets have been worn to achieve a tubular straight-up-and-down shape, which involves minimizing the bust and hips.
For men, corsets are more customarily used to slim the figure. However, there was a period from around 1820 to 1835 when an hourglass figure (a small, nipped-in look to the waist) was also desirable for men; this was sometimes achieved by wearing a corset.
An overbust corset encloses the torso, extending from just under the arms to the hips. An underbust corset begins just under the breasts and extends down to the hips. Some corsets extend over the hips and, in very rare instances, reach the knees. A shorter kind of corset, which covers the waist area (from low on the ribs to just above the hips), is called a 'waist cincher'. A corset may also include garters to hold up stockings (alternatively a separate garter belt may be worn for that).
Normally a corset supports the visible dress, and spreads the pressure from large dresses, such as the crinoline and bustle. Sometimes a corset cover is used to protect outer clothes from the corset and to smooth the lines of the corset.
## Medical
People with spinal problems such as scoliosis or with internal injuries may be fitted with a form of corset in order to immobilize and protect the torso. However, this may be harmful if not medically indicated.
Andy Warhol, who was shot in 1968 and never fully recovered, wore a corset for the rest of his life.
## Fetish
Aside from fashion and medical uses, corsets are also used in sexual fetishism, most notably in BDSM activities. In BDSM, a submissive can be forced to wear a corset which would be laced very tight and give some degree of restriction to the wearer. A dominant can also wear a corset, but for entirely different reasons such as aesthetics.
# Construction
Corsets are typically constructed of a flexible material (like cloth, particularly coutil, or leather) stiffened with boning (also called ribs or stays) inserted into channels in the cloth or leather. In the 19th century, steel and whalebone were favored for the boning. Featherbone was used as a less expensive substitute for whalebone and was constructed from flattened strips of goose quill woven together with yarn to form a long strip (Doyle, 1997:232). Plastic is now the most commonly used material for lightweight corsets, whereas spring or spiral steel is preferred for stronger corsets. Other materials used for boning include ivory, wood, and cane. (By contrast, a girdle is usually made of elasticized fabric, without boning.)
The craft of corset construction is known as corsetry, as is the general wearing of them. Someone who makes corsets is a corsetier or corsetière (French terms for a man and for a woman, respectively), or sometimes simply a corsetmaker. (The word corsetry is sometimes also used as a collective plural form of corset.)
Corsets are held together by lacing, usually (though not always) at the back. Tightening or loosening the lacing produces corresponding changes in the firmness of the corset. Depending on the desired effect and time period, corsets can be laced from the top down, from the bottom up or two laces were used to lace up from the bottom and down from the top to meet in the middle. It is difficult — although not impossible — for a back-laced corset-wearer to do his or her own lacing. In the Victorian heyday of corsets, a well-to-do woman would be laced by her maid, and a gentleman by his valet. However, many corsets also had a buttoned or hooked front opening called a busk. Once the lacing was adjusted comfortably, it was possible to leave the lacing as adjusted and take the corset on and off using the front opening (this method can potentially damage the busk if the lacing is not significantly loosened beforehand). Self-lacing is also almost impossible with tightlacing, which strives for the utmost possible reduction of the waist. Modern tightlacers, lacking servants, are usually laced by spouses and partners.
# Waist reduction
By wearing a tightly-laced corset for extended periods, known as tightlacing, men and women can learn to tolerate extreme waist constriction and eventually reduce their natural waist size. Tightlacers usually aim for waists of 40 to 43 centimeters (16 to 17 inches). Until 1998, the Guinness Book of World Records listed Ethel Granger as having the smallest waist on record at 32.5 centimeters (13 inches). After 1998, the category changed to "smallest waist on a living person" and Cathie Jung took the title with a 37.5 centimeter (15 inch) waist. Other women, such as Polaire, also have achieved such reductions.
These are extreme cases, however. Corsets were and are still usually designed for support, with freedom of body movement an important consideration in their design. Present day corset-wearers usually tighten the corset just enough to reduce their waists by 5 to 10 centimeters (2 to 4 inches); it is very difficult for a slender woman to achieve as much as 15 centimeters (6 inches), although larger women can do so more easily.
# Corset comfort
In the past, a woman's corset was usually worn over a garment called a chemise or shift, a sleeveless low-necked gown made of washable material (usually cotton or linen). It absorbed perspiration and kept the corset and the gown clean. In modern times, an undershirt or corset liner may be worn.
Moderate lacing is not incompatible with vigorous activity. Indeed, during the second half of the nineteenth century, when corset wearing was common, there were sport corsets specifically designed to wear while bicycling, playing tennis, or horseback riding, as well as for maternity wear.
Many people now believe that all corsets are uncomfortable and that wearing them restricted women's lives, citing Victorian literature devoted to sensible or hygienic dress. However, these writings were most apt to protest against the misuse of corsets for tightlacing; they were less vehement against corsets per se. Many reformers recommended "Emancipation bodices", which were essentially tightly-fitted vests, like full-torso corsets without boning. See Victorian dress reform.
Some modern day corset-wearers will testify that corsets can be comfortable, once one is accustomed to wearing them. A properly fitted corset should be comfortable. Women active in the historical reenactment groups (such as Society for Creative Anachronism) commonly wear corsets as part of period costume, without complaint.
# Modern history
The corset fell from fashion in the 1920s in Europe and America, replaced by girdles and elastic brassieres, but survived as an article of costume. Originally an item of lingerie, the corset has become a popular item of outerwear in the fetish, BDSM and goth subcultures.
In the fetish and BDSM literature, there is often much emphasis on tightlacing. In this case, the corset may still be underwear rather than outerwear.
There was a brief revival of the corset in the late 1940s and early 1950s, in the form of the waist cincher sometimes called a "waspie". This was used to give the hourglass figure dictated by Christian Dior's 'New Look'. However, use of the waist cincher was restricted to haute couture, and most women continued to use girdles. This revival was brief, as the New Look gave way to a less dramatically-shaped silhouette.
Since the late 1980s, the corset has experienced periodic revivals, which have usually originated in haute couture and which have occasionally trickled through to mainstream fashion. These revivals focus on the corset as an item of outerwear rather than underwear. The strongest of these revivals was seen in the Autumn 2001 fashion collections and coincided with the release of the film Moulin Rouge!, the costumes for which featured many corsets as characteristic of the era.
Similarly other films have used these garments as costume features, generally to suggest a period effect, as in Van Helsing where Anna Valerious (Kate Beckinsale) wears an ornate underbust corset as part of her costume. Sometimes this is used for humorous purposes, as when in Pirates of the Caribbean: The Curse of the Black Pearl Elizabeth Swann (Keira Knightley) almost suffocates from wearing a tight corset. One distinctive feature has been to portray them in combination with catsuits, as in Star Trek: Voyager where Seven of Nine (Jeri Ryan) throughout the series wears catsuits with contained built-in corsets, or Underworld, where Selene (Kate Beckinsale) wears a black leather corset over matching latex catsuit.
The majority of garments sold as "corsets" (or sometimes "corset tops") during these recent revivals cannot really be counted as corsets at all, in the traditional sense of the word. While they often feature lacing and boning and generally mimic a historical style of corsets, they have little or no effect on the shape of the wearer's body; traditional corsets generally require custom fitting by a tailor who specialises in corsetry.
# Special types
There are some special types of corsets and corset-like devices which incorporate boning.
## Corset dress and Post-Edwardian long line corset
A corset dress (also known as hobble corset because it produces similar restrictive effects to a hobble skirt) is a long corset. It is like an ordinary corset, but it is long enough to cover the legs, partially or totally. It thus looks like a dress, hence the name. A person wearing a corset dress can have great difficulty in walking up and down the stairs (especially if wearing high-heeled footwear) and may be unable to sit down if the boning is too stiff.
## Neck corset
A neck corset is a type of posture collar incorporating stays and it is generally not considered to be a corset.
## Chastity corset
A chastity corset is a corset that incorporates a chastity device, usually a Florentine-design chastity belt. Such corsets are usually lockable. The chastity device can either be attachable to the bottom of the corset or form an integral part of it. In its lower part, a chastity corset of the former type resembles a bodysuit without a crotch opening.
# Advantages and disadvantages of corsets
There are several advantages and disadvantages to wearing a corset.
## Advantages
### Health benefits
- Corsets promote good posture.
- Corsets can reduce pain and improve function for people with back problems or other muscular/skeletal disorders, such as Lordosis.
- Some large-breasted women find corsets more comfortable than brassieres, because the weight of the breasts is carried by the whole corset rather than the brassiere's shoulder straps. Straps can chafe or cut the skin. However, if a bra is properly fitted, the weight of the breasts is carried by the band and not by the shoulders, thus eliminating this problem for even women with very large breasts.
- Some large-breasted women find some corsets more comfortable and practical than brassieres, because a corset can support the lordosis of the back (a common postural problem in large-breasted women), and because it leaves the shoulders free. Some also find the broad shoulder straps of large brassieres unsightly.
### Personal, social and aesthetic advantages
- Corsets can give a straight masterful posture.
- The straight posture accentuates the bosom.
- Corsets can instantly reduce the waistline by 5-10cm (2-4").
- Corsets can spread the weight of big gowns.
- Corsets can signal social status, setting the wearer apart from other people. Historically, the upper classes wore corsets to distinguish themselves from the lower classes; today, some subcultures wear corsets as a statement against conformity.
- Some corset-wearers enjoy the feeling of being "hugged" by the corset.
- Due to their tightness and close proximity to the body, corsets can make the wearer feel very warm. They have historically been worn in cooler climates.
- Some corset-wearers believe the shallow breathing imposed by the garment may charm men.
### Long-term advantages
- The abdominal pressure maintained by frequent corset use can help wearers reduce body fat by inhibiting the appetite without conscious dieting, slimming drugs, or cosmetic surgery.
- Waist training with corsets can reduce the waistline by 18 cm (7") or more. See: Tightlacing
## Disadvantages
### Health risks
- Glénard's disease is the most common illness caused by prolonged corset use. It is characterized by lack of abdominal muscle tone and visceral displacement.
- Wearing tight-laced corsets over a long period of time may cause the lower ribs (floating ribs) to become deformed and pushed inwards. This can lead to organ failure, dehydration, or broken ribs.
- Improper corset use may deform the stomach and liver.
- Developing children are far more vulnerable to the potential health risks of corset use. As such, corsets should only be worn by fully-formed adults, never by growing children.
### Difficulties finding a corset
- Low-quality corsets. Finding a well-fitting, good-quality corset among the many imitations can be challenging. The potential wearer must try on and inspect any corset being considered for purchase for quality and fit. An ill-fitting corset will chafe, impede digestion, and ultimately cause damage to the ribs and pinch nerves.
### The difficulties in getting used to corsets
- Fainting. If the wearer is unaccustomed to shallow breathing, the breathing muscles soon tire and work too slowly, severely reducing the oxygen supply. Would-be wearers must build up their breathing gradually.
- It is important that the corset lengthens the waist, like a redresseur corset, for better shallow breathing. A waist cincher is too short to accomplish this.
# Beginning to wear a corset
Corsets must be broken in, or molded to the owner's body, for a proper fit and to reduce stress on the seams that could otherwise lead to ripping. A corset must mold itself to the body of the wearer, so buying a custom corset is recommended. It takes about a full day for a corset to mold to the wearer's body. The process begins by lacing the corset on loosely and then tightening the laces every few hours; this allows the corset to gradually mold to the body using body heat, resulting in a better-feeling corset. The wearer may even need to take the corset off and let it cool before resuming, to help it mold better. It is highly inadvisable to wear someone else's corset, as it has molded to their body and was made to fit them; doing so may cause pain, distort the corset's shape, and leave it uncomfortable for the original wearer. Someone new to corsets may need up to a week to feel fully comfortable in one. A corset should never hurt, but it may take up to a week before it stops feeling uncomfortable.
## Corsets for beginners
Corsets for beginners (also known as starter or beginner corsets) should be easy for someone who has never been corseted to adjust to, and should hold the ribs in the correct position. Three types of corsets are recommended for beginners:
- The common corset, which extends from the hip (close to the pubis) and has a wasp waist. All corsets from the Spirella Co. are of this type.
- The underbust hourglass corset for tightlacing, with a waist reduction of no more than 4" unless the wearer's initial waist is larger than 38", in which case a 6" reduction is acceptable. However, only a corset with a short wasp waist is suitable for a beginner.
- Historical corsets designed specifically for beginners, such as pairs of stays and redresseur corsets. Redresseur corsets fell out of fashion in 1919.
To be avoided by beginners:
- Waist cinchers and waist training belts are not recommended, as they do not offer proper support of the stomach.
- Many historical corsets were designed on the assumption that the wearer had used corsets for years, and so are harmful for beginners. The wasp waist in these corsets is too long, forcing the ribs to bend downward rather than upward as they should. Fashionable women of the past had long waists, longer than modern natural waists.
# References and further reading
- Doyle, R. Waisted Efforts: An Illustrated Guide to Corset Making. Sartorial Press Publications, 1997. ISBN 0-9683039-0-0
- Steele, Valerie. The Corset: A Cultural History. Yale University Press, 2001. ISBN 0-300-09953-3
- Utley, Larry, and Autumn Carey-Adamme. Fetish Fashion: Undressing the Corset. Green Candy Press, 2002. ISBN 1-931160-06-6
- Waugh, Norah. Corsets and Crinolines. Routledge, 1990. ISBN 0-87830-526-2
Cortef
Cortef is a steroid tablet containing hydrocortisone, which is a glucocorticoid. Glucocorticoids are adrenocortical steroids, both naturally occurring and synthetic, which are readily absorbed from the gastrointestinal tract. Hydrocortisone USP is a white, odorless, crystalline powder with a melting point of about 215 degrees Celsius. It is slightly soluble in water and in ether, and is sparingly soluble in acetone and alcohol. It is also slightly soluble in chloroform. The chemical name for hydrocortisone is pregn-4-ene-3,20-dione,11,17,21-trihydroxy-(11b)-. Its molecular weight is 362.46 g/mol.
Cortef tablets are available for oral administration in three strengths: 5 mg, 10 mg, or 20 mg of hydrocortisone. The inactive ingredients include calcium stearate, cornstarch, lactose, mineral oil, sorbic acid, and sucrose. Naturally occurring glucocorticoids (hydrocortisone and cortisone), which also have salt-retaining properties, are frequently used as replacement therapy in adrenocortical deficiency states. Their synthetic analogs are primarily used for their potent anti-inflammatory effects in disorders of many organ systems.
# Indications and Usage
Cortef is used for the following:
1. Endocrine disorders
- Primary or secondary adrenocortical insufficiency
- Congenital adrenal hyperplasia
- Nonsuppurative thyroiditis
- Hypercalcemia associated with cancer
2. Rheumatic Disorders
- As adjunctive therapy for short-term administration in:
- Psoriatic arthritis
- Rheumatoid arthritis
- Ankylosing spondylitis
- Acute and subacute bursitis
- Acute nonspecific tenosynovitis
- Acute gouty arthritis
- Post-traumatic osteoarthritis
- Epicondylitis
3. Collagen Diseases
- During an exacerbation or as maintenance therapy in selected cases of:
- Systemic lupus erythematosus
- Systemic dermatomyositis
- Acute rheumatic carditis
# Notes and references
- http://www.pfizer.com/pfizer/download/uspi_cortef.pdf (pdf)
- http://www.drugs.com/cons/Cortef.html
Kidney
The kidneys are complex organs with numerous biological roles. Their primary role is to maintain the homeostatic balance of bodily fluids. They primarily do this by filtering and secreting metabolites (such as urea) and minerals from the blood and excreting them, along with water, as urine. Because the kidneys are poised to sense plasma concentrations of compounds such as sodium, potassium, hydrogen ion, oxygen, and glucose, they are important regulators of blood pressure, glucose metabolism, and erythropoiesis.
The medical field that studies the kidneys and diseases of the kidney is called nephrology. The prefix nephro- meaning kidney is from the Ancient Greek word nephros (νεφρός); the adjective renal meaning related to the kidney is from Latin rēnēs, meaning kidneys.
In humans, the kidneys are located in the posterior part of the abdomen. There is one on each side of the spine; the right kidney sits just below the liver, the left below the diaphragm and adjacent to the spleen. Above each kidney is an adrenal gland (also called the suprarenal gland). The asymmetry within the abdominal cavity caused by the liver results in the right kidney being slightly lower than the left one while the left kidney is located slightly more medial.
The kidneys are retroperitoneal. They are approximately at the vertebral level T12 to L3. The upper parts of the kidneys are partially protected by the eleventh and twelfth ribs, and each whole kidney is surrounded by two layers of fat (the perirenal and pararenal fat) which help to cushion it. Congenital absence of one or both kidneys, known as unilateral or bilateral renal agenesis, can occur.
# Anatomy
In a normal human adult, each kidney is about 10 cm long, 5.5 cm in width and about 3 cm thick, weighing 150 grams. Together, kidneys weigh about 0.5% of a person's total body weight. The kidneys are "bean-shaped" organs, and have a concave side facing inwards (medially). On this medial aspect of each kidney is an opening, called the hilum, which admits the renal artery, the renal vein, nerves, and the ureter.
The outer portion of the kidney is called the renal cortex, which sits directly beneath the kidney's loose connective tissue/fibrous capsule. Deep to the cortex lies the renal medulla, which is divided into 10-20 renal pyramids in humans. Each pyramid together with the associated overlying cortex forms a renal lobe. The tip of each pyramid (called a papilla) empties into a calyx, and the calices empty into the renal pelvis. The pelvis transmits urine to the urinary bladder via the ureter. People are born with two kidneys but are able to live with only one.
The poles are the highest and lowest points of the kidney. Since the kidneys lie at different heights, the upper pole of the right kidney is at the same level as the hilum of the left kidney. This also happens to be at the same level as the transpyloric plane.
## Blood supply
Each kidney receives its blood supply from a renal artery, the two of which branch from the abdominal aorta. Upon entering the hilum of the kidney, the renal artery divides into smaller interlobar arteries situated between the renal papillae. At the outer medulla, the interlobar arteries branch into arcuate arteries, which course along the border between the renal medulla and cortex, giving off still smaller branches, the cortical radial arteries (sometimes called interlobular arteries). Branching off these cortical arteries are the afferent arterioles supplying the glomerular capillaries, which drain into efferent arterioles. Efferent arterioles divide into peritubular capillaries that provide an extensive blood supply to the cortex. Blood from these capillaries collects in renal venules and leaves the kidney via the renal vein. Efferent arterioles of glomeruli closest to the medulla (those that belong to juxtamedullary nephrons) send branches into the medulla, forming the vasa recta. Blood supply is intimately linked to blood pressure.
## Innervation
The kidney is innervated by the renal and ureteric nerves, which arise from the renal plexus and carry sympathetic, parasympathetic, and visceral afferent fibers. The renal plexus, in turn, is innervated by the thoracic splanchnic nerves, especially the caudal ones.
## Nephron
The basic functional unit of the kidney is the nephron, of which there are more than a million within the cortex and medulla of each normal adult human kidney. Nephrons regulate water and soluble matter (especially electrolytes) in the body by first filtering the blood under pressure, and then reabsorbing some necessary fluid and molecules back into the blood while secreting other, unneeded molecules. Reabsorption and secretion are accomplished with both cotransport and countertransport mechanisms established in the nephrons and associated collecting ducts.
## Collecting duct system
The fluid flows from the nephron into the collecting duct system. This segment of the nephron is crucial to the process of water conservation by the organism. In the presence of antidiuretic hormone (ADH; also called vasopressin), these ducts become permeable to water and facilitate its reabsorption, thus concentrating the urine and reducing its volume. Conversely, when the organism must eliminate excess water, such as after excess fluid drinking, the production of ADH is decreased and the collecting tubule becomes less permeable to water, rendering urine dilute and abundant. Failure of the organism to decrease ADH production appropriately, a condition known as syndrome of inappropriate ADH (SIADH), may lead to water retention and dangerous dilution of body fluids, which in turn may cause severe neurological damage. Failure to produce ADH (or inability of the collecting ducts to respond to it) may cause excessive urination, called diabetes insipidus (DI).
A second major function of the collecting duct system is the maintenance of acid-base homeostasis.
After being processed along the collecting tubules and ducts, the fluid, now called urine, is drained into the bladder via the ureter, to be finally excreted from the body.
# Functions
## Excretion of waste products
The kidneys excrete a variety of waste products produced by metabolism, including the nitrogenous wastes urea (from protein catabolism) and uric acid (from nucleic acid metabolism), along with excess water.
## Homeostasis
The kidney is one of the major organs involved in whole-body homeostasis. Among its homeostatic functions are acid-base balance, regulation of electrolyte concentrations, control of blood volume, and regulation of blood pressure. The kidneys accomplish these homeostatic functions independently and through coordination with other organs, particularly those of the endocrine system. The kidney communicates with these organs through hormones secreted into the bloodstream.
### Acid-base balance
The kidneys regulate the pH, mineral ion concentration, and water composition of the blood, in part by eliminating hydrogen (H+) ions.
By exchanging hydronium and hydroxyl ions, the kidney maintains the blood plasma at a slightly alkaline pH of 7.4. Urine, on the other hand, may be acidic (as low as pH 5) or alkaline (up to pH 8).
The pH is maintained through four main protein transporters: NHE3 (a sodium-hydrogen exchanger), V-type H-ATPase (an isoform of the hydrogen ATPase), NBC1 (a sodium-bicarbonate cotransporter) and AE1 (an anion exchanger which exchanges chloride for bicarbonate). Due to the polar alignment of cells in the renal epithelia NHE3 and the H-ATPase are exposed to the lumen (which is essentially outside the body), on the apical side of the cells, and are responsible for excreting hydrogen ions (or protons). Conversely, NBC1 and AE1 are on the basolateral side of the cells, and allow bicarbonate ions to move back into the extracellular fluid and thus are returned to the blood plasma.
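As a rough quantitative illustration (using the standard Henderson-Hasselbalch relationship, which is not part of the original article text), plasma pH can be written in terms of the bicarbonate concentration handled by these renal transporters and the carbon dioxide tension handled by the lungs:
$$\mathrm{pH} = 6.1 + \log_{10}\!\left(\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}}\right)$$
With typical values of about 24 mmol/L bicarbonate and a PCO2 of 40 mmHg, this gives pH ≈ 6.1 + log10(24/1.2) = 6.1 + log10(20) ≈ 7.4, consistent with the slightly alkaline plasma pH quoted above.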
### Blood pressure
Sodium ions are controlled in a homeostatic process involving aldosterone which increases sodium ion reabsorption in the distal convoluted tubules.
When blood pressure becomes low, a proteolytic enzyme called renin is secreted by cells of the juxtaglomerular apparatus (part of the distal convoluted tubule), which are sensitive to pressure. Renin acts on a blood protein, angiotensinogen, converting it to angiotensin I (10 amino acids). Angiotensin I is then converted by angiotensin-converting enzyme (ACE) in the lung capillaries to angiotensin II (8 amino acids), which stimulates the secretion of aldosterone by the adrenal cortex, which in turn acts on the renal tubules.
Aldosterone stimulates an increase in the reabsorption of sodium ions from the kidney tubules which causes an increase in the volume of water that is reabsorbed from the tubule. This increase in water reabsorption increases the volume of blood which ultimately raises the blood pressure.
### Plasma volume
Any significant rise or drop in plasma osmolality is detected by the hypothalamus, which communicates directly with the posterior pituitary gland. A rise in osmolality causes the gland to secrete antidiuretic hormone, resulting in water reabsorption by the kidney and an increase in urine concentration. The two factors work together to return the plasma osmolality to its normal levels.
## Hormone secretion
The kidneys secrete a variety of hormones, including erythropoietin, urodilatin, renin and vitamin D.
# Embryology
The mammalian kidney develops from intermediate mesoderm. Kidney development, also called nephrogenesis, proceeds through a series of three successive phases, each marked by the development of a more advanced pair of kidneys: the pronephros, mesonephros, and metanephros. (The plural forms of these terms end in -oi.)
## Pronephros
During approximately day 22 of human gestation, the paired pronephroi appear towards the cranial end of the intermediate mesoderm. In this region, epithelial cells arrange themselves in a series of tubules called nephrotomes and join laterally with the pronephric duct, which does not reach the outside of the embryo. Thus the pronephros is considered nonfunctional in mammals because it cannot excrete waste from the embryo.
## Mesonephros
Each pronephric duct grows towards the tail of the embryo, and in doing so induces intermediate mesoderm in the thoracolumbar area to become epithelial tubules called mesonephric tubules. Each mesonephric tubule receives a blood supply from a branch of the aorta, ending in a capillary tuft analogous to the glomerulus of the definitive nephron. The mesonephric tubule forms a capsule around the capillary tuft, allowing for filtration of blood. This filtrate flows through the mesonephric tubule and is drained into the continuation of the pronephric duct, now called the mesonephric duct or Wolffian duct. The nephrotomes of the pronephros degenerate while the mesonephric duct extends towards the most caudal end of the embryo, ultimately attaching to the cloaca. The mammalian mesonephros is similar to the kidneys of aquatic amphibians and fishes.
## Metanephros
During the fifth week of gestation, the mesonephric duct develops an outpouching, the ureteric bud, near its attachment to the cloaca. This bud, also called the metanephrogenic diverticulum, grows posteriorly and towards the head of the embryo. The elongated stalk of the ureteric bud, the metanephric duct, later forms the ureter. As the cranial end of the bud extends into the intermediate mesoderm, it undergoes a series of branchings to form the collecting duct system of the kidney. It also forms the major and minor calyces and the renal pelvis.
The portion of undifferentiated intermediate mesoderm in contact with the tips of the branching ureteric bud is known as the metanephrogenic blastema. Signals released from the ureteric bud induce the differentiation of the metanephrogenic blastema into the renal tubules. As the renal tubules grow, they come into contact and join with connecting tubules of the collecting duct system, forming a continuous passage for flow from the renal tubule to the collecting duct. Simultaneously, precursors of vascular endothelial cells begin to take their position at the tips of the renal tubules. These cells differentiate into the cells of the definitive glomerulus.
# Terms
- renal capsule: The membranous covering of the kidney.
- cortex: The outer layer over the internal medulla. It contains blood vessels, glomeruli (which are the kidneys' "filters") and urine tubes and is supported by a fibrous matrix.
- hilus: The opening in the middle of the concave medial border for nerves and blood vessels to pass into the renal sinus.
- renal column: The structures which support the cortex. They consist of lines of blood vessels and urinary tubes and a fibrous material.
- renal sinus: The cavity which houses the renal pyramids.
- calyces: The recesses in the internal medulla which hold the pyramids. They are used to subdivide the sections of the kidney. (singular - calyx)
- papillae: The small conical projections along the wall of the renal sinus. They have openings through which urine passes into the calyces. (singular - papilla)
- renal pyramids: The conical segments within the internal medulla. They contain the secreting apparatus and tubules and are also called malpighian pyramids.
- renal artery: Two renal arteries come from the aorta, each connecting to a kidney. The artery divides into five branches, each of which leads to a ball of capillaries. The arteries supply (unfiltered) blood to the kidneys. The left kidney receives about 60% of the renal blood flow.
- renal vein: The filtered blood returns to circulation through the renal veins which join into the inferior vena cava.
- renal pelvis: Basically just a funnel, the renal pelvis accepts the urine and channels it out of the hilus into the ureter.
- ureter: A narrow tube, about 40 cm long and 4 mm in diameter, passing from the renal pelvis out of the hilus and down to the bladder. The ureter carries urine from the kidneys to the bladder by means of peristalsis.
- renal lobe: Each pyramid together with the associated overlying cortex forms a renal lobe
# Diseases and disorders
## Congenital
- Congenital hydronephrosis
- Congenital obstruction of urinary tract
- Duplicated ureter
- Horseshoe kidney
- Polycystic kidney disease
- Renal dysplasia
- Unilateral small kidney
- Multicystic dysplastic kidney
## Acquired
- Diabetic nephropathy
- Glomerulonephritis
- Hydronephrosis is the enlargement of one or both of the kidneys caused by obstruction of the flow of urine.
- Interstitial nephritis
- Kidney stones are a relatively common and particularly painful disorder.
- Kidney tumors
- Wilms tumor
- Renal cell carcinoma
- Lupus nephritis
- Minimal change disease
- In nephrotic syndrome, the glomerulus has been damaged so that a large amount of protein in the blood enters the urine. Other frequent features of the nephrotic syndrome include swelling, low serum albumin, and high cholesterol.
- Pyelonephritis is infection of the kidneys and frequently arises as a complication of a urinary tract infection.
- Renal failure
- Acute renal failure
- Chronic renal failure
# The failing kidney
Generally, humans can live normally with just one kidney, as one has more functioning renal tissue than is needed to survive, possibly due to the nature of the prehistoric human diet. Only when the amount of functioning kidney tissue is greatly diminished will chronic renal failure develop. If the glomerular filtration rate (a measure of renal function) has fallen very low (end-stage renal failure), or if the renal dysfunction leads to severe symptoms, then renal replacement therapy is indicated, either dialysis or renal transplantation.
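As an illustration of how the glomerular filtration rate mentioned above is approximated at the bedside, one widely used estimate of creatinine clearance is the Cockcroft-Gault formula (given here as a general clinical approximation, not something specific to this article):
$$\mathrm{CrCl\ (mL/min)} \approx \frac{(140 - \text{age}) \times \text{weight (kg)}}{72 \times \text{serum creatinine (mg/dL)}} \quad (\times\ 0.85\ \text{for women})$$
For example, a 60-year-old, 72 kg man with a serum creatinine of 1.0 mg/dL has an estimated clearance of (80 × 72) / (72 × 1.0) = 80 mL/min, whereas end-stage renal failure, at which dialysis or transplantation is usually considered, corresponds to a rate below roughly 15 mL/min.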
# Medical terminology
- Medical terms related to the kidneys involve the prefixes renal- and nephro-.
- Surgical removal of the kidney is a nephrectomy, while a radical nephrectomy is removal of the kidney, its surrounding tissue, lymph nodes, and potentially the adrenal gland. A radical nephrectomy is performed for the removal of cancers.
# Histology
Human cell types found in the kidney include:
- Kidney glomerulus parietal cell
- Kidney glomerulus podocyte
- Kidney proximal tubule brush border cell
- Loop of Henle thin segment cell
- Thick ascending limb cell
- Kidney distal tubule cell
- Kidney collecting duct cell
- Cortical collecting duct cell
- Medullary collecting duct cell
- Interstitial kidney cells, which do not participate in the filtration process.
# World Kidney Day
World Kidney Day is observed on the second Thursday of March every year.
It was held for the first time in 2006, to increase awareness of kidney disease and educate persons at risk regarding the importance of prevention and early detection.
It is a joint initiative of the International Society of Nephrology (ISN) and the International Federation of Kidney Foundations (IFKF).
The next World Kidney Day will be held on 13 March 2008. In 2007, it was held on 8th March.
# Histopathological Findings in Kidney Diseases
Template:Infobox Anatomy
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]
The kidneys are complex organs that have numerous of biological roles. Their primary role is to maintain the homeostatic balance of bodily fluids. They primarily do this by filtering and secreting metabolites (such as urea) and minerals from the blood and excreting them, along with water, as urine. Because the kidneys are poised to sense plasma concentrations of compounds such as sodium, potassium, hydrogen ion, oxygen, and glucose, they are important regulators of blood pressure, glucose metabolism, and erythropoeisis.
The medical field that studies the kidneys and diseases of the kidney is called nephrology[1]. The prefix nephro- meaning kidney is from the Ancient Greek word nephros (νεφρός); the adjective renal meaning related to the kidney is from Latin rēnēs, meaning kidneys.
In humans, the kidneys are located in the posterior part of the abdomen. There is one on each side of the spine; the right kidney sits just below the liver, the left below the diaphragm and adjacent to the spleen. Above each kidney is an adrenal gland (also called the suprarenal gland). The asymmetry within the abdominal cavity caused by the liver results in the right kidney being slightly lower than the left one while the left kidney is located slightly more medial.
The kidneys are retroperitoneal. They are approximately at the vertebral level T12 to L3. The upper parts of the kidneys are partially protected by the eleventh and twelfth ribs, and each whole kidney is surrounded by two layers of fat (the perirenal and pararenal fat) which help to cushion it. Congenital absence of one or both kidneys, known as unilateral or bilateral renal agenesis, can occur.
# Anatomy
In a normal human adult, each kidney is about 10 cm long, 5.5 cm in width and about 3 cm thick, weighing 150 grams.[2] Together, kidneys weigh about 0.5% of a person's total body weight. The kidneys are "bean-shaped" organs, and have a concave side facing inwards (medially). On this medial aspect of each kidney is an opening, called the hilum, which admits the renal artery, the renal vein, nerves, and the ureter.
The outer portion of the kidney is called the renal cortex, which sits directly beneath the kidney's loose connective tissue/fibrous capsule. Deep to the cortex lies the renal medulla, which is divided into 10-20 renal pyramids in humans. Each pyramid together with the associated overlying cortex forms a renal lobe. The tip of each pyramid (called a papilla) empties into a calyx, and the calices empty into the renal pelvis. The pelvis transmits urine to the urinary bladder via the ureter. People are born with two kidneys but are able to live with only one.
The poles are the highest and lowest points of the kidney. Since the kidneys are located on different heights, the upper pole of the right kidney is at the same level as the hilum of the left kidney. This also happens to be at the same level as the transpyloric plane.[3]
## Blood supply
Each kidney receives its blood supply from the renal artery, two of which branch from the abdominal aorta. Upon entering the hilum of the kidney, the renal artery divides into smaller interlobar arteries situated between the renal papillae. At the outer medulla, the interlobar arteries branch into arcuate arteries, which course along the border between the renal medulla and cortex, giving off still smaller branches, the cortical radial arteries (sometimes called interlobular arteries). Branching off these cortical arteries are the afferent arterioles supplying the glomerular capillaries, which drain into efferent arterioles. Efferent arterioles divide into peritubular capillaries that provide an extensive blood supply to the cortex. Blood from these capillaries collects in renal venules and leaves the kidney via the renal vein. Efferent arterioles of glomeruli closest to the medulla (those that belong to juxtamedullary nephrons) send branches into the medulla, forming the vasa recta. Blood supply is intimately linked to blood pressure.
## Innervation
The kidney is innervated by the renal and ureteric nerve, which arises from the renal plexus. [4] It is sympathetic, parasympathetic and visceral afferent.[4] The renal plexus, in turn, is innervated by thoracic splanchnic nerves, especially the caudal ones.[4]
## Nephron
The basic functional unit of the kidney is the nephron, of which there are more than a million within the cortex and medulla of each normal adult human kidney. Nephrons regulate water and solute within the cortex and medulla of each normal adult human kidney. Nephrons regulate water and soluble matter (especially electrolytes) in the body by first filtering the blood under pressure, and then reabsorbing some necessary fluid and molecules back into the blood while secreting other, unneeded molecules. Reabsorption and secretion are accomplished with both cotransport and countertransport mechanisms established in the nephrons and associated collecting ducts.
## Collecting duct system
The fluid flows from the nephron into the collecting duct system. This segment of the nephron is crucial to the process of water conservation by the organism. In the presence of antidiuretic hormone (ADH; also called vasopressin), these ducts become permeable to water and facilitate its reabsorption, thus concentrating the urine and reducing its volume. Conversely, when the organism must eliminate excess water, such as after excess fluid drinking, the production of ADH is decreased and the collecting tubule becomes less permeable to water, rendering urine dilute and abundant. Failure of the organism to decrease ADH production appropriately, a condition known as syndrome of inappropriate ADH (SIADH), may lead to water retention and dangerous dilution of body fluids, which in turn may cause severe neurological damage. Failure to produce ADH (or inability of the collecting ducts to respond to it) may cause excessive urination, called diabetes insipidus (DI).
A second major function of the collecting duct system is the maintenance of acid-base homeostasis.
After being processed along the collecting tubules and ducts, the fluid, now called urine, is drained into the bladder via the ureter, to be finally excluded from the organism.
# Functions
## Excretion of waste products
The kidneys excrete a variety of waste products produced by metabolism, including the nitrogenous wastes: urea (from protein catabolism) and uric acid (from nucleic acid metabolism) and water.
## Homeostasis
The kidney is one of the major organs involved in whole-body homeostasis. Among its homeostatic functions are acid-base balance, regulation of electrolyte concentrations, control of blood volume, and regulation of blood pressure. The kidneys accomplish these homeostatic functions independently and through coordination with other organs, particularly those of the endocrine system. The kidney communicates with these organs through hormones secreted into the bloodstream.
### Acid-base balance
The kidneys regulate the pH, mineral ion concentration, and water composition of the blood, in part by eliminating excess hydrogen (H+) ions.
By exchanging hydronium and hydroxyl ions, the kidney maintains the blood plasma at a slightly alkaline pH of 7.4. Urine, on the other hand, may be acidic (around pH 5) or alkaline (around pH 8).
The pH is maintained through four main protein transporters: NHE3 (a sodium-hydrogen exchanger), V-type H-ATPase (an isoform of the hydrogen ATPase), NBC1 (a sodium-bicarbonate cotransporter) and AE1 (an anion exchanger which exchanges chloride for bicarbonate). Due to the polar alignment of cells in the renal epithelia, NHE3 and the H-ATPase are exposed to the lumen (which is essentially outside the body) on the apical side of the cells and are responsible for excreting hydrogen ions (or protons). Conversely, NBC1 and AE1 sit on the basolateral side of the cells and allow bicarbonate ions to move back into the extracellular fluid, from which they are returned to the blood plasma.
### Blood pressure
Sodium ions are controlled in a homeostatic process involving aldosterone which increases sodium ion reabsorption in the distal convoluted tubules.
When blood pressure becomes low, a proteolytic enzyme called renin is secreted by cells of the juxtaglomerular apparatus (located where the distal convoluted tubule contacts the afferent arteriole) which are sensitive to pressure. Renin acts on a blood protein, angiotensinogen, converting it to angiotensin I (10 amino acids). Angiotensin I is then converted by angiotensin-converting enzyme (ACE) in the lung capillaries to angiotensin II (8 amino acids), which stimulates the secretion of aldosterone by the adrenal cortex, which then affects the renal tubules.
Aldosterone stimulates an increase in the reabsorption of sodium ions from the kidney tubules which causes an increase in the volume of water that is reabsorbed from the tubule. This increase in water reabsorption increases the volume of blood which ultimately raises the blood pressure.
### Plasma volume
Any significant rise or drop in plasma osmolality is detected by the hypothalamus, which communicates directly with the posterior pituitary gland. A rise in osmolality causes the gland to secrete antidiuretic hormone, resulting in water reabsorption by the kidney and an increase in urine concentration. These responses work together to return the plasma osmolality to its normal level.
## Hormone secretion
The kidneys secrete a variety of hormones, including erythropoietin, urodilatin, renin, and the active form of vitamin D (calcitriol).
# Embryology
The mammalian kidney develops from intermediate mesoderm. Kidney development, also called nephrogenesis, proceeds through a series of three successive phases, each marked by the development of a more advanced pair of kidneys: the pronephros, mesonephros, and metanephros.[5] (The plural forms of these terms end in -oi.)
## Pronephros
During approximately day 22 of human gestation, the paired pronephroi appear towards the cranial end of the intermediate mesoderm. In this region, epithelial cells arrange themselves in a series of tubules called nephrotomes and join laterally with the pronephric duct, which does not reach the outside of the embryo. Thus the pronephros is considered nonfunctional in mammals because it cannot excrete waste from the embryo.
## Mesonephros
Each pronephric duct grows towards the tail of the embryo, and in doing so induces intermediate mesoderm in the thoracolumbar area to become epithelial tubules called mesonephric tubules. Each mesonephric tubule receives a blood supply from a branch of the aorta, ending in a capillary tuft analogous to the glomerulus of the definitive nephron. The mesonephric tubule forms a capsule around the capillary tuft, allowing for filtration of blood. This filtrate flows through the mesonephric tubule and is drained into the continuation of the pronephric duct, now called the mesonephric duct or Wolffian duct. The nephrotomes of the pronephros degenerate while the mesonephric duct extends towards the most caudal end of the embryo, ultimately attaching to the cloaca. The mammalian mesonephros is similar to the kidneys of aquatic amphibians and fishes.
## Metanephros
During the fifth week of gestation, the mesonephric duct develops an outpouching, the ureteric bud, near its attachment to the cloaca. This bud, also called the metanephrogenic diverticulum, grows posteriorly and towards the head of the embryo. The elongated stalk of the ureteric bud, the metanephric duct, later forms the ureter. As the cranial end of the bud extends into the intermediate mesoderm, it undergoes a series of branchings to form the collecting duct system of the kidney. It also forms the major and minor calyces and the renal pelvis.
The portion of undifferentiated intermediate mesoderm in contact with the tips of the branching ureteric bud is known as the metanephrogenic blastema. Signals released from the ureteric bud induce the differentiation of the metanephrogenic blastema into the renal tubules. As the renal tubules grow, they come into contact and join with connecting tubules of the collecting duct system, forming a continuous passage for flow from the renal tubule to the collecting duct. Simultaneously, precursors of vascular endothelial cells begin to take their position at the tips of the renal tubules. These cells differentiate into the cells of the definitive glomerulus.
# Terms
- renal capsule: The membranous covering of the kidney.
- cortex: The outer layer over the internal medulla. It contains blood vessels, glomeruli (which are the kidneys' "filters") and urine tubes and is supported by a fibrous matrix.
- hilus: The opening in the middle of the concave medial border for nerves and blood vessels to pass into the renal sinus.
- renal column: The structures which support the cortex. They consist of lines of blood vessels and urinary tubes and a fibrous material.
- renal sinus: The cavity which houses the renal pyramids.
- calyces: The recesses in the internal medulla which hold the pyramids. They are used to subdivide the sections of the kidney. (singular - calyx)
- papillae: The small conical projections along the wall of the renal sinus. They have openings through which urine passes into the calyces. (singular - papilla)
- renal pyramids: The conical segments within the internal medulla. They contain the secreting apparatus and tubules and are also called malpighian pyramids.
- renal artery: Two renal arteries come from the aorta, each connecting to a kidney. Within the kidney, the artery divides into about five segmental branches, which branch further before reaching the glomerular capillaries. The arteries supply (unfiltered) blood to the kidneys. The left kidney receives about 60% of the renal bloodflow.
- renal vein: The filtered blood returns to circulation through the renal veins which join into the inferior vena cava.
- renal pelvis: Basically just a funnel, the renal pelvis accepts the urine and channels it out of the hilus into the ureter.
- ureter: A narrow tube, about 40 cm long and 4 mm in diameter, that passes from the renal pelvis out of the hilus and down to the bladder. The ureter carries urine from the kidneys to the bladder by means of peristalsis.
- renal lobe: Each pyramid together with the associated overlying cortex forms a renal lobe.
# Diseases and disorders
## Congenital
- Congenital hydronephrosis
- Congenital obstruction of urinary tract
- Duplicated ureter
- Horseshoe kidney
- Polycystic kidney disease
- Renal dysplasia
- Unilateral small kidney
- Multicystic dysplastic kidney
## Acquired
- Diabetic nephropathy
- Glomerulonephritis
- Hydronephrosis is the enlargement of one or both of the kidneys caused by obstruction of the flow of urine.
- Interstitial nephritis
- Kidney stones are a relatively common and particularly painful disorder.
- Kidney tumors
  - Wilms tumor
  - Renal cell carcinoma
- Lupus nephritis
- Minimal change disease
- In nephrotic syndrome, the glomerulus has been damaged so that a large amount of protein in the blood enters the urine. Other frequent features of the nephrotic syndrome include swelling, low serum albumin, and high cholesterol.
- Pyelonephritis is infection of the kidneys and frequently arises as a complication of a urinary tract infection.
- Renal failure
  - Acute renal failure
  - Chronic renal failure
# The failing kidney
Generally, humans can live normally with just one kidney, as one has more functioning renal tissue than is needed to survive, possibly due to the nature of the prehistoric human diet. Only when the amount of functioning kidney tissue is greatly diminished will chronic renal failure develop. If the glomerular filtration rate (a measure of renal function) has fallen very low (end-stage renal failure), or if the renal dysfunction leads to severe symptoms, then renal replacement therapy is indicated, either dialysis or renal transplantation.
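For readers unfamiliar with the glomerular filtration rate mentioned above: in practice GFR is usually estimated rather than measured directly, most often from serum creatinine. The sketch below uses the Cockcroft-Gault estimate of creatinine clearance as a rough GFR proxy; this formula is general clinical background rather than something stated in this article, and the function name and example values are illustrative only.

```python
# Illustrative sketch (not from this article): the Cockcroft-Gault estimate of
# creatinine clearance (mL/min), often used as a rough proxy for GFR.
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, is_female):
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if is_female else crcl

# Example: a 60-year-old, 70 kg man with a serum creatinine of 1.0 mg/dL.
print(round(cockcroft_gault(60, 70, 1.0, is_female=False)))  # ~78 mL/min
```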
# Medical terminology
- Medical terms related to the kidneys involve the prefixes renal- and nephro-.
- Surgical removal of the kidney is a nephrectomy, while a radical nephrectomy is removal of the kidney, its surrounding tissue, lymph nodes, and potentially the adrenal gland. A radical nephrectomy is typically performed to remove kidney cancers.
# Histology
Human cell types found in the kidney include:
- Kidney glomerulus parietal cell
- Kidney glomerulus podocyte
- Kidney proximal tubule brush border cell
- Loop of Henle thin segment cell
- Thick ascending limb cell
- Kidney distal tubule cell
- Kidney collecting duct cell
  - Cortical collecting duct cell
  - Medullary collecting duct cell
- Interstitial kidney cells, which do not participate in the filtration process.[6]
# World Kidney Day
World Kidney Day is observed on the second Thursday of March every year.[7] It was held for the first time in 2006, to increase awareness of kidney disease and to educate people at risk about the importance of prevention and early detection.[8] It is a joint initiative of the International Society of Nephrology (ISN) and the International Federation of Kidney Foundations (IFKF). The next World Kidney Day will be held on 13 March 2008; in 2007, it was held on 8 March.
# Histopathological Findings in Kidney Diseases
Cousin
A cousin in English kinship terminology is a relative with whom one shares a common grandparent or more distant ancestor, and who is not in one's own line of descent. The term cousin is rarely used where there are other specific terms to describe relationships.
A system of degrees and removes is used to describe the relationship between the two cousins and the ancestor they have in common. The degree (first, second, third cousin, etc.) indicates the minimum number of generations between either cousin and the nearest common ancestor; the remove (once removed, twice removed, etc.) indicates the number of generations, if any, separating the two cousins from each other.
For example, the child of one's aunt or uncle is one's first cousin, because there is one generation (unshared parents) between the cousins and their shared grandparents. The child of one's first cousin is one's first cousin once removed because the child belongs to the next generation following one's own.
The system can handle kinships going back many generations. In 2004, genealogists discovered that U.S. Presidential candidates George W. Bush and John Kerry shared a common ancestral couple in the 1500s. It was reported that the two men are sixteenth cousins, three times removed. However, the two are in fact ninth cousins, two times removed. Also, in 2007, it was revealed that U.S. vice president Dick Cheney and senator Barack Obama are eighth cousins.
Non-genealogical usage often eliminates the degrees and removes, and refers to people with common ancestors merely as cousins or distant cousins.
# Family tree
This family tree diagram shows the relationship of each person to the orange person, with cousins colored in green.
# Cousin chart, or table of consanguinity
A cousin chart, or table of consanguinity, is helpful in identifying the degree of cousin relationship between two individuals using their most recent common ancestor as the reference point. Cousinship between two individuals can be specifically described in degrees and removes by determining how close, generationally, the common ancestor is to each individual.
Additional modifying words are used to clarify the exact degree of relatedness between the two people. Ordinal numbers are used to specify the number of generations between individuals and a common ancestor, and further clarification of exact cousinship is made by specifying the difference in generational level between the two cousins, if any, by using degrees of remove. For example, "first cousins once removed" describes two individuals with one cousin's grandparents as the common ancestor but who themselves are one generation different from each other.
Assuming a common ancestor, in principle any two individuals might share a cousin relationship (except as noted above) if the common ancestor and number of generations of descent to each individual from that common ancestor could be determined.
### Chart
The closest relationship prevails (nearest common ancestor) - note that cousinship is not calculated between individuals when one is descended from the other, for example, two individuals are not called cousins if they are any degree of grandparent, parent and child. Also cousinship is not calculated between individuals of any degree of aunt/uncle and nephew/niece relationship to each other.
## Chart relationships as sentences
Reminder: the closest relationship prevails - note that cousinship is not calculated between individuals when one is descended from the other, for example, two individuals are not called cousins if they are any degree of grandparent, parent and child. Also cousinship is not calculated between individuals of any degree of aunt/uncle and nephew/niece relationship to each other.
- If we share grandparents but have different parents we are first cousins
- If we share great grandparents but have different grandparents we are second cousins
- If we share great-great grandparents but have different great grandparents we are third cousins
- My first cousin's child and I are first cousins once removed (one generation difference between us)
- My first cousin's grandchild and I are first cousins twice removed (two generations difference between us)
Similarly
- My parent's first cousin and I are first cousins once removed (one generation difference between us)
- My grandparent's first cousin and I are first cousins twice removed (two generations difference between us)
- My second cousin's child and I are second cousins once removed (one generation difference between us)
- My second cousin's grandchild and I are second cousins twice removed (two generations difference between us)
Similarly
- My parent's second cousin and I are second cousins once removed (one generation difference between us)
- My grandparent's second cousin and I are second cousins twice removed (two generations difference between us)
Following this pattern, it can be determined that xth cousin y-times removed means either of the following:
- The xth cousin of your direct ancestor y generations previously (e.g. your great-grandparent's fifth cousin is your fifth cousin thrice removed); or
- Your xth cousin's direct descendant y generations away (e.g. your fifth cousin's great-grandchild is also your fifth cousin thrice removed)
# Determining cousin type
The name of the cousinship is not determined by oneself, but rather is always determined by the generational level of the individual most closely related to the ancestor in common. The following assumes there are no double cousins:
- To work out if two people are first, second, or third cousins, count back the generations to their common ancestor. For example, if the common ancestor is one's grandmother, that is two generations. If it is one's great-grandmother, that is three generations.
- Identify the one of the two descendants who is generationally closest to the common ancestor. For example, if one of the cousins is a great-great-grandchild (four generations) and the other is a grandchild, the grandchild is generationally closest to the common ancestor.
- If the generationally closest descendant of the common ancestor is a grandchild (two generations), then the cousins are first cousins; if three generations separate the common ancestor and the generationally closest cousin, then the two are second cousins, and so on.
- If the cousins are separated from the common ancestor by an equal number of generations, there is no "remove," for instance if both are grandchildren of the common ancestor. But if the number of generations between the common ancestor is different for each cousin, that difference is expressed by using a clarifier, "removed," with the number of removes. For example, if one person is a grandchild of (2 generations from) the common ancestor, and the other person is a great-great-grandchild of (4 generations from) that common ancestor, then the two are first-cousins-twice-removed.
An alternative method is as follows. You and your cousin each count the generations between yourselves and the common ancestor, counting neither the common ancestor nor yourselves. Thus, if the common ancestor is a grandparent, this number is one. Let this be X. If X is different for the two of you, let the difference between the two values be Y, and use the smaller X. You are X cousins, Y times removed. If Y is zero (because the number of generations between you and your ancestor is the same as for your cousin), then you are simply X cousins. X is stated as an ordinal, i.e. first, second, etc.
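As a quick illustration of this counting method, here is a minimal sketch in Python (the function name and inputs are illustrative, not part of the article). Each argument is the number of generations strictly between one person and the shared ancestor, so a grandparent counts as 1 and a great-grandparent as 2; the sketch assumes both people are at least grandchildren of that ancestor.

```python
# Minimal sketch of the counting method described above (names are illustrative).
def cousin_label(between_a, between_b):
    x = min(between_a, between_b)        # degree: first, second, ...
    y = abs(between_a - between_b)       # number of removes
    ordinals = {1: "first", 2: "second", 3: "third", 4: "fourth", 5: "fifth"}
    degree = ordinals.get(x, f"{x}th")
    if y == 0:
        return f"{degree} cousins"
    removed = {1: "once", 2: "twice"}.get(y, f"{y} times")
    return f"{degree} cousins {removed} removed"

print(cousin_label(1, 1))  # shared grandparents -> "first cousins"
print(cousin_label(1, 2))  # grandparent vs. great-grandparent -> "first cousins once removed"
print(cousin_label(2, 4))  # -> "second cousins twice removed"
```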
Note that the above system is symmetric; if person A is person B's second cousin once removed, then person B is person A's second cousin once removed as well, even though the relationship between them is not symmetric (since the two are not from the same generation).
Also note that much of this terminology is variable; for example, many dictionaries give "a child of one's first cousin" as a secondary sense for the term second cousin (the primary sense being "a child of a first cousin of one's parent").
A different and partly conflicting system that is sometimes used is asymmetric (i.e. it mirrors the fact that aunt/uncle and niece/nephew are asymmetric names). With this system to work out what cousinage X is to Y, identify the descendant or ancestor of X that is the same generation as Y (i.e. the same number of generations from the common ancestor), then count how many generational removes there are up or down the tree from those same-generation cousins. In other words go across the family tree first, then up or down. For example take X and Y who have common ancestors who are X's great grandparents and Y's grandparents. From Y's point of view, X is Y's first cousin's child, and thus is Y's first cousin once removed (downwards), but from X's point of view Y's child is X's second cousin, and Y therefore is X's second cousin once removed (upwards).
# Kinship chart
Recently, a kinship chart has been proposed that takes into account which relative belongs to the older generation. The terms in this chart are already in common usage.
### Chart
The closest relationship prevails (nearest common ancestor)
# Double cousins and half cousins
Generally, one's cousinship to another is determined by a connection through only one parent's biological family. But an individual's cousinship to another individual may be determined by a connection through both of one's parents. These cousins are biologically connected to both the maternal and paternal family trees and that cousinship is termed a double cousin. Another term used to describe this is cousins on both sides.
If a pair of siblings from one family each form a couple with a pair of siblings from another family, then the children of these two couples will be double first cousins to one another. The children of the couples would already automatically be first cousins due to the fact that they are children of one of their parent's siblings, but in this case the children of their mother's sibling, are also the children of their father's sibling, and thus they are double first cousins. Such cousins have double the consanguinity of ordinary cousins and are as related as half-siblings. Instead of the 12.5% consanguinity that simple first cousins share with each other, double first cousins share a 25% consanguinity with each other. Further, if identical twins form a coupling with a corresponding set of identical twins, the children of these two couples, though legally (double) first cousins to one another, would genetically be as closely related to each other as ordinary full siblings.
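The 12.5% and 25% figures above follow from the standard coefficient-of-relationship calculation, in which each shared ancestor contributes one half raised to the power of the total number of generational links between the two relatives through that ancestor. A minimal sketch (Python; the helper name and inputs are illustrative, not from the article):

```python
# Each shared ancestor contributes (1/2)^(g1 + g2), where g1 and g2 are the
# generations from each cousin up to that ancestor.
def coefficient_of_relationship(shared_ancestor_paths):
    """shared_ancestor_paths: list of (g1, g2) pairs, one per shared ancestor."""
    return sum(0.5 ** (g1 + g2) for g1, g2 in shared_ancestor_paths)

print(coefficient_of_relationship([(2, 2)] * 2))  # first cousins (two shared grandparents) -> 0.125
print(coefficient_of_relationship([(2, 2)] * 4))  # double first cousins (four shared grandparents) -> 0.25
print(coefficient_of_relationship([(2, 2)]))      # half first cousins (one shared grandparent) -> 0.0625
```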
Sometimes the children of these unions are called cousin-siblings, cousin-brothers, or cousin-sisters. Note that no incest has occurred to create these close kinships.
Half-siblings share only one parent. Extrapolating from that, if one of John's parents and one of Mary's parents are half-siblings, then John and Mary are half-cousins. The half-sibling of each of their respective parents would be their half-aunt or half-uncle, but these terms, although technically specific, are rarely used in practice. While it would not be unusual to hear of another's half-brother or half-sister described as such, in common usage one would rarely hear of another's half-cousin or half-aunt so described, and would instead hear them described simply as the other's cousin or aunt.
# Mathematical definitions
The family relationship between two individuals a and b can be calculated from Ga and Gb, the number of generations between each individual and their nearest common ancestor. Writing x = min(Ga, Gb) and y = |Ga − Gb|, the relationship is given by the following:
- If x = 0 and y = 0 then they are the same person.
- If x = 0 and y = 1 then they are parent and child.
- If x = 0 and y = 2 then they are grandparent and grandchild.
- If x = 0 and y > 2 then they are great ... great-grandparent and great ... great-grandchild, with y − 2 greats.
- If x = 1 and y = 0 then they are siblings (brothers or sisters).
- If x = 1 and y = 1 then they are uncle/aunt and nephew/niece.
- If x = 1 and y > 1 then they are great ... great-granduncle/great-grandaunt and great ... great-grandnephew/great-grandniece, with y-1 greats.
- If x > 1 and y = 0 then they are (x − 1)th cousins.
- If x > 1 and y > 0 then they are (x − 1)th cousins y times removed.
So two people sharing a pair of grandparents have x = 2 and y = 0 and are described as being first cousins.
If x > 0 and they only share one nearest common ancestor rather than two, then the word "half" is sometimes added at the beginning of the relationship.
The mathematical definition is more elegant if you always express consanguinity as the ordered pair of natural numbers (x, y) as defined above. In that case, the relationship one has with oneself is (0, 0), the relationship between parent and child is (0, 1), and the relationship between grandparent and grandchild is (0, 2). The relationship between siblings is (1, 0); and between aunt/uncle and nephew/niece is (1, 1). First cousins are (2, 0). The first number expresses how many generations back the two people's most recent common ancestor is, while the second number expresses the generation difference between the two people.
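The case analysis above maps directly onto code. The sketch below follows the same definitions, with x = min(Ga, Gb) and y = |Ga − Gb|; the function and helper names are illustrative rather than standard.

```python
# Sketch of the (x, y) case analysis above, using x = min(Ga, Gb) and y = |Ga - Gb|.
def ordinal(n):
    if 10 <= n % 100 <= 20:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

def relationship(g_a, g_b):
    """g_a, g_b: generations separating each person from their nearest common ancestor."""
    x, y = min(g_a, g_b), abs(g_a - g_b)
    if x == 0:  # one is a direct ancestor of the other (or they are the same person)
        return {0: "the same person", 1: "parent and child",
                2: "grandparent and grandchild"}.get(
            y, f"{'great-' * (y - 2)}grandparent and {'great-' * (y - 2)}grandchild")
    if x == 1:  # siblings, or an aunt/uncle line
        if y == 0:
            return "siblings"
        if y == 1:
            return "uncle/aunt and nephew/niece"
        return f"{'great-' * (y - 1)}granduncle/aunt and {'great-' * (y - 1)}grandnephew/niece"
    label = f"{ordinal(x - 1)} cousins"  # x > 1: cousins of degree x - 1
    return label if y == 0 else f"{label} {y} time(s) removed"

print(relationship(2, 2))  # -> "1st cousins"
print(relationship(2, 0))  # -> "grandparent and grandchild"
print(relationship(3, 5))  # -> "2nd cousins 2 time(s) removed"
```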
# Alternative canon law charts
Another visual chart used in determining the legal relationship between two people who share a common ancestor (blood) is based upon a diamond shape, and is usually referred to as a canon law relationship chart.
The chart is used by placing the "Common Progenitor" (the person from which both people are descended) in the top space within the diamond shaped chart, and then following each line down the outside edge of the chart. Upon reaching the final place along the opposing outside edge for each person, the relationship is then determined by following that line inward to the point where the lines intersect. The information contained in the common "intersection" defines the relationship.
For a simple example, in the illustration to the right, if two siblings wanted to use the chart to determine their relationship, their common parents would be placed in the top most position and each child assigned the space below and along the outside of the chart. Then, following the spaces inward, the two would meet in the "brother (sister)" diamond. If their children wanted to determine their relationship, they would follow the path established by their parents, but descend an additional step below along the outside of the chart (showing that they are grandchildren of the "Common Progenitor"); following their respective lines inward, they would come to rest in the space marked "1st cousin." In cases where one side descends the outside of the diamond further than the other side because of additional generations removed from the "Common Progenitor," following the lines inward shows both the cousin rank (1st cousin, 2nd cousin) plus the number of times (generations) "removed."
In the example provided at the right, generations one (child) through ten (8th Great Grandchild) from the Common Progenitor are provided; however, the format of the chart can easily be expanded to accommodate any number of generations needed to resolve the question of relationship.
Cowpox
# Overview
Cowpox (also known as Catpox) is a disease of the skin that is caused by a virus (Cowpox virus) that is related to the Vaccinia virus. The ailment manifests itself in the form of red blisters and is transmitted by touch from infected animals to humans. The virus that causes cowpox was used to perform the first successful vaccination against another disease, smallpox, which is caused by the related Variola virus. Therefore the word "vaccination" has the Latin root vacca meaning cow.
The first vaccination was performed in 1774 by a farmer, Benjamin Jesty, in Dorset, England. He inoculated his wife and two young sons and thus spared them probable death from smallpox, which was raging in the area in which they lived. Jesty had observed that people who had contracted and recovered from cowpox (mainly dairymaids), a disease similar to but much milder than smallpox, seemed to be immune not only to further cases of cowpox, but also to smallpox. By scratching the fluid from cowpox lesions into the skin of healthy individuals, he was able to immunize those people against smallpox.
However, credit was stolen by the politically astute Dr. Jenner who performed his first inoculation, having heard of Jesty's work, 22 years later. Even today he is falsely credited with the first vaccination.
The term vaccination was first used by Edward Jenner (an English physician) in 1796.
The virus is found in Europe and mainly in the UK. Human cases today are very rare and most often contracted from domestic cats. The virus is not commonly found in cows; the reservoir hosts for the virus are woodland rodents particularly voles. It is from these rodents that domestic cats contract the virus. Symptoms in cats include lesions on the face, neck, forelimbs, and paws, and less commonly upper respiratory tract infection. Symptoms of infection with cowpox virus in humans are localized, pustular lesions generally found on the hands and limited to the site of introduction. The incubation period is 9-10 days. The virus is prevalent in late summer and autumn.
# Historical use
Cowpox was the original vaccine of sorts for smallpox. After infection with the disease, the body (usually) gains the ability to recognize the similar smallpox virus from its antigens and so is able to fight the smallpox disease much more efficiently. The vaccinia virus now used for smallpox vaccination is sufficiently different from the cowpox virus found in the wild as to be considered a separate virus. Cowpox got its name from cow maids touching the udders of infected cows.
Cresol
Cresols are organic compounds which are methylphenols. They are a widely occurring natural and manufactured group of aromatic organic compounds which are categorized as phenols (sometimes called phenolics). Depending on the temperature, cresols can be solid or liquid because they have melting points not far from room temperature. Like other types of phenols, they are slowly oxidized by long exposure to air, and the impurities often give cresols a yellowish to brownish red tint. Cresols have an odor characteristic of other simple phenols, reminiscent to some of a "medicine" smell.
# Chemical structure
In its chemical structure, a cresol molecule has a methyl group substituted onto the benzene ring of a phenol molecule. There are three forms of cresols with formula (CH3)C6H4(OH) that are only slightly different in their chemical structure: ortho-cresol (o-cresol), meta-cresol (m-cresol), and para-cresol (p-cresol). These forms occur separately or as a mixture.
# Applications
Cresols are used to dissolve other chemicals, as disinfectants and deodorizers, and to make specific chemicals that kill insect pests.
Cresol solutions are used as household cleaners and disinfectants, perhaps most famously under the trade name Lysol. In the past, cresol solutions have been used as antiseptics in surgery, but they have been largely displaced in this role by less toxic compounds. Lysol was also advertised as a disinfecting vaginal douche in mid-twentieth century America.
Cresols are found in many foods and in wood and tobacco smoke, crude oil, coal tar, and in brown mixtures such as creosote, cresolene and cresylic acids, which are wood preservatives. Small organisms in soil and water produce cresols when they break down materials in the environment.
Xylenols are dimethylphenols, or they can be thought of as methylcresols.
# Health effects
Most exposures to cresols are at very low levels that are not harmful. When cresols are breathed, ingested, or applied to the skin at very high levels, they can be very harmful. Effects observed in people include irritation and burning of skin, eyes, mouth, and throat; abdominal pain and vomiting; heart damage; anemia; liver and kidney damage; facial paralysis; coma; and death.
Breathing high levels of cresols for a short time results in irritation of the nose and throat. Aside from these effects, very little is known about the effects of breathing cresols, for example, at lower levels over longer times.
Ingesting high levels results in kidney problems, mouth and throat burns, abdominal pain, vomiting, and effects on the blood and nervous system.
Skin contact with high levels of cresols can burn the skin and damage the kidneys, liver, blood, brain, and lungs.
Short-term and long-term studies with animals have shown similar effects from exposure to cresols. No human or animal studies have shown harmful effects from cresols on the ability to have children.
It is not known what the effects are from long-term ingestion or skin contact with low levels of cresols.
Cripto
Cripto is an EGF-CFC (epidermal growth factor-CFC) protein encoded by the Cryptic family 1 gene. Cryptic family protein 1B is a protein that in humans is encoded by the CFC1B gene. Cryptic family protein 1B acts as a receptor in the TGF-beta signaling pathway and has been associated with the translation of an extracellular protein for this pathway. The extracellular protein encoded by Cripto plays a crucial role in establishing the left-right asymmetry of the developing embryo, and mutations in it can cause congenital heart disease.
Cripto is a glycosylphosphatidylinositol-anchored co-receptor that binds Nodal and the activin type I receptor ActRIB (ALK4).
# Structure
Cripto is composed of an N-terminal signal peptide, two adjacent cysteine-rich motifs (the EGF-like and CFC domains), and a C-terminal hydrophobic region attached to the membrane by a GPI anchor, which makes it a potentially essential element in the signaling pathway directing vertebrate embryo development. NMR data confirm that the CFC domain has a C1-C4, C2-C6, C3-C5 disulfide pattern and show that the structures are rather flexible and globally extended, with three non-canonical anti-parallel strands.
# Function
In the Nodal signaling pathway of embryonic development, Cripto has been shown to have dual function as a co-receptor as well as ligand. Particularly in cell cultures, it has been shown to act as a signaling molecule with the capabilities of a growth factor, and in co-culture assays, it has displayed the property of a co-ligand to Nodal. Glycosylation is responsible for mediating this interface with Nodal. EGF-CFC proteins’ composition as a receptor complex is further solidified by the GPI linkage, making the cell membrane connection able to regulate growth factor signaling of Nodal.
# Expression during embryonic development
High concentrations of Cripto are found in both the trophoblast and inner cell mass, along the primitive streak as the second epithelial-mesenchymal transformation event occurs to form the mesoderm, and in the myocardium of the developing heart. Though no specific defect has been formally associated with mutations in Cripto, in vitro studies that disrupt gene function at various times during development have provided glimpses of possible malformations. For example, inactivation of Cripto during gastrulation disrupted the migration of newly formed mesenchymal mesoderm cells, resulting in the accumulation of cells around the primitive streak and eventual embryonic death. Other results of Cripto disruption include the lack of posterior structures and a block on the differentiation of cardiac myocytes, both of which lead to embryonic death.
Cripto’s functions have been hypothesized from these null mutation studies. It is now known that Cripto is similar to other morphogens originating from the primitive streak in that it is asymmetrically expressed, specifically in a proximal-distal gradient, explaining the failure of posterior structures to form in the absence of Cripto.
# Clinical significance
CFC1B has oncogene potential, driving tumor cell proliferation through autocrine or paracrine signaling. Furthermore, the cryptic protein is highly over-expressed in many human tumors, such as colorectal, gastric, breast, and pancreatic cancers. Cripto is one of the key regulators of embryonic stem cell differentiation into a cardiomyocyte versus a neuronal fate. Expression levels of Cripto are associated with resistance to EGFR inhibitors.
Crista
# Overview
Cristae (singular crista) are the internal compartments formed by the inner membrane of a mitochondrion. They are studded with proteins, including ATP synthase and a variety of cytochromes. By folding the inner membrane, cristae maximize the surface area available for the chemical reactions of cellular respiration (aerobic respiration, since the mitochondrion requires oxygen).
# Electron transport chain of the cristae
NADH is split into NAD+, H+ ions, and electrons by an enzyme. FADH2 is also split into H+ ions, electrons, and FAD. As these electrons travel through the electron transport chain in the inner membrane, energy is gradually released and used to pump the hydrogen ions released from NADH and FADH2 into the space between the inner membrane and the outer membrane (the intermembrane space), creating an electrochemical gradient. Chemiosmosis then occurs: ATP synthase harnesses the potential energy of this H+ concentration gradient to produce ATP from ADP and a phosphate group, while the H+ ions pass passively through ATP synthase back into the mitochondrial matrix, where they later help to re-form H2O.
The electron transport chain requires a constant supply of electrons in order to function properly and generate ATP. However, the electrons that have entered the electron transport chain would eventually pile up like cars traveling down a one-way dead-end street. Those electrons are finally accepted by oxygen (O2), which combines with hydrogen ions from the mitochondrial matrix, including those that have returned through ATP synthase, and with the electrons that have traveled through the electron transport chain. As a result, they form two molecules of water (H2O). By accepting the electrons, oxygen allows the electron transport chain to continue functioning.
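For illustration, the terminal step described above corresponds to the standard reduction half-reaction for oxygen (a textbook equation supplied here as a clarifying sketch; it is not spelled out in the original article):

```latex
% Oxygen acting as the final electron acceptor of the electron transport chain
\mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \longrightarrow 2\,\mathrm{H_2O}
```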
The electrons from each NADH molecule can form a total of 2.5 ATPs from ADP and phosphate groups through the electron transport chain, while each FADH2 molecule can produce a total of 1.5 ATPs. As a result, the 10 NADH molecules (from glycolysis and the Krebs cycle) and the 2 FADH2 molecules can form a total of about 28 ATPs through this electron transport chain during aerobic respiration. This means that, combined with the Krebs cycle and glycolysis, the efficiency of the electron transport chain is about 65%, as compared to only 3.5% efficiency for glycolysis alone.
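As a quick check of the arithmetic, using the per-molecule yields stated above (a simplified bookkeeping sketch; measured physiological yields vary):

```latex
\underbrace{10 \times 2.5}_{\text{from NADH}} \; + \; \underbrace{2 \times 1.5}_{\text{from FADH}_2} \; = \; 25 + 3 \; = \; 28 \ \text{ATP per glucose}
```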
# Usefulness
The cristae greatly increase the surface area on which the above-mentioned reactions can take place. If they were absent, the inner membrane would be reduced to a single spherical surface, and with less reaction surface available, the efficiency of these reactions would be reduced accordingly. Therefore, cristae are a necessity for a mitochondrion to function efficiently.
Crutch
Crutches are medical tools used when one's leg or legs are injured or unable to support weight. The term crutch can also refer to anything used by a person as a psychological or emotional prop, or to something used as an excuse not to engage in normal life activities.
# Medical crutches
## Types
There are several different types of medical crutches.
## Information on use
Several different gait patterns are possible, and the user chooses which one to use depending on the reason the crutches are needed. For example, a person with a leg injury generally performs a "swing-to" gait: he lifts the injured leg, places both crutches in front of himself, and then swings his uninjured leg to meet the crutches. Other gaits are used when both legs are equally affected by some disability, or when the injured leg is partially weight-bearing.
Crutch is also used as a verb to refer to the use of crutches to travel somewhere. For example: "I am going to crutch to the store," or "I will be crutching over to your place."
The word "crutch" can also refer, metaphorically, to an object of dependence that its bearer requires in order to function.
Example: Can sexuality be used as a social crutch?
# Materials
- Wooden
- Steel/ other metals
- Aluminium
- Carbon fiber
- Titanium
Cubane
Cubane (C8H8) is a synthetic hydrocarbon molecule that consists of eight carbon atoms arranged at the corners of a cube, with one hydrogen atom attached to each carbon atom. It is one of the Platonic hydrocarbons. Cubane is a solid crystalline substance. The cubane molecule was first synthesized in 1964 by Dr. Philip Eaton, a professor of chemistry at the University of Chicago. Before its synthesis, researchers believed that cubic carbon-based molecules could only exist in theory. It was believed that cubane would be impossible to synthesize because the unusually sharp 90-degree bonding angle of the carbon atoms would be too highly strained and hence unstable. Surprisingly, once formed, cubane is actually quite kinetically stable, owing to a lack of readily available decomposition paths.
Cubane and its derivative compounds have many important properties. The 90-degree bonding angle of the carbon atoms in cubane means that the bonds are highly strained. Therefore, cubane compounds store a great deal of energy in these bonds, which in principle may make them useful as high-density, high-energy fuels and explosives. Cubane also has the highest density of any hydrocarbon, further contributing to its ability to store large amounts of energy. Researchers are looking into using cubane and similarly synthesized cubic molecules in medicine and nanotechnology.
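To make the strain argument concrete, compare cubane's fixed C-C-C angle with the ideal tetrahedral angle of an sp3 carbon (a rough illustrative calculation, not taken from the original article):

```latex
% Angular deviation forced on each sp^3 carbon in cubane
109.5^{\circ} - 90^{\circ} = 19.5^{\circ} \quad \text{per C-C-C angle}
```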
# Synthesis
The original 1964 cubane organic synthesis is a classic and starts from 2-cyclopentenone (compound 1.1 in scheme 1):
Reaction with N-bromosuccinimide in tetrachloromethane places an allylic bromine atom in 1.2, and further bromination with bromine in pentane - methylene chloride gives the tribromide 1.3. Two equivalents of hydrogen bromide are then eliminated from this compound with diethylamine in diethyl ether to give bromocyclopentadienone 1.4.
In the second part (scheme 2), the spontaneous Diels-Alder dimerization of 2.1 to 2.2 is akin to the dimerization of cyclopentadiene to dicyclopentadiene. For the next steps to succeed, only the endo isomer should form, which it does because the bromine atoms, on their approach, take up positions as far away from each other and from the carbonyl group as possible. In this way the like-dipole interactions are minimized in the transition state for this reaction step. Both carbonyl groups are protected as acetals with ethylene glycol and p-toluenesulfonic acid in benzene, and then one of them is selectively deprotected with aqueous hydrochloric acid to give 2.3.
In the next step, endo isomer 2.3, with both alkene groups in close proximity, forms the cage-like isomer 2.4 in a photochemical [2+2] cycloaddition. The bromoketone group is converted to the ring-contracted carboxylic acid 2.5 in a Favorskii rearrangement with potassium hydroxide. Next, thermal decarboxylation takes place through the acid chloride (with thionyl chloride) and the tert-butyl perester 2.6 (with t-butyl hydroperoxide and pyridine) to give 2.7. The acetal is then removed once more in 2.8, another Favorskii rearrangement gives 2.9, and finally another decarboxylation gives, via 2.10, cubane 2.11.
# Inorganic cubanes and related derivatives
The cubane motif occurs outside of the area of organic chemistry. A prevalent non-organic cubane is the [Fe4-S4] cluster found pervasively in iron-sulfur proteins. Such species contain sulfur and Fe at alternating corners. Alternatively, such inorganic cubane clusters can often be viewed as interpenetrated S4 and Fe4 tetrahedra. Many organometallic compounds adopt cubane structures, examples being (CpFe)4(CO)4, (Cp*Ru)4Cl4, and (Ph3PAg)4I4.
Curare
This page is about the plant toxins. For the DC Comics character, see Curare.
Curare is not to be confused with Curara.
Curare is a common name for various dart poisons (arrow poisons) originating from South America. The three main types, or families of curare are:
- the tubocurare (also known as tube or bamboo curare, because of its packing into hollow bamboo tubes; main toxin is D-tubocurarine). It is a mono-quaternary alkaloid, an isoquinoline derivative.
- the calebas curare (also called "gourd curare" by older British classifications, being packed into hollow gourds; main toxins are alloferine and toxiferine)
- and the pot curare (packed in terra cotta pots; main toxins are protocurarine, protocurine, and protocuridine).
Of these three families, some formulas belonging to the calebas curare are the most toxic, relative to their LD50 values.
# History
In 1596 Sir Walter Raleigh mentioned the arrow poison in his book Discovery of the Large, Rich, and Beautiful Empire of Guiana (now Guyana). It is possible that the poison he described was not curare at all. The deadly effects of various Amazonian plant mixtures called curare were learned by early European explorers. In 1800, Alexander von Humboldt gave the first Western account of how the toxin was prepared from plants by Orinoco River natives.
During 1811-1812 Sir Benjamin Collins Brodie (1783-1862) experimented with curare. He was the first to show that curare does not kill the animal and that recovery is complete if the animal's respiration is maintained artificially. In 1825 Charles Waterton (1783-1865) (who gained fame by riding a captured alligator) described a classical experiment in which he kept a curarized she-ass alive by artificial ventilation with a bellows through a tracheostomy. Waterton is also credited with bringing curare to Europe. Robert Hermann Schomburgk, who was a trained botanist, identified the vine as one of the Strychnos species and gave it the now accepted name Strychnos toxifera.
George Harley (1829-1896) showed in 1850 that curare (wourali) was effective for the treatment of tetanus and strychnine poisoning. From 1887 the Burroughs Wellcome catalogue listed, under its 'Tabloids' brand name, tablets of curare at 1/12 grain (price 8 shillings) for use in preparing a solution for hypodermic injection. In 1914 Henry Hallett Dale (1875-1968) described the physiological actions of acetylcholine. Twenty-five years later he showed that acetylcholine is responsible for neuromuscular transmission, which can be blocked by curare.
The best-known and historically most important toxin (because of its medical applications) is d-tubocurarine. It was isolated from the crude drug (from a museum sample of curare) in 1935 by Harold King (1887-1956) of London, working in Sir Henry Dale's laboratory, and he also established its chemical structure. It was introduced into anesthesiology in the early 1940s as a muscle relaxant for surgery. Curares are active (i.e. toxic or muscle relaxing, depending on the intention of their use) only if given parenterally, that is, by injection or by direct wound contamination from a poisoned dart or arrow tip. Curare is harmless if taken orally, because curare compounds are too large and too highly charged to pass through the lining of the digestive tract and be absorbed into the blood. This is crucial, because the native tribes use curares mainly for hunting, so curare-poisoned prey must remain safe to eat. In medicine, curare has been superseded by a number of curare-like agents (such as pancuronium, an alkaloid-like substance with a steroidal skeleton in its molecule) that have a similar pharmacodynamic profile but fewer side effects.
Curare has also been used historically as a paralyzing poison by South American indigenous people. The prey is killed by asphyxiation, as the respiratory muscles become unable to contract, resulting in apnea.
# Pharmacological properties
Curare is an example of a non-depolarizing muscle relaxant (a competitive antagonist) that blocks nicotinic receptors, one of the two types of cholinergic (acetylcholine) receptors, on the postsynaptic membrane of the neuromuscular junction. Curare does not occupy the agonist position, but likely binds within the channel pore.
# Curare and anaesthesia
An isolated attempt to use curare during anesthesia dates back to 1912, by Arthur Lawen of Leipzig, but curare came to anesthesia via psychiatry (electroplexy). In 1939 Abram Elting Bennett used it to modify metrazol-induced convulsive therapy. Muscle relaxants are used in modern anesthesia for many reasons, such as providing optimal operating conditions and facilitating intubation of the trachea. Before muscle relaxants, anesthesiologists needed to use larger doses of the anesthetic agent, such as ether, chloroform or cyclopropane, to achieve these aims. Such deep anaesthesia risked killing patients who were elderly or had heart conditions.
The source of curare in the Amazon was first researched by Richard Evans Schultes in 1941. Since the 1930s, it had been used in hospitals as a muscle relaxant. He discovered that different types of curare called for as many as 15 ingredients, and in time he helped to identify more than 70 species that produced the drug.
On January 23, 1942, Dr. Harold Griffith and Dr. Enid Johnson gave a synthetic preparation of curare (Intracostin/Intocostrin) to a patient undergoing an appendectomy (to supplement conventional anesthesia). Curare (d-tubocurarine) is no longer used for anesthesia and surgery, as better drugs are now available. When used with halothane, d-tubocurarine can cause a profound fall in blood pressure in some patients, as both drugs are ganglion blockers. However, it is safer to use d-tubocurarine with ether.
In 1954, a sensational article was published by Beecher and Todd suggesting that the use of muscle relaxants (drugs similar to curare) increased death due to anesthesia nearly sixfold. This has since been completely disproved.
Modern anaesthetists have at their disposal a variety of muscle relaxants for use in anaesthesia. The ability to produce muscle relaxation independently of anaesthesia has permitted anaesthesiologists to adjust the two effects separately as needed, ensuring that their patients are safely unconscious and sufficiently relaxed to permit surgery. However, because muscle relaxants have no effect on consciousness, it is possible, through error or accident, that a patient may remain fully conscious and sensitive to pain during surgery, yet be unable to move and thus unable to alert attending staff to their state of awareness. This problem is now largely addressed by the use of BIS monitors.
# Plants from which primary components of curare can be extracted
- Strychnos toxifera
- Chondrodendron tomentosum
# Names
Curare is also known as Ampi, Woorari, Woorara, Woorali, Wourali, Wouralia, Ourare, Ourari, Urari, and Uirary.
d-Tubocurarine, the popular alkaloid of Curare used as a medicine, was available as Tubocurarin, Tubocurarinum, Delacurarine, Tubarine, Metubine, Jexin, HSDB 2152, Isoquinoline Alkaloid, Tubadil, Mecostrin, Intracostin and Intocostrin.
# External links
- Charles Waterton's book Wanderings in South America Free version
- Neuromuscular blocking drugs: discovery and development
Cyborg
A cyborg is a cybernetic organism (i.e. an organism that is a self-regulating integration of artificial and natural systems). The term was coined in 1960 when Manfred Clynes and Nathan Kline used it in an article about the advantages of self-regulating human-machine systems in outer space. D. S. Halacy's Cyborg: Evolution of the Superman in 1965 featured an introduction by Manfred Clynes, who wrote of a "new frontier" that was "not merely space, but more profoundly the relationship between 'inner space' to 'outer space' - a bridge...between mind and matter." The cyborg is often seen today merely as an organism that has enhanced abilities due to technology, but this perhaps oversimplifies the category of feedback.
Fictional cyborgs are portrayed as a synthesis of organic and synthetic parts, and frequently pose the question of difference between human and machine as one concerned with morality, free will, and empathy. Fictional cyborgs may be represented as visibly mechanical (e.g. the Borg in the Star Trek franchise); or as almost indistinguishable from humans (e.g. the Cylons from the re-imagining of Battlestar Galactica). These fictional portrayals often register our society's discomfort with its seemingly increasing reliance upon technology, particularly when used for war, and when used in ways that seem to threaten free will. They also often have abilities, physical or mental, far in advance of their human counterparts (military forms may have inbuilt weapons, amongst other things). Real cyborgs are more frequently people who use cybernetic technology to repair or overcome the physical and mental constraints of their bodies. While cyborgs are commonly thought of as mammals, they can be any kind of organism.
# Overview
According to some definitions of the term, the metaphysical and physical attachments humanity has with even the most basic technologies have already made humans cyborgs. In a typical example, a human fitted with a heart pacemaker or an insulin pump (if the person has diabetes) might be considered a cyborg, since these mechanical parts enhance the body's "natural" mechanisms through synthetic feedback mechanisms. Some theorists cite such modifications as contact lenses, hearing aids, or intraocular lenses as examples of fitting humans with technology to enhance their biological capabilities; however, these modifications are no more cybernetic than would be a pen, a wooden leg, or the spears used by chimps to hunt vertebrates. Cochlear implants that combine mechanical modification with any kind of feedback response are more accurately cyborg enhancements.
The prefix "cyber" is also used to address human-technology mixtures in the abstract. This includes artifacts that may not popularly be considered technology: pen and paper, for example, as well as speech and language. Augmented with these technologies, and connected in communication with people in other times and places, a person becomes capable of much more than they were before. This is like computers, which gain power by using Internet protocols to connect with other computers. Cybernetic technologies include highways, pipes, electrical wiring, buildings, electrical plants, libraries, and other infrastructure that we hardly notice, but which are critical parts of the cybernetics that we work within.
Bruce Sterling, in his Shaper/Mechanist universe, suggested the idea of an alternative kind of cyborg called a Lobster, which is made not by using internal implants but by using an external shell (e.g. a powered exoskeleton) (Bruce Sterling: Cicada Queen). Unlike human cyborgs that appear human externally while being synthetic internally, a Lobster looks inhuman externally but contains a human internally. The computer game Deus Ex: Invisible War prominently featured three clans of Omar, where "Omar" is a Russian translation of the word "Lobster" (since the clans are of Russian origin in the game). Sterling's distinction between cyborgs and Lobsters may have been a reaction to the Terminator films and their outwardly human, internally mechanical cyborgs dominating popular conception of the term. However, regardless of popular conception, Sterling's Lobsters are well within the technical definition of a cyborg, and so the term has no useful application outside of fiction.
# History
The concept of a man-machine mixture was widespread in science fiction before World War II. In 1908 Jean de la Hire introduced the Nyctalope (perhaps the first true superhero, and also the first literary cyborg) in the novel L'Homme Qui Peut Vivre Dans L'eau (The Man Who Can Live in Water). Edmond Hamilton presented space explorers with a mixture of organic and machine parts in his novel The Comet Doom in 1928. He later featured the talking, living brain of an old scientist, Simon Wright, floating around in a transparent case, in all the adventures of his famous hero, Captain Future. In the short story "No Woman Born" in 1944, C. L. Moore wrote of Deirdre, a dancer whose body was burned completely and whose brain was placed in a faceless but beautiful and supple mechanical body.
The term was created by Manfred E. Clynes and Nathan S. Kline in 1960 to refer to their conception of an enhanced human being who could survive in extraterrestrial environments.
Their concept was the outcome of thinking about the need for an intimate relationship between human and machine as the new frontier of space exploration was beginning to take place. A designer of physiological instrumentation and electronic data-processing systems, Clynes was the chief research scientist in the Dynamic Simulation Laboratory at Rockland State Hospital in New York.
A book titled Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable computer was published by Doubleday in 2001. Some of the ideas in the book were incorporated into the 35mm motion picture film Cyberman.
# Individual cyborgs
Generally, the term "cyborg" is used to refer to a man or woman with bionic, or robotic, implants.
Today, the C-LEG system is used to replace human legs that were amputated because of injury or illness. The use of sensors in the artificial leg aids in walking significantly. These are the first real steps towards the next generation of cyborgs.
Cochlear implants and magnetic implants, which provide people with a sense they would not otherwise have, can also be thought of as creating cyborgs.
In 2002, under the heading Project Cyborg, a British scientist, Kevin Warwick, had an array of 100 electrodes fired into his nervous system in order to link his nervous system to the internet. With this in place he successfully carried out a series of experiments, including extending his nervous system over the internet to control a robotic hand, a form of extended sensory input, and the first direct electronic communication between the nervous systems of two humans.
# Social cyborgs
More broadly, the full term "cybernetic organism" is used to describe larger networks of communication and control. For example, cities, networks of roads, networks of software, corporations, markets, governments, and the collection of these things together. A corporation can be considered as an artificial intelligence that makes use of replaceable human components to function. People at all ranks can be considered replaceable agents of their functionally intelligent government institutions, whether such a view is desirable or not.
# Cyborg proliferation in society
## Medicine
In medicine, there are two important and different types of cyborgs: these are the restorative and the enhanced. Restorative technologies “restore lost function, organs, and limbs” (Gray 1995). The key aspect of restorative cyborgization is the repair of broken or missing processes to revert to a healthy or average level of function. There is no enhancement to the original faculties and processes that were lost.
On the contrary, the enhanced cyborg “follows a principle, and it is the principle of optimal performance: maximising output (the information or modifications obtained) and minimising input (the energy expended in the process)” (Lyotard 1984). Thus, the enhanced cyborg intends to exceed normal processes or even gain new functions that were not originally present.
Although prostheses in general supplement lost or damaged body parts with the integration of a mechanical artifice, bionic implants in medicine allow model organs or body parts to mimic the original function more closely. Michael Chorost wrote a memoir of his experience with a cochlear implant, or bionic ear, titled "Rebuilt: How Becoming Part Computer Made Me More Human." Jesse Sullivan became one of the first people to operate a fully robotic limb through a nerve-muscle graft, giving him a complex range of motions beyond that of previous prosthetics. By 2004, a fully functioning artificial heart had been developed. The continued technological development of bionic and nanotechnologies begins to raise the question of enhancement, and of the future possibilities for cyborgs which surpass the original functionality of the biological model. The ethics and desirability of "enhancement prosthetics" have been debated; their proponents include the transhumanist movement, with its belief that new technologies can assist the human race in developing beyond its present, normative limitations such as ageing and disease, as well as other, more general incapacities, such as limitations on speed, strength, endurance, and intelligence. Opponents of the concept describe what they believe to be biases which propel the development and acceptance of such technologies; namely, a bias towards functionality and efficiency that may compel assent to a view of human people which de-emphasises as defining characteristics actual manifestations of humanity and personhood, in favour of definition in terms of upgrades, versions, and utility.
One of the more common and accepted forms of temporary modification occurs as a result of prenatal diagnosis technologies. Modern parents willingly use testing methods such as ultrasounds and amniocentesis to determine the sex or health of the fetus. The discovery of birth defects or other congenital problems by these procedures may lead to neonatal treatment in the form of open fetal surgery or the less invasive fetal intervention.
A brain-computer interface, or BCI, provides a direct path of communication from the brain to an external device, effectively creating a cyborg. Research on invasive BCIs, which utilize electrodes implanted directly into the grey matter of the brain, has focused on restoring damaged eyesight in the blind and providing functionality to paralyzed people, most notably those with severe cases such as locked-in syndrome.
## Military
The "cyborg soldier" often refers to a soldier whose weapon and survival systems are integrated into the self, creating a human-machine interface. A notable example is the Pilot's Associate, first developed in 1985, which would use Artificial Intelligence to assist a combat pilot. The push for further integration between pilot and aircraft would include the Pilot Associate's ability to "initiate actions of its own when it deems it necessary, including firing weapons and even taking over the aircraft from the pilot. (Gray, Cyborg Handbook)
Military organizations' research has recently focused on the utilization of cyborg animals in inter-species relationships for the purposes of a supposed tactical advantage. DARPA has announced its interest in developing "cyborg insects" to transmit data from sensors implanted into the insect during the pupal stage. The insect's motion would be controlled from a MEMS (Micro-Electro-Mechanical System), and it would conceivably surveil an environment and detect explosives or gas. Similarly, DARPA has developed a neural implant to remotely control the movement of sharks. The shark's unique senses would be exploited to provide data feedback in relation to enemy ship movement and underwater explosives.
Other proposals have integrated the mechanical into the intuitive abilities of the individual soldier. Researchers at the University of California, Berkeley have set out to "create an exoskeleton that combines a human control system with robotic muscle." The device is distinctly cyborgian in that it is self-powered and requires no conscious manipulation by the pilot soldier. The exoskeleton responds to the pilot, through constant computer calculations, to distribute and lessen the weight exerted on the pilot, hypothetically allowing soldiers to haul large amounts of medical supplies and carry injured soldiers to safety.
## Marine Cyborgs
The term "cyborg" applies not only to humans but to animals as well. Some of the best examples of such animal cyborgs come from the ocean, but such research is relatively new. Technologies used range from simple radio transmitters attached for tracking purposes to extremely complex surgically implanted electrodes used to record and manipulate behavior. One of the more fictionalized representations of a marine cyborg is Jones, a cyborg dolphin from William Gibson's Johnny Mnemonic. Jones is one of the more extreme examples, sporting a purely mechanical head piece, while most real-world examples go unnoticed. Most "enhancements" added to marine organisms by humans are small or implanted directly into the skin, and are designed so as not to disrupt their natural behavior patterns. Currently, DARPA, the Defense Advanced Research Projects Agency, is experimenting with surgically implanted electrodes in shark brains to learn more about their behavior, in hopes of being able to control some aspects of it. Shark behavior is still a largely unstudied subject in the biological sciences, and the use of such electrodes might provide biologists with a vast amount of information in a short period of time. With data collected from this experimentation, DARPA engineers hope to decode the signals that the sharks are receiving in order to remotely manipulate such behaviors in the future. The shark's natural ability to sense weak magnetic and electrical fields is of particular interest to the military, which hopes to use this to its advantage in future campaigns, to see and feel everything that a shark does as it glides through the ocean.
## In Sports
The cyborgization of sports has come to the forefront of the national consciousness in recent years. Through the media, America has been exposed to the subject both with the BALCO scandal and the accusations of blood doping at the Tour de France levied against Lance Armstrong and Floyd Landis. But there is more to the subject: steroids, blood doping, prosthetics, body modification, and perhaps in the future genetic modification are all topics that should be included within cyborgs in sports.
The most commonly used steroids in sports are anabolic steroids. Anabolic steroids are synthetically created to function like male hormones. Athletes use them to enhance their strength and performance beyond their natural means. Anabolic steroids increase the amount of testosterone in the body, which promotes muscle and bone growth. Anabolic steroids also allow an athlete to work out for longer periods of time than they naturally could.
Blood doping usually refers to three methods of adding red blood cells to the blood stream. The first is the homologous transfusion, in which red blood cells from another person of the same blood type as the athlete are concentrated and frozen for a later transfusion shortly before the athlete starts an event. The second is the autologous transfusion, in which an athlete takes red blood cells out of their own body before a competition and transfuses them back in right before the competition. The third is the injection of a hormone called erythropoietin, which increases the production of red blood cells in the blood stream. All of these forms of blood doping are used to increase the oxygen-carrying capacity of the blood. Blood doping is mainly used in endurance sports such as cycling and cross-country skiing, because the extra oxygen-carrying capacity gives the athlete more endurance.
The most common forms of prosthetics and enhancement seen in sports today are prosthetic legs and Tommy John surgery. Tommy John surgery has resurrected many careers in Major League Baseball, in some cases allowing pitchers to throw harder than they ever could before. Some prime examples of this are Eric Gagne, Kerry Wood, and John Smoltz. "I hit my top speed (in pitch velocity) after the surgery," says Wood, the Chicago Cubs' 26-year-old All-Star. "I'm throwing harder, consistently." Gagne went from an average pitcher to being Hall of Fame eligible, winning the National League Cy Young Award in 2002 by tying the National League record for most saves in a season, and the National League Rolaids Relief Man of the Year award in 2002 and 2003.
As of now, prosthetic legs and feet are not advanced enough to give an athlete an edge, and people with these prosthetics are allowed to compete, possibly only because they are not actually competitive in the Ironman event and other such races. Prosthetics in track and field, however, are a budding issue. Prosthetic legs and feet may soon be better than their human counterparts. Some prosthetic legs and feet allow runners to adjust the length of their stride, which could potentially improve run times and in time actually allow a runner with prosthetic legs to be the fastest in the world.
## In Fiction
In 1966, Kit Pedler, a medical scientist, created the Cybermen, a race of cyborgs, for the TV program Doctor Who based on his concerns about science changing and threatening humanity. The Cybermen were a race who had replaced much of their bodies with mechanical prostheses and were now supposedly emotionless creatures driven only by logic.
Isaac Asimov's short story "The Bicentennial Man" explored cybernetic concepts. The central character is NDR, a robot who begins to modify himself with organic components. His explorations lead to breakthroughs in human medicine via artificial organs and prosthetics. By the end of the story, there is little physical difference between the body of the hero, now called Andrew, and humans equipped with advanced prosthetics, save for the presence of Andrew's artificial positronic brain. Asimov also explored the idea of the cyborg in relation to robots in his short story Segregationist, collected in The Complete Robot.
The 1972 science fiction novel Cyborg, by Martin Caidin, told the story of a man whose damaged body parts are replaced by mechanical devices. This novel was later adapted into a TV series, The Six Million Dollar Man, in 1973, and its spin-off, The Bionic Woman in 1976.
In 1974, Marvel Comics writer Rich Buckler introduced the cyborg Deathlok the Demolisher, and a dystopian post-apocalyptic future, in Astonishing Tales #25. Buckler's character dealt with rebellion and loyalty, with allusion to Frankenstein's monster, in a twelve-issue run. Deathlok was later resurrected in Captain America.
The 1982 film Blade Runner featured creatures called replicants, bio-engineered or bio-robotic beings. The Nexus series — genetically designed by the Tyrell Corporation — are virtually identical to an adult human, but have superior strength, agility, and variable intelligence depending on the model. Because of their physical similarity to humans a replicant must be detected by its lack of emotional responses and empathy to questions posed in a Voight-Kampff test. A derogatory term for a replicant is "skin-job," a term heard again extensively in Battlestar Galactica. In the opening crawl of the film, they are first said to be the next generation in robotics. The crawl also states genetics play some role in the creation of replicants. The original novel makes mention of the biological components of the androids, but also alludes to the mechanical aspects commonly found in other material relating to robots.
The 1987 science fiction action film RoboCop features a cyborg protagonist. After being killed by a criminal gang, police officer Alex Murphy is transformed by a private company into a cyborg cop. The transformation is used to explore the theme of reification and identity. There are cyborg kaiju in the Godzilla films such as Gigan and Mechagodzilla.
Although frequently referred to onscreen as a cyborg, The Terminator might more properly be called an android. However, because it has skin and blood (cellular organic systems), the Terminator is technically a cybernetic organism.
One of the most famous cyborgs is Darth Vader from the Star Wars films. Vader was once Anakin Skywalker, a famous Jedi turned to the Dark Side. After a furious battle with his former master, Obi-Wan Kenobi, Anakin is left for dead beside a lava flow on Mustafar, and is outfitted with an artificial life support system as well as robotic arms and legs. General Grievous, Lobot and Luke Skywalker are the three other most prominent cyborgs in the Star Wars universe.
In the manga and anime series by Akira Toriyama titled Dragon Ball, a scientist named Dr. Gero created several cyborgs, including villain Cell, sibling cyborgs Android 17 and Android 18, as well as Android 20, who was built from Gero himself.
In the manga and anime series Ghost in the Shell, Motoko Kusanagi lives in a world where the majority of adults are cyborgs and can connect wirelessly to the Internet for real-time communication and data research. The most common augmentation in the series is the artificial brain, called a cyberbrain.
Cyclin
Cyclins are a family of proteins involved in the progression of cells through the cell cycle. They are the "regulatory subunits of the heterodimeric protein kinases that control cell cycle events."
# Function
A cyclin forms a complex with its partner cyclin-dependent kinase (Cdk), which activates the latter's protein kinase function.
Cyclins are so named because their concentration varies in a cyclical fashion during the cell cycle; they are produced or degraded as needed in order to drive the cell through the different stages of the cell cycle.
When its concentration in the cell is low, the cyclin detaches from the Cdk, inhibiting the enzyme's activity, probably by causing a protein chain to block the enzymatic site.
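To make the idea of cyclic production and degradation concrete, the toy sketch below integrates a deliberately simple rate equation in which cyclin is synthesized at a constant rate and its degradation rate switches from low to high partway through the run, standing in for ubiquitin-mediated destruction late in the cycle. This is only an illustration: the model, the parameter values, and the switch time are invented for the example and are not taken from the cell-cycle literature.

```python
# Toy sketch only: constant cyclin synthesis with a switchable degradation
# rate, integrated with a simple Euler scheme. All parameters are
# illustrative and are not fitted to any real cyclin or cell-cycle data.

def simulate_cyclin(t_end=100.0, dt=0.01, k_syn=1.0,
                    k_deg_low=0.05, k_deg_high=0.5, switch_time=50.0):
    """Integrate dC/dt = k_syn - k_deg(t) * C and return (times, levels)."""
    times, levels = [], []
    c = 0.0
    steps = round(t_end / dt)
    for i in range(steps + 1):
        t = i * dt
        times.append(t)
        levels.append(c)
        k_deg = k_deg_high if t >= switch_time else k_deg_low
        c += (k_syn - k_deg * c) * dt
    return times, levels

times, levels = simulate_cyclin()
print(f"cyclin level just before the degradation switch: {levels[4999]:.1f}")
print(f"cyclin level at the end of the run:              {levels[-1]:.1f}")
```

Plotting levels against times gives one rise-and-fall cycle of the kind described above; in a real cell the degradation "switch" is itself controlled downstream of cyclin-Cdk activity, which is what closes the loop and makes the behaviour periodic.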
# Types
There are several different cyclins which are active in different parts of the cell cycle and which cause the Cdk to phosphorylate different substrates. However, there are several "orphan" cyclins for which no Cdk partner has been identified. For example, cyclin F is an orphan cyclin that is essential for G2/M transition.
Other specific types include:
- Cyclin D
- Cyclin E
- Cyclin A
- Cyclin B
# Domain structure
Cyclins contain two domains of similar all-alpha fold, N- and C-terminal.
# Human proteins with cyclin domains
CABLES2; CCNA1; CCNA2; CCNB1; CCNB2; CCNB3; CCNC; CCND1;
CCND2; CCND3; CCNE1; CCNE2; CCNF; CCNG1; CCNG2; CCNH;
CCNI; CCNJ; CCNJL; CCNK; CCNL1; CCNT1; CCNT2; CCNY;
CCNYL1; CNTD2; UDG2;
# History
Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of cyclin and cyclin-dependent kinase, central molecules in the regulation of the cell cycle.
Cytome
# Overview
Cytomes are the cellular systems, subsystems, and functional components of the body. The cytome is the collection of the complex and dynamic cellular processes (structure and function) underlying physiological processes. It describes the structural and functional heterogeneity of the cellular diversity of an organism. The study of cytomes is called Cytomics.
The Human Cytome Project concerns the study of the structure and function of an organism's biological systems at the cytome level. It relates to Cytomics, the study of cell systems (cytomes) at the single-cell level. The idea of a Human Cytome Project was first discussed at a scientific meeting at the Focus on Microscopy (FOM) conference in Philadelphia on Wednesday afternoon, 7 April 2004.
Cytrel
Cytrel is a cellulose-based tobacco substitute used in some low-tar cigarette brands, famously comprising 25% of the Silk Cut brand.
Development of a replacement for tobacco in cigarettes began in the 1950s, with the aim of reducing 'undesirable tobacco smoke components'. Cytrel was developed by the Celanese Fiber Marketing Company.
The fiber went through a difficult development, with scientists struggling to achieve acceptable smoking, taste and manufacturing properties. However, the product was of a lower density than tobacco and useful as a bulking agent so development continued. After five revisions the Celanese Fiber Marketing Company released Type 308 to the market.
It was one of the NSMs (New Smoking Materials) that gained popularity in the 1970s.
DEPDC5
DEPDC5 (or DEP domain-containing 5) is a human protein of poorly understood function but has been associated with cancer in several studies. It is encoded by a gene of the same name, located on chromosome 22.
# Function
The function of DEPDC5 is not yet known, but it has been implicated in intracellular signal transduction based on homology between the DEP domains of DEPDC5 and Dishevelled-1 (DVL1).
Mutations in this gene have been associated with cases of focal epilepsy (doi:10.1038/ng.2601).
# Gene
In Homo sapiens, the DEPDC5 gene has been localized to the long arm of chromosome 22, 22q12.2-q12.3, between the PRRL14 and YWHAH genes. The clinical relevance of this gene includes an intronic SNP (rs1012068) that has been associated with a 2-fold hepatocellular carcinoma-risk increase.
# Structure
## Domains
### DEP
The DEP domain derives its name from the proteins Dishevelled, Egl-10 and Pleckstrin, each of which contain a variant of this domain. It spans 82 residues and is 343 amino acids from the C-terminus. A SWISS-MODEL predicts two beta sheets and three alpha helices contained within the domain.
While its exact function is not known, the DEPDC5 DEP domain has the highest structural similarity to the DEP domain of DVL1 when performing a CBLAST search at NCBI. The alignment scores an E-value of 1.00e-08 and indicates 30% identity between the DEP domains of the two proteins. In DVL1, the DEP domain is involved in localization of the protein to the plasma membrane as part of the Wnt signaling pathway.
### DUF 3608
The DUF 3608 domain sits 99 amino acids from the N-terminus and itself spans 280 amino acids. PELE predicts at least one beta sheet and two alpha helices within this domain. It also contains 26 highly conserved residues and several post-translation modifications. Both occurrences are addressed later in this article.
Evidence for the function of DUF 3608 has been uncovered in the yeast homolog Iml1p. Iml1p's DUF 3608 is thought to aid in binding to two protein partners, Npr2 and Npr3. Together, these three proteins form the Iml1-Npr2-Npr3 complex and are involved in "non-nitrogen starvation" autophagy regulation. The researchers who uncovered this propose renaming DUF 3608 to RANS (Required for Autophagy induced under Non-nitrogen Starvation conditions).
## Secondary Structure
Based on unanimous consensus by the secondary structure prediction tool PELE, DEPDC5 contains at least ten alpha helices and nine beta sheets. The locations of these secondary structures are illustrated in the image below: red highlights are alpha helices and blue highlights are beta sheets.
# Homology
## Orthologs
Fungi are the most distantly related organisms to contain a protein orthologous to human DEPDC5, including Saccharomyces cerevisiae and Albugo laibachii. In the fungi, the protein name is Iml1p, or vacuolar membrane-associated protein Iml1. Name deviations in other organisms include CG12090 (Drosophila) and AGAP007010 (mosquito). Conservation is high between humans and other vertebrate species, ranging from 74% identity in cichlids to 99% identity in chimpanzees.
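Percent identity figures like the 74% and 99% quoted above come from pairwise alignments. As a hedged illustration only (not the pipeline actually used for these comparisons), the helper below computes percent identity from two pre-aligned rows; the short fragments passed to it are invented for the demonstration and are not DEPDC5 sequences.

```python
# Illustrative helper, not the tool used for the ortholog analysis above:
# given two rows of a pairwise alignment (equal length, '-' marking gaps),
# report percent identity over the aligned columns.

def percent_identity(row_a: str, row_b: str) -> float:
    if len(row_a) != len(row_b):
        raise ValueError("aligned rows must have equal length")
    identical = aligned = 0
    for a, b in zip(row_a, row_b):
        if a == "-" and b == "-":
            continue            # column is a gap in both rows; ignore it
        aligned += 1
        if a == b and a != "-":
            identical += 1
    return 100.0 * identical / aligned if aligned else 0.0

# Invented fragments, purely for demonstration:
print(f"{percent_identity('MKT-LLVA', 'MKSALLVA'):.1f}% identity")  # prints 75.0% identity
```

Different tools count gapped columns slightly differently (for example, dividing by the full alignment length or by the shorter sequence), which is one reason published identity percentages for the same pair of proteins can vary by a point or two.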
The following table summarizes an analysis of 20 proteins orthologous to human DEPDC5.
30 residues have been conserved since animals and fungi diverged, with 26 of these located in the DUF 3608 domain. The following multiple sequence alignment illustrates this conservation of the DUF domain; representatives from invertebrate and fungal clades are aligned to the human DUF 3608 with completely conserved residues colored green.
## Paralogs
There are no known human DEPDC5 paralogs, but there are 64 human proteins containing a homologous DEP domain. There are also no identified paralogs for the yeast protein Iml1, the most distantly related ortholog of human DEPDC5.
# Expression
DEPDC5 expression has been characterized as ubiquitous in human tissue by RT-PCR analysis and in DNA microarray studies as displayed in the chart below.
DEPDC5 expression profile of 52 human tissues
One study on patients with hepatocellular carcinoma found higher DEPDC5 expression in tumor tissue than in non-tumor tissue. Conversely, a homozygous deletion of three genes, one being DEPDC5, was found in two glioblastoma cases. Other expression anomalies include zero expression in the MDA-MB-231 breast cancer cell line and low expression in the P116 (ZAP70-negative) cell line.
# Post-translational Modifications
The following post-translational modifications were predicted with the proteomic tools compiled at ExPASy and PhosphoSite Plus for the human DEPDC5 protein.
# Interaction
DEPDC5 may interact with the proteasome subunit PSMA3, as evidenced by coimmunoprecipitation, and with the transcription factor MYC. DEPDC5 is in the "GATOR1" complex with NPRL2 and NPRL3.
DEPTOR
DEP domain-containing mTOR-interacting protein (DEPTOR) also known as DEP domain-containing protein 6 (DEPDC6) is a protein that in humans is encoded by the DEPTOR gene.
# Structure
The DEPTOR gene is found only in vertebrates. In humans, it is located on chromosome 8 at 8q24.12 and encodes a protein of 409 amino acids. Human DEPTOR contains two N-terminal DEP domains and a C-terminal PDZ domain.
# Function
DEPTOR is involved in the mTOR signaling pathway as an endogenous regulator. A direct interaction between DEPTOR and mTOR has been shown. Overexpression of DEPTOR downregulates the activity of mTORC1 and mTORC2 in vitro, and mTORC1 and mTORC2 can both inhibit DEPTOR through phosphorylation.
# Metabolism
DEPTOR cell-autonomously regulates adipogenesis. In the muscle, Baf60c promotes a switch from oxidative to glycolytic myofiber type through DEPTOR-mediated Akt/PKB activation. Within the brain, DEPTOR is highly expressed in the hippocampus, the medio-basal hypothalamus and the circumventricular organs. Overexpression of DEPTOR in the medio-basal hypothalamus protects mice against high-fat diet-induced obesity by modulating Akt/PKB signaling.
# Clinical cancer
Although in most cancers the mTOR pathway is constitutively activated and DEPTOR expression is low, one study has found that DEPTOR is overexpressed in multiple myeloma cells and is necessary for their survival.
DESOXY
4-desoxymescaline, 4-methyl-3,5-dimethoxyphenethylamine, is a phenethylamine and mescaline analogue with psychedelic properties. It is usually known as DESOXY. It was discovered by Alexander Shulgin and published in PiHKAL.
# Legality
In the United States, the Controlled Substances Act placed mescaline into Schedule I in 1970, and it is similarly controlled in other nations. 4-Desoxymescaline could be considered an analogue of mescaline under the Federal Analogue Act, making it illegal to manufacture, buy, possess, or distribute without a DEA license.
## Dosage
A typical dosage is within the range of 40-120 mg and lasts 6-8 hours.
## Effects
The effects of DESOXY vary significantly from mescaline, despite their chemical similarity. Users report an elevated mood and some hallucinations, although nothing as intense as visuals reported on mescaline. There has been some suggestion that the dosage level of 40-120 mg might be too small to achieve mescaline-like effects, but since this compound has undergone only limited human experiments it may be unsafe to increase the dosage.
DFNB31
Whirlin is a protein that in humans is encoded by the DFNB31 gene.
In rat brain, WHRN interacts with a calmodulin-dependent serine kinase, CASK, and may be involved in the formation of scaffolding protein complexes that facilitate synaptic transmission in the central nervous system (CNS). Mutations in this gene, also known as WHRN, cause autosomal recessive deafness.
# Model organisms
Model organisms have been used in the study of WHRN function. A conditional knockout mouse line, called Whrntm1a(EUCOMM)Wtsi, was generated as part of the International Knockout Mouse Consortium program, a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists.
Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty tests were carried out on mutant mice and two significant abnormalities were observed. Whrntm1a(EUCOMM)Wtsi homozygous mice showed moderate to severe hearing loss at 14 weeks, and female homozygous mutant animals also displayed an increased thermal nociceptive threshold in a hot plate test.
DGLUCY
DGLUCY (D-glutamate cyclase) is a protein that in humans is encoded by the DGLUCY gene.
# Orthologs
The human gene, DGLUCY, is highly conserved in mammals and birds. Orthologs gathered from BLAST and BLAT searches reveal that the human DGLUCY mRNA sequence is conserved with a sequence identity of 98% in chimpanzees, 88% in mice, and 81% in platypus and chicken. The following table contains a list of orthologs that were gathered from BLAST searches. Sequence alignments were performed using blastn to derive sequence identity, score, and E-values between the human c14orf159 variant 1 mRNA and its orthologs.
The protein that the human gene DGLUCY encodes has been found to be highly conserved among mammals, birds, amphibians, fish, tunicates, cnidarians, and echinoderms. However, no protein orthologs have been found in nematodes, arthropods, fungi, protists, plants, bacteria, or archaea. Fungi and bacteria contain the DUF1445 conserved domain which is found in human c14orf159 and its orthologs. BLAST and BLAT searches have been utilized to find orthologs to the c14orf159 protein. The following table lists protein orthologs for the human protein with sequence identity, sequence similarity, scores, and E-values derived from blastp sequence comparisons.
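For readers who want to reproduce this kind of search, the sketch below shows one way to submit a remote blastp query to NCBI from Python using Biopython and to read back E-values and identities for the top hits. It is a generic, hedged example: the query accession is a placeholder to be replaced, the call needs network access to NCBI, and it is not claimed to be the exact procedure used for the comparisons reported here.

```python
# Hedged sketch of a remote blastp search with Biopython. Requires internet
# access and is rate-limited by NCBI; the accession below is a placeholder.
from Bio.Blast import NCBIWWW, NCBIXML

QUERY_ACCESSION = "NP_000000"  # placeholder: replace with a real protein accession

result_handle = NCBIWWW.qblast("blastp", "nr", QUERY_ACCESSION, hitlist_size=10)
record = NCBIXML.read(result_handle)

for alignment in record.alignments:
    best_hsp = alignment.hsps[0]          # strongest local alignment for this hit
    identity_pct = 100.0 * best_hsp.identities / best_hsp.align_length
    print(f"{alignment.hit_def[:60]:<60}  E={best_hsp.expect:.1e}  identity={identity_pct:.0f}%")
```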
# Post-translational modification
The protein product of the DGLUCY gene is predicted, and has been experimentally found, to be translocated to the mitochondrion.
Post-translational modifications are predicted for the protein DGLUCY. All predicted sites in human DGLUCY were compared to orthologs using multiple sequence alignments to determine likelihood of modification.
# Regulation
Estrogen receptor alpha, in the presence of estradiol, binds to the DGLUCY gene and likely regulates its expression.
DHRS7B
Dehydrogenase/reductase (SDR family) member 7B is an enzyme encoded by the DHRS7B gene in humans, found on chromosome 17p11.2. DHRS7B encodes a protein that is predicted to function in steroid hormone regulation. A deletion in the chromosomal region 17p11.2 has been associated with Smith-Magenis Syndrome, a genetic developmental disorder.
# Gene
## Overview
The DHRS7B gene is located on the positive strand of chromosome 17, beginning at position 21030258 and ending at position 21094836 (64579 bp). DHRS7B contains seven exons with no predicted alternate splice forms, resulting in an 1841 bp mRNA product.
Upstream of DHRS7B on the negative strand of chromosome 17p11.2 are the genes Coiled-coil domain containing 144 family, N-terminal-like (CCDC144NL) and Ubiquitin specific peptidase 22 (USP22). Downstream of DHSRS7B on the negative strand of chromosome 17p11.2 is the gene Transmembrane protein 11 (TMEM11), and on the positive strand is the gene Mitogen-activated protein kinase, kinase 3 (MAP2K3).
## Gene expression
Microarray and EST data indicate that the DHRS7B gene is highly expressed in the testes, thyroid, kidneys, and adipose tissue. There is moderate expression in the brain, pancreas, mammary glands, and ovaries, and little expression in the spleen, thymus, tonsils, bone marrow, and bladder.
# Protein structure
The DHRS7B gene has a predicted protein product that is 325 amino acids, a molecular weight of 35.1 kDa, and an isoelectric point of 9.867. There is one predicted transmembrane domain in the protein sequence, a large neutrally charged region spanning residues 18-38. No signal peptides have been identified in DHRS7B; cellular localization remains unclear.
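Summary figures such as the 35.1 kDa molecular weight and 9.867 isoelectric point quoted above can be recomputed from a protein sequence with standard tools. The snippet below is a hedged sketch using Biopython's ProtParam module; the sequence in it is an arbitrary placeholder rather than the DHRS7B sequence, and this is not necessarily the software behind the cited predictions.

```python
# Hedged sketch: basic physicochemical parameters with Biopython's ProtParam.
# The sequence below is an arbitrary placeholder, NOT the DHRS7B protein.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_seq = (
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"
)
analysis = ProteinAnalysis(placeholder_seq)

print(f"length:            {len(placeholder_seq)} residues")
print(f"molecular weight:  {analysis.molecular_weight() / 1000.0:.1f} kDa")
print(f"isoelectric point: {analysis.isoelectric_point():.2f}")
```

Swapping in the real 325-residue DHRS7B sequence (for example, from UniProt) should reproduce values close to those quoted, with small differences depending on the pKa set the tool uses.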
DHRS7B is a member of the short chain dehydrogenase/reductase (SDR) superfamily and possesses characteristic features of an SDR within the protein sequence. The following table identifies sequences in the protein and corresponding function.
## Interactions
In humans, DHRS7B has been shown to physically interact with other proteins such as Mediator complex subunit 19 (MED19) and Brain and reproductive expressed-modulator protein (BRE). MED19 was found to interact with DHRS7B through a two hybrid screening approach and plays a role as a co-activator in regulated transcription of most RNA polymerase II dependent genes. BRE is a component of the BRCA1-A complex, which recognizes Lys-63 linked ubiquitinated histones H2A and H2AX DNA lesion sites (identified using anti-tag coimmunoprecipitation). Other proteins interacting with DHRS7B have only been identified through text-mining.
# Homology
## Orthologs
Conservation of the DHRS7B protein sequence is high in mammals; moderate in reptiles, birds, fish, and amphibians; and minimal in invertebrates, insects, and fungi.
## Paralogs
Paralogs of DHRS7B are all in the SDR superfamily and conservation of the SDR functional motifs was identified in a multiple sequence alignment.
# Clinical significance
DHRS7B has been identified in the Smith-Magenis Syndrome region, where a deletion in this chromosomal region (17p11.2) causes a genetic developmental disorder. In breast cancer cells expressing CD44 and CD24, DHRS7B expression was observed to be downregulated. CD44 is an antigen found on the surface of most cell types and functions as a receptor that binds tissue macromolecules; it also acts as an adhesion molecule for leukocytes at peripheral lymphoid organs and inflammation sites. CD24 is associated with B-cells, epithelial cells, and dendritic cells, functions as an adhesion molecule, and has been shown to enhance a tumor cell's ability to metastasize.
DHTKD1
Dehydrogenase E1 and transketolase domain containing 1 is a protein that in humans is encoded by the DHTKD1 gene. This gene encodes a component of a mitochondrial 2-oxoglutarate-dehydrogenase-complex-like protein involved in the degradation pathways of several amino acids, including lysine. Mutations in this gene are associated with 2-aminoadipic 2-oxoadipic aciduria and Charcot-Marie-Tooth Disease Type 2Q.
# Structure
The DHTKD1 gene encodes a protein of 919 amino acids that represents one of two isoforms within the 2-oxoglutarate dehydrogenase complex.
# Function
DHTKD1 is part of an OGDHc-like supercomplex that is responsible for a crucial step in the degradation pathways of L-lysine, L-hydroxylysine, and L-tryptophan. Specifically, this enzyme catalyzes the decarboxylation of 2-oxoadipate to glutaryl-CoA.
There is a strong correlation between DHTKD1 expression levels and ATP production, which signifies that DHTKD1 plays a critical role in energy production in mitochondria. Moreover, suppression of DHTKD1 results in decreased levels of biogenesis and increased levels of reactive oxygen species (ROS) within the mitochondria. Globally, this impairs cell growth and enhances cell apoptosis.
# Clinical significance
Mutations in the DHTKD1 gene are associated with alpha-aminoadipic and alpha-ketoadipic aciduria, an autosomal recessive inborn error of lysine, hydroxylysine, and tryptophan degradation. Only a handful of mutations have been observed in patients, including three missense mutations, two nonsense mutations, two splice donor mutations, one duplication, and one deletion and insertion. Two missense mutations are the most common cause of the deficiency. The clinical presentation of this disease in inconsistent.
Mutations in this gene could also cause neurological abnormalities. Indeed, one form of Charcot-Marie-Tooth (CMT) disease has been associated with DHTKD1, although the disease encompasses a wide spectrum of clinical neuropathies. Specifically, a hyterozygous nonsense mutation within the gene leads to decreased levels of DHTKD1 mRNA and proteins, and impaired ATP generation. This implicates this mutation as a causative agent for CMT-2 Disease. | DHTKD1
Dehydrogenase E1 and transketolase domain containing 1 is a protein that in humans is encoded by the DHTKD1 gene. This gene encodes a component of a mitochondrial 2-oxoglutarate-dehydrogenase-complex-like protein involved in the degradation pathways of several amino acids, including lysine. Mutations in this gene are associated with 2-aminoadipic 2-oxoadipic aciduria and Charcot-Marie-Tooth Disease Type 2Q.[1]
# Structure
The DHTKD1 gene encodes a protein that has 919 amino acids, and is one of two isoforms within the 2-oxoglutarate-dehydrogenase complex.[1]
# Function
DHTKD1 is part of an OGDHc-like supercomplex that is responsible for a crucial step in the degradation pathways of L-lysine, L-hydroxylysine, and L-tryptophan. Specifically, this enzyme catalyzes the decarboxylation of 2-oxoadipate to glutaryl-CoA.[2]
There is a strong correlation between DHTKD1 expression levels and ATP production, which signifies that DHTKD1 plays a critical role in energy production in mitochondria. Moreover, suppression of DHTKD1 results in decreased levels of biogenesis and increased levels of reactive oxygen species (ROS) within the mitochondria. Globally, this impairs cell growth and enhances cell apoptosis.[3]
# Clinical significance
Mutations in the DHTKD1 gene are associated with alpha-aminoadipic and alpha-ketoadipic aciduria, an autosomal recessive inborn error of lysine, hydroxylysine, and tryptophan degradation. Only a handful of mutations have been observed in patients, including three missense mutations, two nonsense mutations, two splice donor mutations, one duplication, and one deletion and insertion. Two missense mutations are the most common cause of the deficiency. The clinical presentation of this disease is inconsistent.[2][4]
Mutations in this gene could also cause neurological abnormalities.[3] Indeed, one form of Charcot-Marie-Tooth (CMT) disease has been associated with DHTKD1, although the disease encompasses a wide spectrum of clinical neuropathies. Specifically, a heterozygous nonsense mutation within the gene leads to decreased levels of DHTKD1 mRNA and protein, and impaired ATP generation. This implicates this mutation as a causative agent for CMT type 2 disease.[2]
DIAPH1
Protein diaphanous homolog 1 is a protein that in humans is encoded by the DIAPH1 gene.[1][2][3]
# Function
This gene is a homolog of the Drosophila diaphanous gene and belongs to the protein family of the formins, characterized by the formin homology 2 (FH2) domain. It has been linked to autosomal dominant, fully penetrant, nonsyndromic low-frequency progressive sensorineural hearing loss. Actin polymerization involves proteins known to interact with diaphanous protein in Drosophila and mouse. It has therefore been speculated that this gene may have a role in the regulation of actin polymerization in hair cells of the inner ear. Alternatively spliced transcript variants encoding distinct isoforms have been found for this gene.[3]
# Interactions
DIAPH1 has been shown to interact with RHOA.[4]
# Clinical significance
Mutations in this gene have been associated with macrothrombocytopenia and hearing loss,[5] as well as microcephaly, blindness, and early-onset seizures.[6]
Its actions on platelet formation appear to occur at the level of the megakaryocyte where it is involved in cytoskeleton formation.[7]
DLGAP1
Disks large-associated protein 1 (DAP-1), also known as guanylate kinase-associated protein (GKAP), is a protein that in humans is encoded by the DLGAP1 gene. DAP-1 is known to be highly enriched in synaptosomal preparations of the brain, and present in the post-synaptic density.[1]
# Function
This gene encodes the protein called guanylate kinase-associated protein (GKAP). GKAP binds to the SHANK and PSD-95 proteins, facilitating the assembly of the post-synaptic density of neurons.[2] Dlgap1 has five 14-amino-acid repeats and three Pro-rich portions.
# Interactions
DLGAP1 has been shown to interact with:
- DLG1[3][4][5][6]
- DLG4[3][4][5][6][7][8]
- DYNLL1[7]
- DYNLL2[7]
- SHANK2[7][8]
The interaction with PSD-95 and S-SCAM is mediated by the GUK domain,[9] and it has been hypothesized that DLGAP1 may therefore also interact with other GUK-domain-containing proteins.
DLGAP2
Disks large-associated protein 2 is a protein that in humans is encoded by the DLGAP2 gene.[1][2][3]
# Function
The product of this gene is one of the membrane-associated guanylate kinases localized at postsynaptic density in neuronal cells. These kinases are a family of signaling molecules expressed at various submembrane domains and contain the PDZ, SH3 and the guanylate kinase domains. This protein may play a role in the molecular organization of synapses and in neuronal cell signaling. Alternatively spliced transcript variants encoding different isoforms have been identified, but their full-length nature is not known.[3]
# Interactions
DLGAP2 has been shown to interact with DLG4, the canonical synapse marker protein, which in turn binds to N-methyl-D-aspartate (NMDA) receptors and Shaker-type K+ channels.[4]
# Clinical significance
As with many other synaptic genes, including its binding partner Shank2, DLGAP2 has been shown to be associated with autism.[5]
DNAJA3
DnaJ homolog subfamily A member 3, mitochondrial, also known as Tumorous imaginal disc 1 (TID1), is a protein that in humans is encoded by the DNAJA3 gene on chromosome 16.[1][2][3] This protein belongs to the DNAJ/Hsp40 protein family, which is known for binding and activating Hsp70 chaperone proteins to perform protein folding, degradation, and complex assembly.[2][3][4] As a mitochondrial protein, it is involved in maintaining membrane potential and mitochondrial DNA (mtDNA) integrity, as well as cellular processes such as cell movement, growth, and death.[2][3][5][6][7] Furthermore, it is associated with a broad range of diseases, including neurodegenerative diseases, inflammatory diseases, and cancers.[3][5][7][8]
# Structure
As a member of the DNAJ/Hsp40 protein family, DNAJA3 contains a conserved DnaJ domain, which includes an HPD motif that interacts with Hsp70 to perform its cochaperone function.[2][3][4][5][6] The DnaJ domain is composed of tetrahelical regions containing a tripeptide of histidine, proline and aspartic acid situated between two helices. In addition, this protein contains a glycine/phenylalanine (G/F) rich linker region and a central cysteine-rich region similar to a zinc finger repeat, both characteristic of type I DnaJ molecular chaperones.[4][5][6] The mitochondrial targeting sequence at its N-terminal directs the localization of the protein to the mitochondrial matrix.[4][5][6]
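As a rough illustration of the sequence features described above, a simple pattern scan can locate an HPD tripeptide and tally cysteines in a candidate zinc-finger-like stretch. This is a toy example on a made-up fragment, not the profile-based domain annotation used by curated databases; the real DNAJA3 sequence would have to be supplied.

```python
# Toy sketch: locate the J-domain HPD tripeptide and count cysteines in a
# supplied fragment. The fragment below is invented for illustration and is
# not the real DNAJA3 sequence.
import re

fragment = "MAAKRHPDLLKCGCKKCPECLNCAACRRC"  # hypothetical fragment

hpd_positions = [m.start() + 1 for m in re.finditer("HPD", fragment)]  # 1-based
cys_positions = [i + 1 for i, aa in enumerate(fragment) if aa == "C"]

print("HPD motif at position(s):", hpd_positions)
print(f"{len(cys_positions)} cysteines at positions {cys_positions}")
```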
DNAJA3 possesses two alternatively spliced forms: a long isoform of 43 kDa and a short isoform of 40 kDa.[2][3][5][8] The long isoform contains an additional 33 residues at its C-terminal compared to the short isoform, and this region is predicted to hinder the long isoform from regulating membrane potential.[3]
# Function
DNAJA3 is a member of the DNAJ/Hsp40 protein family, which stimulates the ATPase activity of Hsp70 chaperones and plays critical roles in protein folding, degradation, and multiprotein complex assembly.[2][3][4] DNAJA3 localizes to the mitochondria, where it interacts with the mitochondrial Hsp70 chaperone (mtHsp70) to carry out the chaperone system.[2][3] This protein is crucial for maintaining a homogeneous distribution of mitochondrial membrane potential and the integrity of mtDNA. DNAJA3 homogenizes membrane potential through regulation of complex I aggregation, though the mechanism for maintaining mtDNA remains unknown.[3] These functions then allow DNAJA3 to mediate mitochondrial fission through DRP1 and, by extension, cellular processes such as cell movement, growth, proliferation, differentiation, senescence, and apoptosis.[2][3][5][6][7] However, though both isoforms of DNAJA3 are involved with cell survival, they are also observed to influence two opposing outcomes. The proapoptotic long isoform induces apoptosis by stimulating cytochrome C release and caspase activation in the mitochondria, whereas the antiapoptotic short isoform prevents cytochrome C release and, thus, apoptosis.[3][7] In neuromuscular junctions, only the short isoform clusters acetylcholine receptors for efficient synaptic transmission.[3] The two isoforms also differ in their specific mitochondrial localization, which may partially account for their different functions.[3][7]
Before localization to the mitochondria, DNAJA3 is transiently retained in the cytosol, where it can also interact with cytosolic proteins and possibly function to transport these proteins.[4][7]
# Clinical significance
This protein is implicated in several cancers, including skin cancer, breast cancer, and colorectal cancer.[8] It is a key player in tumor suppression through interactions with oncogenic proteins, including ErbB2 and the p53 tumor suppressor protein.[2][4] Under hypoxic conditions, DNAJA3 may directly influence p53 complex assembly or modification, or indirectly ubiquitinylate p53 through ubiquitin ligases like MDM2. Moreover, both p53 and DNAJA3 must be present in the mitochondria in order to induce apoptosis in the cell.[4] In head and neck squamous cell carcinoma (HNSCC), DNAJA3 suppresses cell proliferation, anchorage-independent growth, cell motility, and cell invasion by attenuating EGFR and, downstream in the signaling pathway, AKT.[8] Thus, treatments promoting DNAJA3 expression and function may greatly aid the elimination of tumors.[4]
Additionally, DNAJA3 is implicated in neurodegenerative diseases like Parkinson's disease by virtue of its key roles in chaperoning mitochondrial proteins and mediating mitochondrial morphology in conjunction with mtHsp70.[3][5] Psoriasis, a chronic inflammatory skin disease, results from the absence of DNAJA3 activity, which leads to the activation of MK5, increased phosphorylation of HSP27, increased actin cytoskeleton organization, and hyperthickened skin.[7]
# Interactions
DNAJA3 has been shown to interact with:
- ErbB-2 receptor tyrosine kinase[6]
- MK5[7]
- HSPA9[3]
- HSPA8,[9]
- JAK2,[9] and
- RASA1[10]
DNAJB6
DnaJ homolog subfamily B member 6 is a protein that in humans is encoded by the DNAJB6 gene.[1][2][3]
# Function
This gene encodes a member of the DNAJ protein family. DNAJ family members are characterized by a highly conserved amino acid stretch called the 'J-domain' and function as one of the two major classes of molecular chaperones involved in a wide range of cellular events, such as protein folding and oligomeric protein complex assembly. This family member may also play a role in polyglutamine aggregation in specific neurons. Alternative splicing of this gene results in multiple transcript variants; however, not all variants have been fully described.[3]
# Interactions
DNAJB6 has been shown to interact with Keratin 18.[4] It has also been shown that the aggregation of Aβ42 (a process involved in e.g. Alzheimer's disease) is retarded by DNAJB6 in a concentration-dependent manner, extending to very low sub-stoichiometric molar ratios of chaperone to peptide.[5]
DNAJC3
DnaJ homolog subfamily C member 3 is a protein that in humans is encoded by the DNAJC3 gene.[1][2][3]
# Function
The protein encoded by this gene contains multiple tetratricopeptide repeat (TPR) motifs as well as the highly conserved J domain found in DNAJ chaperone family members. It is a member of the tetratricopeptide repeat family of proteins and acts as an inhibitor of the interferon-induced, dsRNA-activated protein kinase (PKR).[3]
# Clinical significance
The DNAJC3 protein is an important apoptotic constituent. During normal embryologic processes, during cell injury (such as ischemia-reperfusion injury in heart attacks and strokes), or during the development and progression of cancer, an apoptotic cell undergoes structural changes including cell shrinkage, plasma membrane blebbing, nuclear condensation, and fragmentation of the DNA and nucleus. This is followed by fragmentation into apoptotic bodies that are quickly removed by phagocytes, thereby preventing an inflammatory response.[4] It is a mode of cell death defined by characteristic morphological, biochemical and molecular changes. It was first described as a "shrinkage necrosis", and then this term was replaced by apoptosis to emphasize its role opposite mitosis in tissue kinetics. In later stages of apoptosis the entire cell becomes fragmented, forming a number of plasma membrane-bounded apoptotic bodies which contain nuclear and/or cytoplasmic elements. The ultrastructural appearance of necrosis is quite different, the main features being mitochondrial swelling, plasma membrane breakdown and cellular disintegration. Apoptosis occurs in many physiological and pathological processes. It plays an important role during embryonal development as programmed cell death and accompanies a variety of normal involutional processes in which it serves as a mechanism to remove "unwanted" cells.
Moreover, an important role for DNAJC3 has been attributed to diabetes mellitus as well as multisystem neurodegeneration.[5][6] Diabetes mellitus and neurodegeneration are common diseases for which shared genetic factors are still only partly known. It was shown that loss of the BiP (immunoglobulin heavy-chain binding protein) co-chaperone DNAJC3 leads to diabetes mellitus and widespread neurodegeneration. Accordingly, three siblings with juvenile-onset diabetes and central and peripheral neurodegeneration, including ataxia, upper-motor-neuron damage, peripheral neuropathy, hearing loss, and cerebral atrophy, were investigated. Subsequently, exome sequencing identified a homozygous stop mutation in DNAJC3. Further screening of a diabetes database with 226,194 individuals yielded eight phenotypically similar individuals and one family carrying a homozygous DNAJC3 deletion. DNAJC3 was absent in fibroblasts from all affected subjects in both families. To delineate the phenotypic and mutational spectrum and the genetic variability of DNAJC3, 8,603 exomes were further analyzed, including 506 from families affected by diabetes, ataxia, upper-motor-neuron damage, peripheral neuropathy, or hearing loss. This analysis revealed only one further loss-of-function allele in DNAJC3 and no further associations in subjects with only a subset of the features of the main phenotype.[5] Notably, the DNAJC3 protein is also considered an important marker for stress in the endoplasmic reticulum.[6]
# Interactions
DNAJC3 has been shown to interact with:
- EIF2AK2,[7][8]
- EIF2AK3,[9] and
- PRKRIR.[8]
DNAJC5
DnaJ homolog subfamily C member 5, also known as cysteine string protein or CSP, is a protein that in humans is encoded by the DNAJC5 gene.[1] It was first described in 1990.[2]
# Gene
In humans, the gene is located on the long arm of chromosome 20 (20q13.33) on the Watson (positive) strand. The gene is 40,867 bases in length, and the encoded protein has 198 amino acids with a predicted molecular weight of 22.149 kilodaltons (kDa). The weight of the mature protein is 34 kDa.
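Figures such as the genomic coordinates and gene length quoted above can be cross-checked programmatically. The sketch below queries the public Ensembl REST lookup service; the endpoint, parameters, and response fields follow Ensembl's documentation but should be treated as assumptions to verify, and network access is required.

```python
# Hedged sketch: look up DNAJC5 genomic coordinates via the Ensembl REST API.
# Endpoint and field names are assumptions based on the documented
# /lookup/symbol service; verify against current Ensembl REST documentation.
import requests

resp = requests.get(
    "https://rest.ensembl.org/lookup/symbol/homo_sapiens/DNAJC5",
    headers={"Content-Type": "application/json"},
    timeout=30,
)
resp.raise_for_status()
gene = resp.json()

length_bp = gene["end"] - gene["start"] + 1
print(f"{gene.get('display_name', 'DNAJC5')}: "
      f"chr{gene['seq_region_name']}:{gene['start']}-{gene['end']} "
      f"({length_bp:,} bp), strand {gene['strand']}")
```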
This gene is highly conserved and found both in invertebrates and vertebrates. In humans, a pseudogene of this gene is located on the short arm of chromosome 8.
# Structure
The organisation of the protein is as follows:[3]
- an N-terminus phosphorylation site for protein kinase A
- a J domain (~70 amino acids)
- a linker region
- a cysteine motif consisting of 13–15 cysteines within a stretch of 25 amino acids. It is heavily palmitoylated in the cysteine string motif.
- a less conserved C-terminal domain
# Tissue distribution
This protein is abundant in neural tissue and displays a characteristic localization to synaptic and clathrin coated vesicles. It is also found on secretory vesicles in endocrine, neuroendocrine and exocrine cells. This protein makes up ~1% of the protein content of the synaptic vesicles.[4] DNAJC5 appears to have a role in stimulated exocytosis.[5]
# Function
The encoded protein is a member of the J protein family. These proteins function in many cellular processes by regulating the ATPase activity of 70 kDa heat shock proteins (Hsp70). DNAJC5 is a guanine nucleotide exchange factor for Gα proteins.[6] CSPα plays a role in membrane trafficking and protein folding, and has been shown to have anti-neurodegenerative properties. It is known to play a role in cystic fibrosis and Huntington's disease.[1]
This protein has been proposed as a key element of the synaptic molecular machinery devoted to the rescue of synaptic proteins that have been unfolded by activity-dependent stress.[7][8] Syntaxin 1A, a plasma membrane SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) critical for neurotransmission, forms a complex with CSPα, a G protein and an N-type calcium channel. Huntingtin may be able to displace both syntaxin 1A and CSPα from N-type channels.[9] CSP interacts with the calcium sensor protein synaptotagmin 9 via its linker domain.[10]
Huntingtin-interacting protein 14, a palmitoyl transferase, is required for exocytosis and targeting of CSP to synaptic vesicles. The palmitoyl residues are transferred to the cysteine residues. If these residues are mutated, membrane targeting is reduced or lost.[11] The rat CSP forms a complex with Sgt (SGTA) and Hsc70 (HSPA8) located on the synaptic vesicle surface. This complex functions as an ATP-dependent chaperone that reactivates denatured substrates. Furthermore, the Csp/Sgt/Hsc70 complex appears to be important for maintenance of normal synapses.[3]
Its expression may be increased with the use of lithium.[12] Quercetin promotes formation of stable CSPα-CSPα dimers.[13]
Cysteine-string protein increases the calcium sensitivity of neurotransmitter exocytosis.[14]
# Interactions
DNAJC5 has been shown to interact with the cystic fibrosis transmembrane conductance regulator.[15]
# Clinical significance
Mutations in this gene may cause neuronal ceroid lipofuscinosis.[16]
DNMT3B
DNA (cytosine-5-)-methyltransferase 3 beta is an enzyme that in humans is encoded by the DNMT3B gene.[1] Mutations in this gene are associated with immunodeficiency, centromere instability and facial anomalies syndrome.[2]
# Function
CpG methylation is an epigenetic modification that is important for embryonic development, imprinting, and X-chromosome inactivation. Studies in mice have demonstrated that DNA methylation is required for mammalian development. This gene encodes a DNA methyltransferase which is thought to function in de novo methylation, rather than maintenance methylation. The protein localizes primarily to the nucleus and its expression is developmentally regulated. Eight alternatively spliced transcript variants have been described. The full length sequences of variants 4 and 5 have not been determined.[1]
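For context, the CpG dinucleotides that DNA methyltransferases such as DNMT3B act on are often summarized by the classic observed/expected CpG ratio (number of CpG sites times sequence length, divided by the product of C and G counts). The toy function below merely illustrates that calculation on a made-up fragment; it is unrelated to any DNMT3B-specific assay.

```python
# Toy illustration: observed/expected CpG ratio for a DNA fragment
# (CpG_obs/exp = N_CpG * L / (N_C * N_G)). The example sequence is invented.
def cpg_observed_expected(seq: str) -> float:
    seq = seq.upper()
    n_c, n_g, n_cpg = seq.count("C"), seq.count("G"), seq.count("CG")
    if n_c == 0 or n_g == 0:
        return 0.0
    return (n_cpg * len(seq)) / (n_c * n_g)

example = "CGCGGGCTACGTACGCGCATCGCGTTACG"  # made-up fragment
print(f"CpG observed/expected: {cpg_observed_expected(example):.2f}")
```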
# Clinical significance
Immunodeficiency-centromeric instability-facial anomalies (ICF) syndrome results from defects in lymphocyte maturation caused by aberrant DNA methylation due to mutations in the DNMT3B gene.[2]
Variants of the gene can also contribute to nicotine dependency.[3]
# Interactions
DNMT3B has been shown to interact with:
- CBX5,[4]
- DNMT1,[4][5]
- DNMT3A,[4][5][6]
- KIF4A,[7]
- NCAPG,[7]
- SMC2,[7]
- SUMO1[8] and
- UBE2I.[8]
MD/PhD
MD/PhD or DO/PhD refers to an education which includes both the medical training of a doctor (MD or DO) with the rigor of a scientific researcher (PhD). It can refer to the designation given to a person who has graduated from such an education, or an educational program which incorporates both curriculums.
# Profession
An MD/PhD or DO/PhD is usually more focused on research than a regular MD or DO. Regular doctors can also do research, but, all other things equal, the PhD degree usually allows a better chance for getting research positions and grant acceptance.
# Educational program
The dual degree program at a university is a selective, mostly underwritten program which produces medical doctors who wish to also focus on research. The differences between applicants to a combined degree and applicants to a regular MD or DO degree lie mainly in the amount of research experience they have already attained, and the competitive nature of the degree (due to the usually free tuition and stipend) means that grades and scores are usually higher as well.
The National Institutes of Health (NIH) has developed a grant to underwrite some universities' MD/PhD programs, called the Medical Scientist Training Program.
DOPEY2
DOPEY2 is a human gene located just above the Down Syndrome chromosomal region (DSCR) at sub-band 21q22.2.[1][2][3] Although the exact function of this gene is not yet fully understood, it has been shown to play a role in multiple biological processes, and its over-expression (triplication) has been linked to multiple facets of the Down Syndrome phenotype, most notably mental retardation.[2]
# Gene
The DOPEY2 gene is located on human chromosome 21, at chromosome band 21q22.12.[1] Its alias, C21orf5, stands for chromosome 21 open reading frame 5. The DOPEY2 gene is composed of 137,493 bases making up 37 exons and 39 distinct gt-ag introns, and lies between the CBR3 and KIAA0136 genes.[2][4]
Transcription produces 10 unique mRNAs, 8 alternatively spliced variants, and 2 unspliced forms.[4] These unique mRNAs differ by varying truncation of the 3’ and 5’ ends, as well as the presence of 3 cassette exons.[4] These mRNA variants range from 7691bp (mRNA variant DOPEY2.aAug10) to 315bp (mRNA variant DOPEY2.jAug10-unspliced) and are further described in Table 1 below.[5]
The mRNAs expressed and their expression levels differ by location and tissue type in the body, but overall the gene has been found to be expressed ubiquitously.[2] The highest expression has been found in differentiating, rather than proliferating, tissue zones.[1] Transcript was identified with the highest confidence in erythroleukemia cells, placental cells, and the brain overall, and at a medium confidence level in the perirhinal cortex, medial temporal lobe, and colon, as well as in the salivary and adrenal glands.[4]
# Protein
Of the ten mRNAs produced, six are translated into viable proteins (see the table of variants for more details).[5] The largest and most highly expressed protein, DOPEY2.a, has a molecular weight of 258,230 Da and is composed of 2298 amino acids that make up an N-terminal domain, seven transmembrane domains, and a C-terminal coiled-coil stretch that forms a leucine zipper-like domain.[4] Like other leucine zipper domains, DOPEY2's C-terminal region is hypothesized to be involved in multiple protein-protein and transcription factor interactions.[2] This indicates that DOPEY2 might act as a transcriptional co-activator; however, further research must be done to fully understand its precise physiological function.[2]
# Protein Interactions
Very little work has been done on understanding the intricacies of DOPEY2's protein interactions; however, STRING has identified direct links with three proteins: MON2, TRIP12, and HECTD1.[6] DOPEY2 is also indirectly associated with the following proteins: ARL16, ATP9A, ARL1, ATP9B, UBE3A, HERC5, HERC4, HACE1, UBE3C, and UBR5.[6]
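The STRING associations listed above can in principle be retrieved through STRING's public API. The following sketch is a hedged example only: the URL pattern, parameters, and JSON field names follow the published API documentation and should be re-checked there, and the query is not the analysis that produced the links described in this section.

```python
# Hedged sketch: query STRING for interaction partners of human DOPEY2
# (NCBI taxon 9606). Endpoint and field names are assumptions to verify
# against the current STRING API documentation.
import requests

resp = requests.get(
    "https://string-db.org/api/json/interaction_partners",
    params={"identifiers": "DOPEY2", "species": 9606, "limit": 10},
    timeout=30,
)
resp.raise_for_status()

for row in resp.json():
    partner = row.get("preferredName_B", "?")
    score = row.get("score", "?")
    print(f"DOPEY2 -- {partner}\tcombined score: {score}")
```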
# Homology
Phylogenetic analysis suggests that DOPEY2 can be traced back to a common ancestor of animals and fungi, owing to its highly conserved C-terminal domain. DOPEY2 has 84 known orthologs and 158 speciation nodes in the gene tree.[7] The most similar orthologs are found in the chimpanzee (Pan troglodytes), dog (Canis familiaris), and cow (Bos taurus), as well as in the rat and mouse (Rattus norvegicus and Mus musculus).[7]
The only known paralog is DOPEY1.[7]
# Sub-cellular localization
Gene Ontology (GO) annotation places the DOPEY2 protein mainly in the Golgi membrane, the trans-Golgi network, the cytosol, and the extracellular exosome.[4] COMPARTMENTS localization data place the highest confidence of localization on the extracellular exosome and the Golgi membrane.[8]
Figure 1: Description of mRNA and Protein Variants:[5]
# Function
As mentioned previously, the specific function of DOPEY2 is not yet fully understood; however, its function can be largely inferred through the study of similar genes. DOPEY2 has been found to be involved in the following processes: multicellular organism development (cell differentiation and developmental patterning), cognition, endoplasmic reticulum organization, and Golgi-to-endosome transport.[1][2][9][10]
## Cell differentiation and patterning
The DOPEY2 ortholog in C. elegans, pad-1, was found to have a role in cell differentiation and patterning. In an experiment where pad-1 was silenced using RNA-mediated interference, the phenotype of the injected worm's offspring was embryonic lethality.[1] The reason: most of the embryonic tissues did not undergo appropriate cell patterning during gastrulation.[1] Abnormally positioned cells led to malformation of organs and failed morphogenesis of the embryo.[1] A similar observation was made upon inactivation of the Dop1 gene, the DOPEY2 ortholog in S. cerevisiae.[2] The inactivation led to abnormal cell positioning and subsequent death. Overexpression of the N-terminal domain in S. cerevisiae also resulted in a loss of proper growth polarity and abnormal asexual reproductive patterning.[2] This function was further supported by the ortholog DopA in A. nidulans, which similarly codes for a 207 kDa protein that also contains leucine zipper-like domains.[11] Its inactivation revealed a role in directing alterations in cell division timing, growth polarity, and cell-specific gene expression, ultimately affecting organogenesis and cell differentiation.[11]
## Endoplasmic reticulum and golgi transport
Dop1, the ortholog of DOPEY2 in S. cerevisiae, was found to play an essential role in membrane organization.[9] It was found that Dop1 forms a complex with another protein, Mon2, which recruits the pool of Dop1 from the Golgi.[9] In a Mon2 knockout model, Dop1 was mislocalized, which in turn resulted in defective cycling between endosomes and the Golgi.[9] In a Dop1 knockout model, severe defects in endoplasmic reticulum organization were observed.[9] The Dop1-Mon2 complex was also linked to traffic in the endocytic pathway.[9]
# Clinical significance
## Cognition
DOPEY2 has been identified as a CNV region in Alzheimer's Disease subjects, and its triplication has been tied to various phenotypic aspects of Down Syndrome.[10]
## Down syndrome
DOPEY2 has been associated with the Down Syndrome phenotype.[2] When DOPEY2 was overexpressed in mice, abnormal lamination patterns of cortical cells were observed, as well as alterations in cortical, hippocampal, and cerebellar cells, regions that play key roles in memory and learning.[2] These changes are similar to those observed in Down Syndrome patients.[2] It is because of this that DOPEY2 (C21orf5) is now being studied as a new candidate gene for the mental retardation phenotype in Down Syndrome.[2]
DPAGT1
UDP-N-acetylglucosamine—dolichyl-phosphate N-acetylglucosaminephosphotransferase is an enzyme that in humans is encoded by the DPAGT1 gene.[1][2]
Mutations in DPAGT1 cause myasthenia (Selcen D, Shen XM, Brengman J, Li Y, Stans AA, Wieben E, Engel AG (2014). "DPAGT1 myasthenia and myopathy: genetic, phenotypic, and expression studies". Neurology 82 (20): 1822–30. doi:10.1212/WNL.0000000000000435. PMC 4035711. PMID 24759841).
The protein encoded by this gene is an enzyme that catalyzes the first step in the dolichol-linked oligosaccharide pathway (also see Genetic pathway) for glycoprotein biosynthesis. This enzyme belongs to the glycosyltransferase family 4. This protein is an integral membrane protein of the endoplasmic reticulum. The congenital disorder of glycosylation type Ij is caused by mutation in the gene encoding this enzyme. Alternatively spliced transcript variants encoding different isoforms have been identified.[2] | https://www.wikidoc.org/index.php/DPAGT1 | |
3258f8d50ac985ec7727d232925bca0b04ca7fa0 | wikidoc | DYRK1A | DYRK1A
Dual specificity tyrosine-phosphorylation-regulated kinase 1A is an enzyme that in humans is encoded by the DYRK1A gene. Alternative splicing of this gene generates several transcript variants differing from each other either in the 5' UTR or in the 3' coding region. These variants encode at least five different isoforms.
# Function
DYRK1A is a member of the dual-specificity tyrosine phosphorylation-regulated kinase (DYRK) family. This member contains a nuclear targeting signal sequence, a protein kinase domain, a leucine zipper motif, and a highly conserved 13-consecutive-histidine repeat. It catalyzes its autophosphorylation on serine/threonine and tyrosine residues. It may play a significant role in a signaling pathway regulating cell proliferation and may be involved in brain development. This gene is a homolog of the Drosophila mnb (minibrain) gene and the rat Dyrk gene.
Dyrk1a has also been shown to modulate plasma homocysteine level in a mouse model of overexpression.
# Clinical significance
DYRK1A is localized in the Down syndrome critical region of chromosome 21, and is considered to be a strong candidate gene for learning defects associated with Down syndrome. In addition, a polymorphism (SNP) in DYRK1A was found to be associated with HIV-1 replication in monocyte-derived macrophages, as well as with slower progression to AIDS in two independent cohorts of HIV-1-infected individuals. Mutations in DYRK1A are also associated with Autism spectrum disorder.
# Interactions
DYRK1A has been shown to interact with WDR68. | DYRK1A
| https://www.wikidoc.org/index.php/DYRK1A |
b43232c58192d0a431b8ced859886642a7b627bd | wikidoc | DYRK1B | DYRK1B
Dual specificity tyrosine-phosphorylation-regulated kinase 1B is an enzyme that in humans is encoded by the DYRK1B gene.
# Function
DYRK1B is a member of the DYRK family of protein kinases. DYRK1B contains a bipartite nuclear localization signal and is found mainly in muscle and testis. The protein is proposed to be involved in the regulation of nuclear functions. Three isoforms of DYRK1B have been identified differing in the presence of two alternatively spliced exons within the catalytic domain.
# Interactions
DYRK1B has been shown to interact with:
- PCBD1 and
- RANBP9.
# Clinical significance
A single missense mutation in the DYRK1B gene (R102C) has been found to be associated with autosomal dominant early-onset coronary artery disease, juvenile-onset truncal obesity, severe hypertension, and type II diabetes mellitus in subjects from a nomadic group in Iran.
| https://www.wikidoc.org/index.php/DYRK1B |
1aa499892b3f53eb36e7d51e3fab2a8caf728d21 | wikidoc | Dasani | Dasani
Dasani is a popular brand of bottled water from the Coca-Cola Company, launched in 1999 after the success of Aquafina (produced by Coca-Cola's rival PepsiCo). It is one of many brands of Coca-Cola water products sold around the world.
Dasani water differs in composition between its different markets. For example, Coca-Cola intended to launch it as a natural spring water in France and Germany, although this never went ahead after bad publicity in the United Kingdom.
# Sizes, packages, and flavors
Regular Dasani water comes in the following sizes: 12 oz; 20 oz; 24 oz 'Sports Cap Bottle'; 1 L; 1.5 L; 300 mL; 12 oz fridge pack; 500 mL 6, 12, and 24-pack; and the 24 oz 6-pack.
The Dasani brand includes flavored water beverages, which are sweetened with sucralose (sold under the brand name "Splenda"). The flavors are lemon, grape, raspberry, and strawberry. The flavored variety comes in 20 fluid ounce bottles, 500 mL 6-packs, and 12 oz 8-packs.
The Dasani Plus product line is similar to the flavored variety but has added vitamins and comes in Pomegranate-Blackberry, Orange-Tangerine, and Kiwi-Strawberry flavors. Dasani Plus comes in a 20 oz bottle.
# In various regions
## United States
Coca-Cola uses tap water from local municipal water supplies, filters it using the process of reverse osmosis, and adds trace amounts of minerals, including magnesium sulfate (Epsom salt), potassium chloride (a sodium-free substitute for table salt), and common salt.
## Canada
Dasani was launched in Canada in 2000, a year after launching in the United States. The product was made available in Quebec shortly afterwards, in April 2001. Prior to Dasani launching in Quebec, Evian water bottles were sold on Coca-Cola refrigerator shelves.
There are five common Dasani bottle sizes sold in Canada: 300 mL, 500 mL, 591 mL, 1 L, and 1.5 L. Bottles are sold individually and in packs of 6, 12, and 24.
The source of Dasani water in Canada is Brampton, Ontario. Dasani has <35 ppm of total dissolved mineral salts.
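For scale, a rough worked example (assuming the usual approximation that 1 ppm by mass ≈ 1 mg/L for dilute aqueous solutions, an assumption not stated in the article): a 591 mL bottle at that ceiling would contain on the order of

$$0.591\ \mathrm{L} \times 35\ \mathrm{mg/L} \approx 21\ \mathrm{mg}$$

of total dissolved mineral salts.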
In early 2005, two flavored versions of Dasani were introduced to the public: Dasani With Lemon and Dasani With Raspberry. The two beverages are sweetened with sucralose. Dasani with Strawberry has since been introduced to the public.
## United Kingdom
Dasani was launched in the UK in January 2004. There were problems from the get-go. Early adverts referred to Dasani as "bottled spunk" or featured the tagline "can't live without spunk." In the UK, "spunk" is a euphemism for semen.
In March 2004, it became public through an article in The Independent newspaper that the tap water of Sidcup was being treated, bottled and sold under the Dasani brand name in the UK. Although Coca-Cola never implied that the water was being sourced from a spring or other natural source, they marketed it as being especially "pure". Hence, the public revelation of it being simply treated tap water caused a media stir.
The media made mocking parallels with a popular episode of the well-known BBC sitcom Only Fools and Horses in which the protagonist Del Boy attempts to pass off tap water as spring water. This scheme fails when the local reservoir becomes polluted (also because of Del) causing the bottled water to glow yellow. The episode is believed to have contributed to the severe negative reaction to Dasani by the press and public. Clips from this episode were shown in news reports and other programmes relating to the Dasani flop.
Two weeks later, UK authorities found bromate in the product at a concentration that could be considered harmful if consumed in large quantities, making Dasani potentially carcinogenic. Coca-Cola recalled half a million bottles and pulled the "Dasani" brand from the UK market on March 19, 2004. Shortly afterwards, plans to introduce the brand in Continental Europe were canceled as well. Ironically, bromate was not present in the tap water before Coca-Cola's treatment process; it was produced from the tap water's harmless bromide during that process.
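The article does not name the treatment step responsible; in bottled-water production, bromate most commonly forms when bromide in the source water is oxidized during ozone disinfection, so a plausible (assumed, not sourced from this article) overall reaction is

$$\mathrm{Br^-} + 3\,\mathrm{O_3} \longrightarrow \mathrm{BrO_3^-} + 3\,\mathrm{O_2}$$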
The withdrawal of the product and the resulting PR disaster has been likened to the New Coke fiasco.
Dasani did, however, reach Ireland, where it was sold widely in major shops in the east of the country.
## South America
Dasani was introduced to the Brazilian market in mid-2003 under the name Aquarius. It was introduced to the Chilean market in 2005, with releases in regular, lemon and tangerine flavors, and was released in Colombia in late 2005 with the three regular flavors. In 2005, Dasani was introduced in the Argentinian market in peach, lemon, citrus and regular varieties. It was also released as a functional water in Mexico, named Ciel Dasani, in four flavors: lemon-cucumber, papaya-carrot, grapefruit and mandarin-green tea, but it was discontinued in 2006. It was also released in Paraguay, Uruguay and Perú.
# References and footnotes
- ↑ Ashlee Vance (19 March 2004). "Coke's spunky water pulled from UK market". The Register.
- ↑ Lester Haines (11 March 2004). "Introducing Dasani – the water with added, er, protein". The Register.
- ↑ "Coke recalls controversial water". BBC. March 19, 2004. | Dasani
| https://www.wikidoc.org/index.php/Dasani |
08e8742e035b76c5d555bf7ce1dd16b497871c31 | wikidoc | Datura | Datura
Datura is a genus of 12-15 species of vespertine flowering plants belonging to the family Solanaceae. Their exact natural distribution is uncertain, due to extensive cultivation and naturalization throughout the temperate and tropical regions of the globe, but is most likely restricted to the Americas, from the United States south through Mexico (where the highest species diversity occurs) to the mid-latitudes of South America. Some species are reported by some authorities to be native to China, but this is not accepted by the Flora of China, where the three species present are treated as introductions from the Americas. (It also grows naturally throughout India and most of Australia). According to the old ayurvedic medicinal system (at least since 2000 BC) in India, this plant has versatile uses in medicinal preparations.
Datura is a woody-stalked, leafy herb growing up to 2 meters tall. It produces spiny seed pods and large white or purple trumpet-shaped flowers that face upward. Most parts of the plant contain atropine, scopolamine, and hyoscyamine. It has a long history of use in both South America and Europe and is known for causing delirious states and poisonings in uninformed users.
Common names include jimson weed, Hell's Bells, Devil's weed, Devil's cucumber, thorn-apple (from the spiny fruit), pricklyburr (similarly), and, somewhat paradoxically, both angel's trumpet and devil's trumpet (from their large trumpet-shaped flowers); Nathaniel Hawthorne refers to it in The Scarlet Letter as apple-peru. The word Datura comes from Hindi dhatūrā (thorn apple); record of this name dates back only to 1662 (OED). The Hindi word in turn derives from Sanskrit Vedic literature dating to long before 2000 BC.
They are large, vigorous annual plants or short-lived perennial plants, growing to 1-3 m tall. The leaves are alternate, 10-20 cm long and 5-18 cm broad, with a lobed or toothed margin. The flowers are erect or spreading (not pendulous), trumpet-shaped, 5-20 cm long and 4-12 cm broad at the mouth; color varies from white to yellow, pink, and pale purple. The fruit is a spiny capsule 4-10 cm long and 2-6 cm broad, splitting open when ripe to release the numerous seeds.
Datura species are used as food plants by the larvae of some Lepidoptera species including Hypercompe indecisa.
# Species
- Datura bernhardii
- Datura ceratocaula
- Datura discolor - Desert Thorn-apple
- Datura ferox
- Datura inoxia or Datura innoxia - Angel's Trumpet
- Datura kymatocarpa
- Datura lanosa
- Datura leichhardtii (syn. D. pruinosa) - Leichhardt's Datura
- Datura metel
- Datura quercifolia - Oak-leaf Thorn-apple
- Datura reburra
- Datura suaveolens - Known in Costa Rica as "Reina de la noche" (Night's Queen)
- Datura stramonium (syn. D. inermis) - Jimsonweed, Thorn-apple
- Datura wrightii - Sacred datura, Sacred Thorn-apple
Some species formerly included in Datura are now classified in the separate genus Brugmansia; this genus differs in being woody, making shrubs or small trees, and in having pendulous flowers. Other related genera include Hyoscyamus and Atropa.
It is also used by sadhus as a prayer flower for Lord Shiva.
# Cultivation and uses
Datura contains the alkaloids scopolamine and atropine and has long been used as a poison and hallucinogen. The dose-response curve for the combination of alkaloids is very steep, so people who consume datura can easily take a potentially fatal overdose, hence its use as a poison. In the 1990s and 2000s, the United States media contained stories of adolescents and young adults dying or becoming seriously ill from intentionally ingesting datura.
## Records of use
Datura stramonium is also called jimsonweed. This name comes from the town of Jamestown, Virginia. Various versions of the story exist, but in the most common version, British soldiers sent to quell Bacon's Rebellion of 1676 were accidentally served this unfamiliar plant as food, causing many to be incapacitated for 11 days. Datura wrightii, also called sacred datura or western jimsonweed, has similar effects.
The effects of Datura can include a complete inability to differentiate reality from fantasy, blindness that lasts for days, and very bad "trips." Many experience accounts, generally quite negative, can be found at www.erowid.org, along with numerous reports of datura-related deaths and critical illnesses.
# Cultural references
## In literature
- Martin Cruz Smith's novel "Nightwing" gives an excellent, if fictional, account of datura usage and the Hopi folklore surrounding it.
- Jean M. Auel described use of datura in her Earth's Children series: In The Clan of the Cave Bear, the clan share a retrocognitive vision under influence of datura. In The Plains of Passage Ayla uses datura as an analgesic and sedative.
- In Paul Theroux's 2005 novel Blinding Light, a writer becomes addicted to a rare species of datura. Under its influence he is blind, but inspired, transcendently aware, and megalomaniacal.
- Datura is the plant given to pacify the mentally handicapped brother in William Faulkner's The Sound and the Fury.
- Datura is explained in Wade Davis's The Serpent and the Rainbow to be a critically important hallucinogen in a series of toxins and cultural practices that produce zombies, administered at the time of retrieval from the grave as an antidote to previously administered tetrodotoxin.
- The use of datura as a poison is mentioned in the novel The Eiger Sanction by Trevanian.
- Datura is a key entheogen in The Teachings of Don Juan: A Yaqui Way of Knowledge by Carlos Castaneda
- In the novel The Sundial by Maarten 't Hart, datura is used twice as a poison.
- Cape Cod by Thoreau contains a quote from Beverly's History of Virginia describing the effects of datura usage.
- Datura also appears in the autobiographical novel "Jesus Weed" by Gerald Taylor.
- In Hunter S. Thompson's Fear and Loathing in Las Vegas, Dr. Gonzo refers to a time he got sick from eating a large quantity of Jimson weed (in the section "A Terrible Experience with Extremely Dangerous Drugs").
- Datura as a psychoactive substance is featured in Leena Krohn's novel that has the Finnish name Datura tai harha jonka jokainen näkee; the novel has been translated at least to German, under the name Stechapfel.
- A discarded datura root grows into a tree over the abandoned boiler in Chapter 8 of John Steinbeck's "Cannery Row".
- Datura is the name of the evil woman who kidnaps Odd's friend in the book "Forever Odd" by Dean Koontz. He also refers to the actual tree in the same book, hence the relation between the two.
## In music
- Singer/songwriter Tori Amos penned a trance song entitled "Datura" for her 1999 album "To Venus and Back". The song features Amos reading a list of various plants that are growing in her garden over hypnotic piano and rhythms. She consistently mentions datura within the list, as if to indicate it is overgrowing and destroying her garden.
- Emcee MF Doom has a song of looped beats entitled "Datura Stramonium" from Volume 0 of his "Special Herbs' series.
- In the opera Lakmé by Léo Delibes, Lakmé dies after eating datura leaves.
- Datura is also the name of an Italian techno/trance group formed in 1991 in Bologna by the musicians Ciro Pagano and Stefano Mazzavillani and the DJs Ricci & Cirillo. One of their biggest hit singles, Yerba Del Diablo ("Devil's weed"), also references the plant.
- The band Murder By Death mentions datura in their song "Killbot 2000" from their album "Who Will Survive and What Will be Left of Them."
- The psychedelic rock band Bardo Pond named a song "Datura" on their album "Set and Setting". Many other Bardo Pond album and song titles have been derived from the names of esoteric psychedelic substances.
- The guitarist Buckethead named a song "Datura" on his album "Electric Tears".
- An Icelandic hard rock/stoner band takes its name from this plant (spelling it in Hindi as "Dhaturah"), claiming that the plant has influenced its songwriting. In the song "The Devil is a Nice Guy", the singer/actor/keyboardist Kjartan describes his experience when he was strung out on Devil's weed and spent two days in the Icelandic Kárahnjúkar writing songs and chatting with the devil.
- The Australian psychedelic rock band Grey Daturas takes its name from the plant.
- The band Dane and the Death Machine's album Thanatron has a track entitled "Datura".
- Argentine band Babasonicos mentions datura in their song named Esther Narcotica.
## In film
- In the movie XXX, the darts used to knock out Xander (Vin Diesel), which he later uses to fake the killing of an undercover policeman, are referred to as 'Datura knockout darts' by their creator.
- A horror film by director Johnny Terris entitled 'Inside Inoxia' is based upon his personal experiences with Datura.
- Datura is one of the ingredients in 'zombie powder' in the movie Serpent and the Rainbow.
## In games
- In The X-Files: Resist or Serve, Datura stramonium is used by Agent Dana Scully to, ironically, create a dart that "kills" zombies instantly.
- In Might and Magic VII: Day of the Destroyer, Datura is an ingredient that is used for creating potions.
- In Tsukihime, Kohaku has a garden of Datura flowers that are used to create sedatives and hallucinogens.
# Notes and references
- ↑ "Suspected Moonflower Intoxication (Ohio, 2002)" (HTML). CDC. Retrieved September 30. Unknown parameter |accessyear= ignored (|access-date= suggested) (help); Check date values in: |accessdate= (help).mw-parser-output cite.citation{font-style:inherit}.mw-parser-output q{quotes:"\"""\"""'""'"}.mw-parser-output code.cs1-code{color:inherit;background:inherit;border:inherit;padding:inherit}.mw-parser-output .cs1-lock-free a{background:url("")no-repeat;background-position:right .1em center}.mw-parser-output .cs1-lock-limited a,.mw-parser-output .cs1-lock-registration a{background:url("")no-repeat;background-position:right .1em center}.mw-parser-output .cs1-lock-subscription a{background:url("")no-repeat;background-position:right .1em center}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration{color:#555}.mw-parser-output .cs1-subscription span,.mw-parser-output .cs1-registration span{border-bottom:1px dotted;cursor:help}.mw-parser-output .cs1-hidden-error{display:none;font-size:100%}.mw-parser-output .cs1-visible-error{display:none;font-size:100%}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration,.mw-parser-output .cs1-format{font-size:95%}.mw-parser-output .cs1-kern-left,.mw-parser-output .cs1-kern-wl-left{padding-left:0.2em}.mw-parser-output .cs1-kern-right,.mw-parser-output .cs1-kern-wl-right{padding-right:0.2em}
- ↑ Attitude (UK) - November 1999 | Datura
| https://www.wikidoc.org/index.php/Datura |
0652c71f250e325bc6a444b6a94df4a540a27ac9 | wikidoc | Lipase | Lipase
# Overview
A lipase is a water-soluble enzyme that catalyzes the hydrolysis of ester bonds in water-insoluble lipid substrates. Lipases thus comprise a subclass of the esterases.
Lipases are ubiquitous throughout living organisms, and genes encoding lipases are even present in certain viruses.
# Function
Most lipases act at a specific position on the glycerol backbone of a lipid substrate (A1, A2 or A3).
In the example of human pancreatic lipase (HPL), which is the main enzyme responsible for breaking down fats in the human digestive system, a lipase acts to convert triglyceride substrates found in oils from food to monoglycerides and free fatty acids.
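As a simplified sketch of that conversion (the sn-1/sn-3 positional preference implied by the 2-monoacylglycerol product is textbook behavior of pancreatic lipase, given here for illustration rather than taken from this article):

$$\text{triacylglycerol} + 2\,\mathrm{H_2O} \xrightarrow{\ \text{lipase}\ } \text{2-monoacylglycerol} + 2\ \text{free fatty acids}$$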
Myriad other lipase activities exist in nature, especially when the phospholipases and sphingomyelinases are considered.
# Structure
While a diverse array of genetically distinct lipase enzymes are found in nature, and represent several types of protein folds and catalytic mechanisms, most are built on an alpha/beta hydrolase fold and employ a chymotrypsin-like hydrolysis mechanism involving a serine nucleophile, an acid residue (usually aspartic acid), and a histidine.
# Location of action
Some lipases work within the interior spaces of living cells to degrade lipids.
- In the example of lysosomal lipase, the enzyme is confined within an organelle called the lysosome.
- Other lipase enzymes, such as pancreatic lipases, are found in the spaces outside of cells and have roles in the metabolism, absorption and transport of lipids throughout the body.
As biological membranes are integral to living cells and are largely composed of phospholipids, lipases play important roles in cell biology.
Furthermore, lipases are involved in diverse biological processes ranging from routine metabolism of dietary triglycerides to cell signaling and inflammation.
# Lipases of Humans
The main lipases in the human digestive system are human pancreatic lipase (HPL) and pancreatic lipase related protein 2 (PLRP2), which are secreted by the pancreas. Humans also have several other related enzymes, including hepatic lipase (HL), endothelial lipase, and lipoprotein lipase. Not all of these lipases function in the gut.
Other lipases include LIPH, LIPI, LIPJ, LIPK, LIPM, and LIPN.
There also are a diverse array of phospholipases, but these are not always classified with the other lipases.
# Industrial Uses
Lipases from fungi and bacteria serve important roles in human practices as ancient as yogurt and cheese fermentation. However, lipases are also being exploited as cheap and versatile catalysts to degrade lipids in more modern applications. For instance, a biotechnology company has brought recombinant lipase enzymes to market for use in applications such as baking, laundry detergents and even as biocatalysts in alternative energy strategies to convert vegetable oil into fuel.
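The fuel application mentioned above is commonly a lipase-catalyzed transesterification of triglycerides with methanol to yield biodiesel (fatty acid methyl esters); the scheme below is an illustrative assumption, since the article does not specify the chemistry:

$$\text{triacylglycerol} + 3\,\mathrm{CH_3OH} \xrightarrow{\ \text{lipase}\ } \text{glycerol} + 3\ \text{fatty acid methyl esters}$$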
## Differential Diagnosis of Abnormalities in Lipase
## Increased Lipase
- Zinc acetate
- Pancreatitis
# Additional images
- General formula of a carboxylate ester
- Glycerol
- General structure of a triglyceride | Lipase
| https://www.wikidoc.org/index.php/Ddx:Lipase |
da0088a32dc11eab13cc728b79ff080e1e2d20ce | wikidoc | Pallor | Pallor
# Overview
Pallor is a pale color of the skin or mucous membranes caused by a reduced amount of oxyhemoglobin; it can result from illness, emotional shock or stress, avoidance of sun exposure, anemia, or genetics. It is more evident on the face and palms. It can develop suddenly or gradually, depending on the cause.
Pallor is not usually clinically significant unless it is accompanied by a general pallor (pale lips, tongue, palms, mouth and other regions with mucous membranes). It is distinguished from similar symptoms such as hypopigmentation (loss of skin pigment).
Pale skin is also a very light skin tone most commonly associated with people of European descent, particularly people of Celtic and Scandinavian descent. In addition, people who avoid excessive sun exposure and thus avoid unhealthy sun tanning also tend to have paler complexions in comparison to their peers, particularly during summer.
# Physical examination findings of pallor and paleness
- White fingers
- White Nails
- White patches or blotches
- Hypopigmentation
# Causes
## Causes in Alphabetical Order
- Abdominal aneurysm
- Abdominal cancer
- Acanthocytosis
- Achrestic anemia
- Acne-sol powder
- Acquired aplastic anemia
- Acquired idiopathic sideroblastic anemia
- Acquired prothrombin complex deficiency
- Acquired pure red cell aplasia
- Acromegaly
- Actinic cheilitis
- Acute arterial occlusion
- Acute biphenotypic leukemia
- Acute blood disorder
- Acute cholinergic dysautonomia
- Acute erythroleukemia
- Acute leukemia
- Acute megacaryoblastic leukemia
- Acute meningitis
- Acute myeloblastic leukemia
- Acute myelocytic leukemia
- Acute myelofibrosis
- Acute myeloid leukemia
- Acute non lymphoblastic leukemia
- Acute panmyelosis
- Acute tubulointerstitial nephritis
- Acute interstitial nephritis
- Adenosine triphosphatase deficiency
- Adrenal adenoma
- Adrenal cancer
- Adrenal gland hyperfunction
- Adrenal hemorrhage
- Adrenal hypertension
- Adrenal incidentaloma
- Adrenal medulla neoplasm
- Albinism
- Albinism deafness syndrome
- Alcohol-induced hypertension
- Aldicarb
- All-down syndrome
- Allergic tension-fatigue syndrome
- Alpha thalassemia
- Alpine syndrome
- Amphetamine abuse
- Anaphylaxis
- Anemia
- Anisocytosis
- Aortic arches defect
- Aortic dilatation
- Aplastic anemia
- Apnea
- Arrhythmia
- Asrar-facharzt-haque syndrome
- Athlete's foot
- Autoimmune hemolytic anemia
- Autoimmune thyroid disease
- Azotemia
- Back tumor
- Bacterial meningitis
- Banti's syndrome
- Basilar migraine
- Benacine
- Benign paroxysmal torticollis of infancy
- Beractant
- Beta thalassemia
- Bitolterol
- Black locust poisoning
- Bland-garland-white syndrome
- Blue diaper syndrome
- Bone marrow failure syndromes
- Bromophos
- Brompheniramine
- Calcium deficiency
- Carbaryl
- Cardiac tamponade
- Catovit
- Celiac disease
- Cephalosporin-induced immune hemolytic anemia
- Cerebelloparenchymal autosomal recessive disorder 3
- Chediak-higashi syndrome
- Chloramphenicol-induced sideroblastic anemia
- Chlorate salts
- Chlorfenvinphos
- Chlorpyrifos
- Chronic anemia
- Chronic arterial occlusive disease
- Chronic kidney disease
- Chronic leukemia
- Chronic myelogenous leukemia
- Chronic myeloid leukemia
- Chronic myelomonocytic leukemia
- Chronic orthostatic hypotension
- Cidofovir
- Ciliary dyskinesia-bronchiectasis
- Clotting disorder
- Cobalamin malabsorption
- Cocaine-induced hypertension
- Cold autoimmune hemolytic anemia
- Collagenous celiac disease
- Compartment syndrome
- Complete atrioventricular canal
- Congenital aplastic anemia
- Congenital arteriovenous shunt
- Congenital deficiency of intrinsic factor
- Congenital herpes simplex
- Congenital hypothyroidism
- Congenital spherocytic anemia
- Congenital vitamin b12 malabsorption
- Congenital spherocytic hemolytic anemia
- Conn syndrome-induced hypertension
- Cooley syndrome
- Copper deficiency-induced sideroblastic anemia
- Corticosteroid-induced hypertension
- Coumaphos
- Cowden's syndrome
- Cushing's syndrome-induced hypertension
- Cyclic vomiting syndrome
- Cycloserine-induced sideroblastic anemia
- Cyclosporine-induced hypertension
- Dapsone
- Decreased mean cell haemoglobin
- Decreased mean cell volume
- Demeton-s-methyl
- Deponit
- Di guglielmo
- Diabetic hypoglycemia
- Diamond-blackfan syndrome
- Diazinon
- Dichlorvos
- Dicrotophos
- Diencephalic syndrome
- Dimetane
- Dimorphic anemia
- Dioxathion
- Disseminated intravascular coagulation
- Disulfoton
- Double outlet right ventricle
- Down syndrome
- Doxepine-induced immune hemolytic anemia
- D-plus hemolytic uremic syndrome
- Drug allergies
- Drug-induced hypertension
- Drug-induced immune hemolytic anemia
- Drug-induced sideroblastic anemia
- Dumping syndrome
- Dyskeratosis congenita
- Earthball poisoning
- Ecstasy
- Ectopic pregnancy
- Edema
- Ehlers danlos syndrome
- Epiglotitis
- Erythroblastopenia
- Erythrocyte enzyme defects
- Erythropoietin-induced hypertension
- Estren-dameshek syndrome
- Ethanol-induced sideroblastic anemia
- Ethion
- Evan's syndrome
- Fainting
- Familial hypopituitarism
- Familial hypothyroidism
- Familial myelofibrosis
- Familial selective vitamin b12 malabsorption
- Familial wilms' tumor
- Fanconi syndrome
- Fanconi's anemia
- Favism
- Fear
- Felty syndrome
- Fensulfothion
- Fenthion
- Flotch syndrome
- Flumadine
- Folate-deficiency anemia
- Folic acid deficiency anemia
- Foradil aerolizer
- Formoterol
- Friedel heid grosshans syndrome
- Frostbite
- Functioning pancreatic endocrine tumor
- Fungizone intravenous
- Gastrointestinal bleeding
- Glaze
- Glucagonoma syndrome
- Glucose-6-phosphate dehydrogenase deficiency
- Golden chain tree poisoning
- Goodpasture's syndrome
- Gorlin-bushkell-jensen syndrome
- Graeck-imerslund disease
- Grand mal epilepsy
- Grasbeck-imerslund disease
- Grass spider poisoning
- Grief
- Hashimoto's thyroiditis
- Heart attack
- Heat exhaustion
- Heat stroke
- Hemangioma thrombocytopenia syndrome
- Hemolytic anemia
- Hemolytic-uremic syndrome
- Hemophagocytic lymphohistiocytosis
- Hemophagocytic reticulosis
- Hemorrhage
- Hereditary spherocytosis
- Hip cancer
- Hodgkin's disease
- Hookworm
- Hyperadrenalism
- Hyperaldosteronism-induced hypertension
- Hyperchromic anemia
- Hypertension
- Hypoglycemia
- Hypopituitarism
- Hypotension
- Hypothermia
- Hypothyroidism
- Hypovolemia
- Idiopathic pulmonary hemosiderosis
- Illness
- Imerslund-najman-grasbeck syndrome
- Incontinentia pigmenti
- Infective endocarditis
- Inherited hemolytic-uremic syndrome
- Inherited spherocytic anemia
- Intestinal bleeding
- Iron deficiency anemia
- Iron poisoning
- Isoniazid-induced sideroblastic anemia
- Jervell and lange-nielsen syndrome
- Juvenile megaloblastic anemia
- Juvenile rheumatoid arthritis
- Kasabach merritt syndrome
- Kentucky coffee tea poisoning
- Kotzot-richter syndrome
- Lack of sun exposure
- Lead-containing paint
- Lederer's anemia
- Leprosy
- Leukemia
- Leukonychia totalis
- Lichen sclerosis
- Licorice-induced hypertension
- Liver tumors
- Loeys-dietz syndrome
- Lymphoblastic lymphoma
- Lymphomatous thyroiditis
- Macrocytic hyperchromic anemia
- Malabsorption
- Malaria
- Malathion
- Mallory-weiss syndrome
- Marchiafava-micheli disease
- Marine turtle poisoning
- Mazindol
- Megaloblastic anemia
- Megalocytic-normochromic anemia
- Mende syndrome
- Menorrhagia
- Methidathion
- Methiocarb
- Methomyl
- Methyldopa-induced immune hemolytic anemia
- Microcytic anemia
- Microcytic hyperchromic anemia
- Microcytic hypochromic anemia
- Minitran
- Mixed connective tissue disease
- Monoamine oxidase inhibitors
- Motion sickness
- Multiple endocrine neoplasia
- Multiple myeloma
- Mycetoma
- Myelodysplastic disease
- Myelofibrosis
- Myelogenous leukemia
- Myeloproliferative disease
- Myelpathic anemia
- Myxoedema
- Neuroblastoma
- Neurofibromatosis
- Nitrates
- Nitrek
- Nitro tab
- Nitro-bid
- Nitrocine
- Nitro-derm
- Nitrodisc
- Nitro-dur
- Nitrogard
- Nitroglycerin
- Nitroglyn
- Nitrol
- Nitrolingual
- Nitrong
- Nitroquick
- Nitrostat
- Nitro-time
- Non-hereditary spherocytic anemia
- Non-hodgkin's lymphoma
- Normal genetic variation
- Normochromic anemia
- Normocytic anemia
- Nutritional anemia
- Obesity
- Oculocutaneous albinism
- Oral thrush
- Orotic aciduria hereditary
- Oroticaciduria
- Orotidylic decarboxylase deficiency
- Orthostatic hypotension
- Orthostatic intolerance
- Osteogenesis imperfecta
- Pachyonychia congenita recessive
- Pancreatic carcinoma
- Pancytopenia
- Panhypopituitarism
- Parathion
- Patent ductus arteriosus
- Pearson's anemia
- Penicillin-induced immune hemolytic anemia
- Peripheral vascular disease
- Pernicious anemia
- Persistent patency of the arterial duct
- Phenylketonuria
- Pheochromocytoma
- Phosdrin
- Physical exertion
- Plasmacytoma anaplastic
- Platelet function disorders
- Plummer-vinson syndrome
- Poems syndrome
- Poikilocytic anemia
- Post streptococcal glomerulonephritis
- Posthemorrhagic anemia
- Pramipexole
- Primary autoimmune hemolytic anemia
- Profenofos
- Prolintane
- Pseudoxanthoma elasticum
- Pulmonary edema
- Pure red cell aplasia
- Pyruvate kinase deficiency
- Quinidine-induced immune hemolytic anemia
- Raynaud's disease
- Reflex sympathetic dystrophy syndrome
- Refractory celiac disease
- Reiter’s syndrome
- Renal hypertension
- Renovascular hypertension
- Resistant hypertension
- Reticuloendotheliosis
- Retinopathy
- Rheumatic fever
- Rib tumor
- Riedel syndrome
- Rimantadine
- Rosai-dorfman disease
- Sandifer syndrome
- Sanorex
- Sarrouy disease
- Scurvy
- Sea sickness
- Secondary autoimmune hemolytic anemia
- Secondary hypertension
- Selective vitamin b12 malabsorption with proteinuria
- Severe asthma
- Severe gastroenteritis
- Shaken baby syndrome
- Sheehan syndrome
- Shock
- Sickle cell anemia
- Sideroblastic anemia
- Sideropenic anemia
- Sleep deprivation
- Small intestine cancer
- Sneddon syndrome
- Solitary extramedullary plasmacytoma
- Spherocytic anemia
- Spherocytosis
- Spinal shock
- Spleen cancer
- Spur-cell anemia
- Stomach upset
- Stress
- Suffocation
- Sulfasalazine
- Sulphonamide
- Survanta
- Temporal arteritis
- Temporal lobe epilepsy
- Terbufos
- Tetraethyl pyrophosphate
- Thalassemia
- Thrombocytopenia
- Thrombotic thrombocytopenic purpura
- Thrush
- Thyroid agenesis
- Thyroid hormone plasma membrane transport defect
- Tornalate
- Toxin-induced sideroblastic anemia
- Transderm-nitro
- Transient bullous dermolysis of the newborn
- Transient erythroblastopenia
- Traumatic spreading depression syndrome
- Trichorhinophalangeal syndrome type i
- Triploid syndrome
- Tropical sprue
- Tuberculosis
- Turner syndrome
- Type 1 diabetes
- Uremia
- Vertigo
- Vestibulocochlear dysfunction
- Vitamin b12 deficiency
- Vitiligo
- Volume depletion
- Waardenburg syndrome
- Waldenstrom macroglobulinemia
- Warm autoimmune hemolytic anemia
- Waterhouse-friderichsen syndrome
- Weinstein kliman scully syndrome
- Williams syndrome
- Wilms tumor
- Wiskott-aldrich syndrome
- Wolff-parkinson-white syndrome
- Wt limb blood syndrome
- Xerocytosis
- X-linked alpha thalassemia mental retardation syndrome
- X-linked sideroblastic anemia | Pallor
| https://www.wikidoc.org/index.php/Ddx:Pallor |
c87dea8057ea96b414354f19890ef7cc04bcb366 | wikidoc | Pyuria | Pyuria
To view a comprehensive algorithm of common findings of urine composition and urine output, click here
# Overview
Pyuria is a condition in which urine contains 10 or more white cells/mm³. Gram stain and leukocyte esterase might be positive. Pyuria might be a sign of a bacterial or non-bacterial urinary tract infection, genitourinary abnormalities, inflammatory disorders, or systemic disease. Pyuria may be classified into sterile pyuria or pyuria with bacteriuria. Pyuria itself does not require treatment; however, the underlying disease must be treated.
# Definition
Pyuria is a condition in which urine contains pus. The definition of pyuria is as follows:
- Presence of 10 or more white cells/mm³ in a urine specimen
- Positive result on Gram’s stain of an unspun urine specimen
- Positive leukocyte esterase on urinary dipstick test
Pyuria might be a sign of a bacterial or non-bacterial urinary tract infection.
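The criteria above lend themselves to a simple programmatic check. The sketch below is illustrative only — the function and parameter names are hypothetical, and the thresholds are simply those listed in the definition:

```python
def meets_pyuria_criteria(wbc_per_mm3, leukocyte_esterase_positive=False, gram_stain_positive=False):
    """Return True if a urinalysis meets any of the pyuria criteria listed above.

    wbc_per_mm3: white cell count in the urine specimen (cells/mm^3)
    leukocyte_esterase_positive: urinary dipstick leukocyte esterase result
    gram_stain_positive: Gram stain of an unspun urine specimen
    """
    return (
        wbc_per_mm3 >= 10            # 10 or more white cells/mm^3
        or leukocyte_esterase_positive
        or gram_stain_positive
    )

# Example: 25 WBC/mm^3 with a negative dipstick and Gram stain still qualifies.
print(meets_pyuria_criteria(25))  # True
```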
# Classification
Pyuria may be classified based on the presence of detectable infection as shown below:
# Pyuria Differential Diagnosis
Differentiating the diseases that can cause pyuria:
To review differential diagnosis of sterile pyuria, click here.
# Treatment
- Sterile pyuria
- Pathogen-directed antimicrobial therapy
- Renal Tuberculosis
- Preferred regimen: (Isoniazid 300 mg PO qd for 2 months AND Rifampicin 450-600 mg qd for 2 months AND Ethambutol 15-25 mg/kg PO qd for 2 months AND Pyrazinamide 1500 mg for 2 months) THEN (Isoniazid 300 mg PO qd for 4-6 months AND Rifampicin 450-600 mg qd for 4-6 months)
- Gonorrhea
- Preferred regimen: Ceftriaxone 250 mg IM in a single dose THEN (Azithromycin 1 g PO in a single dose OR Doxycycline 100 mg PO bid for 7 days)
- Chlamydia
- Preferred regimen: Azithromycin 1 g PO in single dose OR Doxycycline 100 mg PO bid for 7 days
- Alternative regimen: Erythromycin base 500 mg PO qid for 7 days
- Mycoplasma and Ureaplasma
- Preferred regimen: Azithromycin OR Levofloxacin OR Moxifloxacin
- Genital herpes
- Preferred regimen: Acyclovir 400 mg PO tid for 7–10 days or Acyclovir 200 mg PO five times a day for 7–10 days OR Famciclovir 250 mg PO tid for 7–10 days OR Valacyclovir 1 g PO bid for 7 days
- Trichomoniasis
- Preferred regimen: Metronidazole 2 g PO in a single dose OR Tinidazole 2 g PO in a single dose
- Fungal infections
- Preferred regimen, Candida albicans: Fluconazole 100 mg PO qd for 2-5 days
- Preferred regimen, non-albicans Candida: Amphotericin B 0.1 mg/kg/day IV for 2-5 days OR Amphotericin B bladder irrigation 5-50 mg/L of sterile water qd for 2-5 days
- Schistosomiasis
- Preferred regimen: Praziquantel 20 mg/kg PO bid for 1–2 days | Pyuria
| https://www.wikidoc.org/index.php/Ddx:Pyuria |
9dc4b3b0850cd24149e7ac60987b9bd4c242a475 | wikidoc | Sodium | Sodium
# Overview
Sodium is a chemical element with the symbol Na (Latin: natrium), atomic number 11, atomic mass 22.9898 g/mol, and common oxidation number +1. Sodium is a soft, silvery white, highly reactive element and is a member of the alkali metals within "group 1" (formerly known as ‘group IA’). It has only one stable isotope, 23Na. Sodium was first isolated by Sir Humphry Davy in 1807 by passing an electric current through molten sodium hydroxide. Sodium quickly oxidizes in air and is violently reactive with water, so it must be stored in an inert medium, such as kerosene. Sodium is present in great quantities in the earth's oceans as sodium chloride. It is also a component of many minerals, and it is an essential element for animal life. As such, it is classified as a “dietary inorganic macro-mineral.”
# Notable characteristics
Compared with the other alkali metals, sodium is generally less reactive than potassium and more reactive than lithium, in accordance with "periodic law": this ordering shows up, for example, in their reactions with water and with chlorine gas, and in the reactivity of their nitrates, chlorates, and perchlorates. The density of elements generally increases with increasing atomic number, but potassium is less dense than sodium.
Owing to its high reactivity, sodium is found in nature only as a compound and never as the free element. Sodium reacts exothermically with water: small pea-sized pieces will bounce across the surface of the water until they are consumed by it, whereas large pieces will explode. While sodium reacts with water at room temperature, the sodium piece melts with the heat of the reaction to form a sphere, if the reacting sodium piece is large enough. The reaction with water produces very caustic sodium hydroxide and highly flammable hydrogen gas. These are extreme hazards (see Precautions section below). When burned in air, sodium forms sodium peroxide Na2O2, or with limited oxygen, the oxide Na2O (unlike lithium, the nitride is not formed). If burned in oxygen under pressure, sodium superoxide NaO2 will be produced.
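For clarity, the reactions described above correspond to the following standard balanced equations (added here for reference; they are not spelled out in the original text):
- Reaction with water: 2 Na + 2 H2O → 2 NaOH + H2
- Burning in air: 2 Na + O2 → Na2O2 (sodium peroxide)
- With limited oxygen: 4 Na + O2 → 2 Na2O (sodium oxide)
- In oxygen under pressure: Na + O2 → NaO2 (sodium superoxide)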
When sodium or its compounds are introduced into a flame, they impart a bright yellow color.
Sodium ions are necessary for regulation of blood and body fluids, transmission of nerve impulses, heart activity, and certain metabolic functions. Interestingly, sodium is needed by animals, which maintain high concentrations in their blood and extracellular fluids, but the ion is not needed by plants. A completely plant-based diet, therefore, will be very low in sodium. This requires some herbivores to obtain their sodium from salt licks and other mineral sources. The animal need for sodium is probably the reason for the highly-conserved ability to taste the sodium ion as "salty." Receptors for the pure salty taste respond best to sodium, and otherwise only to a few other small monovalent cations (Li+, NH4+, and to some extent also K+). Calcium chloride also tastes somewhat salty, but also quite bitter.
The most common sodium salt, sodium chloride (table salt), is used for seasoning (for example, the English word "salad" refers to salt) and for warm-climate food preservation, such as pickling and making jerky (the high osmotic content of salt inhibits bacterial and fungal growth). As such, salt has been an important commodity in human activities (the English word salary refers to salarium, the perquisite ("perk") given to Roman soldiers for the purpose of buying salt). The human requirement for sodium in the diet is less than 500 mg per day, which is typically less than a tenth as much as in many diets "seasoned to taste." Most people consume far more sodium than is physiologically needed. For certain people with salt-sensitive blood pressure, this extra intake may have a negative effect on health.
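To put the 500 mg figure in everyday terms, the sodium content of table salt can be estimated from molar masses. The short sketch below is back-of-the-envelope arithmetic added for illustration, not a figure from the original text:

```python
# Mass fraction of sodium in sodium chloride, from approximate molar masses (g/mol).
M_NA = 22.99   # sodium
M_CL = 35.45   # chlorine
na_fraction = M_NA / (M_NA + M_CL)   # ~0.39, i.e. NaCl is roughly 39% sodium by mass

# Salt equivalent of the ~500 mg/day physiological sodium requirement cited above.
sodium_mg = 500
salt_mg = sodium_mg / na_fraction
print(round(na_fraction, 3))   # 0.393
print(round(salt_mg))          # ~1271 mg, i.e. roughly 1.3 g of table salt per day
```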
# Precautions
Extreme care is required in handling elemental/metallic sodium. Sodium is potentially explosive in water (depending on quantity) and is a caustic poison, since it is rapidly converted to sodium hydroxide on contact with moisture. The powdered form may combust spontaneously in air or oxygen. Sodium must be stored either in an inert (oxygen and moisture free) atmosphere (such as nitrogen or argon), or under a liquid hydrocarbon such as mineral oil or kerosene.
The reaction of sodium and water is a familiar one in chemistry labs, and is reasonably safe if amounts of sodium smaller than a pencil eraser are used and the reaction is done behind a plastic shield by people wearing eye protection. However, the sodium-water reaction does not scale up well, and is treacherous when larger amounts of sodium are used. Larger pieces of sodium melt under the heat of the reaction, and the molten ball of metal is buoyed up by hydrogen and may appear to be stably reacting with water, until splashing covers more of the reaction mass, causing thermal runaway and an explosion which scatters molten sodium, lye solution, and sometimes flame. (18.5 g explosion ) This behavior is unpredictable, and among the alkali metals it is usually sodium which invites this surprise phenomenon, because lithium is not reactive enough to do it, and potassium is so reactive that chemistry students are not tempted to try the reaction with larger potassium pieces.
Sodium is much more reactive than magnesium; a reactivity which can be further enhanced due to sodium's much lower melting point. When sodium catches fire in air (as opposed to just the hydrogen gas generated from water by means of its reaction with sodium) it more easily produces temperatures high enough to melt the sodium, exposing more of its surface to the air and spreading the fire.
Few common fire extinguishers work on sodium fires. Water, of course, exacerbates sodium fires, as do water-based foams. CO2 and Halon are often ineffective on sodium fires, which reignite when the extinguisher dissipates. Among the very few materials effective on a sodium fire are Pyromet and Met-L-X. Pyromet is a NaCl/(NH4)2HPO4 mix, with flow/anti-clump agents. It smothers the fire, drains away heat, and melts to form an impermeable crust. This is the standard dry-powder canister fire extinguisher for all classes of fires. Met-L-X is mostly sodium chloride, NaCl, with approximately 5% Saran plastic as a crust-former, and flow/anti-clumping agents. It is most commonly hand-applied, with a scoop. Other extreme fire extinguishing materials include Lith+, a graphite-based dry powder with an organophosphate flame retardant, and Na+, a Na2CO3-based material.
Because of the reaction scale problems discussed above, disposing of large quantities of sodium (more than 10 to 100 grams) must be done through a licensed hazardous materials disposer. Smaller quantities may be broken up and neutralized carefully with ethanol (which has a much slower reaction than water), or even methanol (where the reaction is more rapid than ethanol's but still less than in water), but care should nevertheless be taken, as the caustic products from the ethanol or methanol reaction are just as hazardous to eyes and skin as those from water. After the alcohol reaction appears complete, and all pieces of reaction debris have been broken up or dissolved, a mixture of alcohol and water, then pure water, may then be carefully used for a final cleaning. This should be allowed to stand a few minutes until the reaction products are diluted more thoroughly and flushed down the drain. The purpose of the final water soak and wash of any reaction mass which may contain sodium is to ensure that alcohol does not carry unreacted sodium into the sink trap, where a water reaction may generate hydrogen in the trap space which can then be potentially ignited, causing a confined sink trap explosion.
# Physiology and sodium ions
Sodium ions play a diverse and important role in many physiological processes. Excitable animal cells, for example, rely on the entry of Na+ to cause a depolarization. An example of this is signal transduction in the human central nervous system, which depends on sodium ion motion across the nerve cell membrane, in all nerves.
Some potent neurotoxins, such as batrachotoxin, increase the sodium ion permeability of the cell membranes in nerves and muscles, causing a massive and irreversible depolarization of the membranes, with potentially fatal consequences. However, drugs with smaller effects on sodium ion motion in nerves may have diverse pharmacological effects which range from anti-depressant to anti-seizure actions.
Sodium is the primary cation (positive ion) in extracellular fluids in animals and humans. These fluids, such as blood plasma and extracellular fluids in other tissues, bathe cells and carry out transport functions for nutrients and wastes. Sodium is also the principal cation in seawater, although the concentration there is about 3.8 times what it is normally in extracellular body fluids.
Although the system for maintaining optimal salt and water balance in the body is a complex one, one of the primary ways in which the human body keeps track of loss of body water is that osmoreceptors in the hypothalamus sense a balance of sodium and water concentration in extracellular fluids. Relative loss of body water will cause sodium concentration to rise higher than normal, a condition known as hypernatremia. This ordinarily results in thirst. Conversely, an excess of body water caused by drinking will result in too little sodium in the blood (hyponatremia), a condition which is again sensed by the hypothalamus, causing a decrease in vasopressin hormone secretion from the posterior pituitary, and a consequent loss of water in the urine, which acts to restore blood sodium concentrations to normal.
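Because sodium (with its accompanying anions) accounts for most of the osmotically active solute in extracellular fluid, clinicians commonly estimate serum osmolality from the sodium concentration. The formula below is a standard clinical approximation, included here only as an illustration; it is not taken from this article:

```python
def estimated_serum_osmolality(na_mmol_l, glucose_mg_dl=90.0, bun_mg_dl=14.0):
    """Estimate serum osmolality (mOsm/kg) from sodium, glucose and urea nitrogen.

    The factor of 2 on sodium accounts for the anions that accompany it,
    which is why the sodium concentration dominates the result.
    """
    return 2 * na_mmol_l + glucose_mg_dl / 18.0 + bun_mg_dl / 2.8

print(round(estimated_serum_osmolality(140)))  # ~290 mOsm/kg, a typical value
print(round(estimated_serum_osmolality(155)))  # hypernatremia raises the estimate to ~320
```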
Severely dehydrated persons, such as people rescued from ocean or desert survival situations, usually have very high blood sodium concentrations. These must be very carefully and slowly returned to normal, since too-rapid correction of hypernatremia may result in brain damage from cellular swelling, as water moves suddenly into cells with high osmolar content.
Because the hypothalamus/osmoreceptor system ordinarily works well to cause drinking or urination to restore the body's sodium concentrations to normal, this system can be used in medical treatment to regulate the body's total fluid content, by first controlling the body's sodium content. Thus, when a powerful diuretic drug is given which causes the kidneys to excrete sodium, the effect is accompanied by an excretion of body water (water loss accompanies sodium loss). This happens because the kidney is unable to efficiently retain water while excreting large amounts of sodium. In addition, after sodium excretion, the osmoreceptor system may sense lowered sodium concentration in the blood, and then directs compensatory urinary loss of water, in order to correct the hyponatremia, or (low-blood-sodium) state. | Sodium
| https://www.wikidoc.org/index.php/Ddx:Sodium |
3ad0e2a4027ec65ad7881edc44e9fca5e831bf1e | wikidoc | Striae | Striae
# Overview
Stretch marks, or striae as they are called in dermatology, are a form of scarring on the skin with a silvery white hue. They are caused by tearing of the dermis, and over time they can diminish but not disappear completely. Stretch marks are generally associated with pregnancy and obesity, and can develop during rapid muscle growth from body building. They are the result of rapid stretching of the skin associated with rapid growth (e.g. puberty), weight gain (e.g. pregnancy), or anabolic steroid use. Although the skin is fairly elastic, rapid stretching will leave permanent stretch marks. Stretch marks are also referred to as striae distensae; other medical terms for these markings include striae atrophicae, vergetures, striae cutis distensae, striae gravidarum (when caused by pregnancy), lineae atrophicae, linea albicante, or simply striae.
# Differential diagnosis of causes
## Common causes
The glucocorticoid hormones responsible for the development of stretch marks affect the skin by preventing the fibroblasts from forming the collagen and elastin fibers necessary to keep rapidly growing skin taut. This creates a lack of supportive material as the skin is stretched, and leads to dermal and epidermal tearing. If the epidermis and the dermis have been penetrated, laser treatment will not remove the stretch marks. Drugs such as prednisolone can cause striae.
## Drug Causes
- Betamethasone dipropionate, Betamethasone valerate
# Complete differential diagnosis of causes of Striae in alphabetical order
In alphabetical order.
- ACTH
- Cirrhosis
- Clocortolone pivalate
- Cushing's Syndrome
- Ehlers-Danlos Syndrome
- Estrogens
- Glucocorticoids
- Infection
- Influenza
- Lactation
- Obesity
- Paratyphoid
- Pregnancy
- Progesterone
- Puberty
- Scarlet Fever
- Topical corticosteroid overuse
- Tuberculosis
- Typhoid Fever
- Weight lifting
## Symptoms and signs
They first appear as reddish or purple lines, but tend to gradually fade to a lighter color. The affected areas appear empty and soft to the touch.
Human skin has three different layers: the epidermis (outer layer), the dermis (middle layer), and the subcutaneous stratum (innermost layer). Stretch marks occur in the dermis, the resilient middle layer that helps the skin retain its shape.
No stretch marks will form as long as there is support within the dermis. Stretching plays more of a role in where the marks occur and in what direction they run. Stretching alone is not the cause.
Stretch marks can appear anywhere on the body.
They are most likely to appear in places where larger amounts of fat are stored. The most common places are the abdomen (especially near the belly-button), breasts, upper arms, underarms, thighs (both inner and outer), hips, and buttocks. They pose no health risk in and of themselves, and do not compromise the body's ability to function normally and repair itself.
## Physical Examination
### Abdomen
Image courtesy of Professor Peter Anderson DVM PhD and published with permission © PEIR, University of Alabama at Birmingham, Department of Pathology
- Abdominal striae (vertical)
- Stretch marks near the Navel Image courtesy of Professor Peter Anderson DVM PhD and published with permission © PEIR, University of Alabama at Birmingham, Department of Pathology
# Prevention and cure
Between 75% and 90% of women develop stretch marks to some degree during pregnancy. The sustained hormonal levels that result from pregnancy usually mean that stretch marks may appear during the sixth or seventh month.
Only one randomised controlled study has been published which claimed to test whether oils or creams prevent the development of stretchmarks. This study found a daily application of Gotu Kola extract, vitamin E, and collagen hydrolysates can significantly reduce the likelihood of susceptible women developing stretchmarks during pregnancy.
Though cocoa butter is an effective moisturizer, no research studies have shown its ability to either prevent stretchmarks, or improve their appearance once a stretchmark has already formed.
Various treatments are available for the purpose of improving the appearance of existing stretch marks, including laser treatments, dermabrasion, and prescription retinoids. Used daily for one month, they resulted in significant improvement in the appearance of a stretchmark's length, depth, and irregular surface area. Some cream manufacturers claim the best results are achieved on recent stretch marks; however, few studies exist to support these claims.
A recent study in the journal "Dermatologic Surgery" has shown that radiofrequency combined with 585-nm pulsed dye laser treatment gave "good and very good" subjective improvement in stretch marks in 89.2% of 37 patients, although further studies will be required to follow up on these results. In addition, the use of a pulsed dye laser has shown to increase pigmentation in darker skinned individuals with repeated treatments.
A surgical procedure for removing lower abdominal stretch marks is the tummy tuck, which removes the skin below the navel where stretch marks frequently occur.
A new modality, fractional laser resurfacing, offers a novel approach to treating striae. Using scattered pulses of light, only a fraction of the scar is zapped by the laser over the course of several treatments. This creates microscopic wounds and as such is a "no downtime" procedure. The body responds to each treatment by producing new collagen and epithelium. In a 2007 clinical trial, 5-6 treatments resulted in striae improving by as much as 75 percent. | Striae
| https://www.wikidoc.org/index.php/Ddx:Striae_Distensae |
678b596175494abebcee17846b87ae60fbba2974 | wikidoc | Tetany | Tetany
# Overview
Tetany is a medical sign, the involuntary contraction of muscles, caused by diseases and other conditions that increase the action potential frequency. The muscle cramps caused by the disease tetanus are due to a blocking of the inhibition to the neurons that supply muscles and are not classified as tetany. Tetany has two meanings, though both are related to the muscular system.
- Tetany (action potential summation)
- Tetany (medical sign)
The terms "tetany" and "tetanus" are distinct.
Tetany must be distinguished from the following:
- Muscle twitches
- Cramps
- Carpopedal spasm
# Mechanism
When the membrane potential is upset, for instance by low levels of ions (such as calcium) in the blood (hypocalcaemia), neurons will depolarize too easily. In the case of hypocalcaemia, calcium ions are drawn away from their association with the voltage-gated sodium channels thus sensitising them. The upset to membrane potential is therefore caused by an influx of sodium to the cell, not directly by the hypocalcaemia. As a result, too many action potentials are sent to muscles causing spasm.
# Causes
The usual cause of tetany is lack of calcium, but excess of phosphate (high phosphate-to-calcium ratio) can also trigger the spasms. Milk-and-alkali tetany is an example of this imbalance.
Underfunction of the parathyroid gland can lead to tetany.
Low levels of carbon dioxide cause tetany by altering the albumin binding of calcium such that the ionised (physiologically active) fraction of calcium is reduced; the most common reason for low carbon dioxide levels is hyperventilation.
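On the laboratory side, total serum calcium is usually interpreted together with albumin, since only the unbound (ionised) fraction is physiologically active. The sketch below uses a common bedside approximation that is not taken from this article; note that it corrects only for low albumin and does not capture the pH effect described above, which lowers the ionised fraction without changing total calcium:

```python
def corrected_calcium_mg_dl(total_ca_mg_dl, albumin_g_dl):
    """Albumin-corrected total serum calcium (mg/dL).

    Common rule of thumb: add 0.8 mg/dL for every 1 g/dL that
    albumin falls below 4.0 g/dL.
    """
    return total_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)

# Example: total calcium 8.0 mg/dL with albumin 2.5 g/dL corrects to ~9.2 mg/dL,
# so much of the apparent hypocalcaemia is explained by the low albumin.
print(corrected_calcium_mg_dl(8.0, 2.5))  # 9.2
```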
# Diagnosis
The nineteenth-century clinician Professor Armand Trousseau devised the trick of occluding the brachial artery by squeezing to trigger the cramps in the fingers (Trousseau sign).
# Differential Diagnosis
In alphabetical order.
- Addisonian crisis
- After multiple transfusions
- Alcoholism
- Brain injury
- Burns
- Cirrhosis
- Conn's Syndrome
- Diabetic precoma
- Drugs such as Oxcarbazepine
- Hemolytic crisis
- Hypercalcemia
- Hyperkalemic tetany
- Hyperventilation
- Hypocalcemia
- Hypoparathyroidism
- Intoxication
- Lack of Vitamin D
- Lactation
- Long term diuretic medication
- Malabsorption syndrome
- Neuropathy
- Pancreatitis
- Pregnancy
- Pseudohypoparathyroidism
- Recurring vomiting
- Renal failure
- Toxins
- Uncontrolled intravenous potassium supply
- Zollinger-Ellison Syndrome | Tetany
| https://www.wikidoc.org/index.php/Ddx:Tetany |
Deanol
# Overview
Dimethylaminoethanol, also known as dimethylethanolamine (DMAE and DMEA respectively), is an amino alcohol bearing a tertiary amine and a primary alcohol group. This compound also goes by the names N,N-dimethyl-2-aminoethanol, beta-dimethylaminoethyl alcohol, beta-hydroxyethyldimethylamine and Deanol. It is a transparent, pale-yellow liquid.
# Biochemical significance
Dimethylaminoethanol is related to choline and may be a biochemical precursor to the neurotransmitter acetylcholine, although this conclusion has been disputed on the basis of a 1977 rat experiment. It is commonly believed that dimethylaminoethanol is methylated to produce choline in the brain, but this has been shown not to be the case (in a rat experiment). It is known that dimethylaminoethanol is processed by the liver into choline; however, the choline produced is charged and, as shown in rat experiments, cannot pass the blood–brain barrier. In the brain, DMAE is instead bound to phospholipids in place of choline to produce phosphatidyl-dimethylaminoethanol. This is then incorporated into nerve membranes, increasing fluidity and permeability, and acting as an antioxidant.
# Uses
## Industrial uses
Dimethylaminoethanol is used as a curing agent for polyurethanes and epoxy resins. It is also used in mass quantities for water treatment, and to some extent in the coatings industry. It is used in the synthesis of dyestuffs, textile auxiliaries, pharmaceuticals, emulsifiers, and corrosion inhibitors. It is also an additive to paint removers, boiler water and amino resins. It forms a number of salts with melting points below room temperature (ionic liquids) such as N,N-dimethylethanolammonium acetate and N,N-dimethylethanolammonium octanoate, which have been used as alternatives to conventional solvents.
2-Dimethylaminoethyl chloride hydrochloride is an intermediate made from dimethylaminoethanol that is widely used for the manufacture of pharmaceuticals.
## Biomedical research
Short-term studies have shown an increase in vigilance and alertness with a positive influence on mood following administration of DMAE, vitamins, and minerals in individuals suffering from borderline emotional disturbance. Research for ADHD has been promising, though inconclusive. One study showed dimethylaminoethanol to decrease the average life span of aged quail, while three other studies showed an increase in the lifespan of mice.
The bitartrate salt of DMAE, i.e. 2-dimethylaminoethanol (+)-bitartrate, is sold as a dietary supplement. It is a white powder providing 37% DMAE.
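That 37% figure can be sanity-checked from molecular weights alone. The short calculation below assumes a 1:1 salt of DMAE (about 89.1 g/mol) with tartaric acid (about 150.1 g/mol); the arithmetic is an illustrative estimate, not taken from any label or monograph.

```python
# Rough mass-fraction check for DMAE bitartrate (assumes a 1:1 salt).
mw_dmae = 89.14        # g/mol, 2-dimethylaminoethanol (C4H11NO)
mw_tartaric = 150.09   # g/mol, tartaric acid (C4H6O6)
mw_salt = mw_dmae + mw_tartaric

fraction = mw_dmae / mw_salt
print(f"DMAE content ~ {fraction:.1%}")   # about 37.3%, matching the quoted 37%
```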
Libido
# Overview
Libido in its common usage means sexual desire; however, more technical definitions, such as those found in the work of Carl Jung, are more general, referring to libido as the free creative—or psychic—energy an individual has to put toward personal development or individuation.
Sigmund Freud, the founder of psychoanalysis, popularized the term and defined libido as the instinctual energy or force contained in what Freud called the id, the largely unconscious component of the psyche. Freud pointed out that these libidinal drives can conflict with the conventions of civilized behavior. It is this need to conform to society and control the libido that leads to tension and disturbance in the individual, prompting the use of ego defenses to dissipate the psychic energy of these unmet and mostly unconscious needs into other forms. Excessive use of ego defenses results in neurosis. A primary goal of psychoanalysis is to bring the drives of the id into consciousness, allowing them to be met directly and thus reducing the patient's reliance on ego defenses.
# Historical Perspective
According to the Swiss psychiatrist Carl Gustav Jung, the libido is identified as psychic energy. It is the duality (opposition) within the psyche that creates this energy (or libido), which Jung asserts expresses itself only through symbols: "It is the energy that manifests itself in the life process and is perceived subjectively as striving and desire." (Ellenberger, 697)
Defined more narrowly, libido also refers to an individual's urge to engage in sexual activity. In this sense, the antonym of libido is destrudo.
# Libido Impairment
Sometimes sexual desire can be impaired or reduced. Factors of reduced libido can be both psychological and physical. Loss of libido may or may not correlate with infertility.
## Psychological Factors
Reduction in libido can occur from psychological causes such as loss of privacy and/or intimacy, stress, distraction or depression. It may also derive from the presence of environmental stressors such as prolonged exposure to elevated sound levels or bright light.
Common psychological causes include:
- Depression
- Stress/fatigue
- Childhood sexual abuse/assault/trauma
- Body image issues
- Affair/attraction outside marriage
- Lack of interest/attraction in partner
- Performance anxiety
## Physical Factors
Physical factors that can affect libido include lifestyle, medications and, according to studies, the attractiveness and biological fitness of one's partner.
### Lifestyle
Being very underweight, severely obese, or malnourished can cause a low libido due to disruptions in normal hormonal levels.
### Medications
Reduced libido is also often iatrogenic and can be caused by many medications, such as hormonal contraception, SSRIs and other antidepressants, Oxcarbazepine, Chlordiazepoxide, Atropine and beta blockers. In some cases iatrogenic impotence or other sexual dysfunction can be permanent, as in PSSD.
Testosterone is one of the hormones controlling libido in human beings. Emerging research suggests that hormonal contraception methods like "the pill" (which combine an estrogen with a progestin) can lower libido in females by elevating levels of sex hormone binding globulin (SHBG). SHBG binds to sex hormones, including testosterone, rendering them unavailable. Research also suggests that even after ending a hormonal contraceptive method, SHBG levels may remain elevated, and no reliable data exist to predict when this phenomenon will diminish. Some question whether "the pill" and other hormonal methods (Depo-Provera, Norplant, etc.) have permanently altered gene expression by epigenetic mechanisms. Affected women may seek herbal and hormonal therapies. Left untreated, women with low testosterone levels may experience loss of libido, relationship stress and loss of bone, muscle and tissue mass over time. (Low testosterone may also be behind certain kinds of depression and low energy states.)
Conversely, increased androgen steroids (e.g. testosterone) generally have a positive correlation with libido in both sexes.
### Menstrual Cycle
A study done in Canada suggests that men's libido levels are also sometimes correlated with their partner's monthly cycle. Women's libido is correlated with their menstrual cycle; many women experience heightened sexual desire in the several days immediately before ovulation.
# Related Chapters
- Cathexis
- Conatus
- Death drive
- Destrudo
- Eros
- Lust
- Mortido
- Sexual attraction
- Self preservation
Fascia
Fascia (făsh'ē-ə), pl. fas·ci·ae (făsh'ē-ē), adj. fascial (făsh'ē-əl) (from Latin: a band) is the soft tissue component of the connective tissue system that permeates the human body. It interpenetrates and surrounds muscles, bones, organs, nerves, blood vessels and other structures. Fascia is an uninterrupted, three-dimensional web of tissue that extends from head to toe, from front to back, from interior to exterior. It is responsible for maintaining structural integrity, for providing support and protection, and for acting as a shock absorber. Fascia has an essential role in hemodynamic and biochemical processes, and provides the matrix that allows for intercellular communication. Fascia functions as the body's first line of defense against pathogenic agents and infections. After injury, it is the fascia that creates an environment for tissue repair.
# Three layers of the fascia
- Superficial fascia is found in the subcutis in most regions of the body, blending with the reticular layer of the dermis. It is present on the face, over the upper portion of the sternocleidomastoid, at the nape of the neck, and overlying the sternum. It is composed mainly of loose areolar connective tissue and adipose tissue and is the layer that primarily determines the shape of a body. In addition to its subcutaneous presence, this type of fascia surrounds organs and glands, neurovascular bundles, and is found at many other locations where it fills otherwise unoccupied space. It serves as a storage medium of fat and water; as a passageway for lymph, nerve and blood vessels; and as a protective padding to cushion and insulate.
- Deep fascia is the dense fibrous connective tissue that interpenetrates and surrounds the muscles, bones, nerves and blood vessels of the body. It provides connection and communication in the form of aponeuroses, ligaments, tendons, retinacula, joint capsules, and septa. The deep fasciae envelop all bone (periosteum and endosteum), cartilage (perichondrium), and blood vessels (tunica externa) and become specialized in muscles (epimysium, perimysium, and endomysium) and nerves (epineurium, perineurium, and endoneurium). The high density of collagen fibers is what gives the deep fascia its strength and integrity. The amount of elastin fibers determines how much extensibility and resilience it will have.
- The galea aponeurotica and the temporal fascia
- The fascia of the diaphragm
- The aponeurosis of the external abdominal oblique
- The deep ligaments of the arch of the foot
- Visceral fascia suspends the organs within their cavities and wraps them in layers of connective tissue membranes. Each of the organs is covered in a double layer of fascia; these layers are separated by a thin serous membrane. The layer lining the cavity wall is known as the parietal layer, whereas the layer covering the organ itself is known as the visceral layer. The organs have specialized names for their visceral fasciae. In the brain, they are known as meninges; in the heart they are known as pericardia; in the lungs, they are known as pleura; and in the abdomen, they are known as peritonea.
- The meninges
- The pleurae
- The pericardium and the left cupola of the diaphragm
- The peritoneum and renal fascia
# Fascial dynamics
Fascia is a highly adaptable tissue. Due to its elastic property, superficial fascia can stretch to accommodate the deposition of adipose that accompanies both ordinary and prenatal weight gain. After pregnancy and weight loss, the superficial fascia slowly reverts to its original level of tension.
Visceral fascia is less extensible than superficial fascia. Due to its suspensory role of the organs, it needs to maintain its tone rather consistently. If it is too lax, it contributes to organ prolapse, yet if it is hypertonic, it restricts proper organ motility.
Deep fascia is also less extensible than superficial fascia. It is essentially avascular, but is richly innervated with sensory receptors that report the presence of pain (nociceptors); change in movement (proprioceptors); change in pressure and vibration (mechanoreceptors); change in the chemical milieu (chemoreceptors); and fluctuation in temperature (thermoreceptors). Deep fascia is able to respond to sensory input by contracting; by relaxing; or by adding, reducing, or changing its composition through the process of fascial remodeling.
Deep fascia can contract. What happens during the fight-or-flight response is an example of rapid fascial contraction. In response to a real or imagined threat to the organism, the body responds with a temporary increase in the stiffness of the fascia. Bolstered with tensioned fascia, people are able to perform extraordinary feats of strength and speed under emergency conditions. How fascia contracts is still not well understood, but appears to involve the activity of myofibroblasts. Myofibroblasts are fascial cells that are created as a response to mechanical stress. In a two-step process, fibroblasts differentiate into proto-myofibroblasts that, with continued mechanical stress, become differentiated myofibroblasts. Fibroblasts cannot contract, but myofibroblasts are able to contract in a smooth muscle-like manner.
The deep fascia can also relax. By monitoring changes in muscular tension, joint position, rate of movement, pressure, and vibration, mechanoreceptors in the deep fascia are capable of initiating relaxation. Deep fascia can relax rapidly in response to sudden muscular overload or rapid movements. Golgi tendon organs operate as a feedback mechanism by causing myofascial relaxation before muscle force becomes so great that tendons might be torn. Pacinian corpuscles sense changes in pressure and vibration to monitor the rate of acceleration of movement. They will initiate a sudden relaxation response if movement happens too fast. Deep fascia can also relax slowly as some mechanoreceptors are designed to report changes over a longer period of time. Unlike the Golgi tendon organs, Golgi receptors report joint position independent of muscle contraction. This helps the body to know where the bones are at any given moment. Ruffini endings respond to regular stretching and to slow sustained pressure. In addition to initiating fascial relaxation, they contribute to full-body relaxation by inhibiting sympathetic activity, which slows down heart rate and respiration.
When contraction persists, fascia will respond with the addition of new material. Fibroblasts secrete collagen and other proteins into the extracellular matrix where they bind to existing proteins, making the composition thicker and less extensible. Although this potentiates the tensile strength of the fascia, it can unfortunately restrict the very structures it aims to protect. The pathologies resulting from fascial restrictions range from a mild decrease in joint range of motion to severe fascial binding of muscles, nerves and blood vessels, as in compartment syndrome of the leg. However, if fascial contraction can be interrupted long enough, a reverse form of fascial remodeling occurs. The fascia will normalize its composition and tone and the extra material that was generated by prolonged contraction will be ingested by macrophages within the extracellular matrix.
Like mechanoreceptors, chemoreceptors in deep fascia also have the ability to promote fascial relaxation. We tend to think of relaxation as a good thing; however, fascia needs to maintain some degree of tension. This is especially true of ligaments. To maintain joint integrity, they need to provide adequate tension between bony surfaces. If a ligament is too lax, injury becomes more likely. Certain chemicals, including hormones, can influence the composition of the ligaments. An example of this is seen in the menstrual cycle, where hormones are secreted to create changes in the uterine and pelvic floor fascia. The hormones are not site-specific, however, and chemoreceptors in other ligaments of the body can be receptive to them as well. The ligaments of the knee may be one of the areas where this happens, as a significant association between the ovulatory phase of the menstrual cycle and an increased likelihood for an anterior cruciate ligament injury has been demonstrated.
It has been suggested that manipulation of the fascia by acupuncture needles is responsible for the physical sensation of qi flowing along meridians in the body.
# Fascial pathology
- Adhesions
- Adhesive capsulitis
- Benign joint hypermobility syndrome
- Calcific tendinitis
- Cardiac tamponade
- Carpal Tunnel Syndrome
- Cellulitis
- Compartment syndrome
- Constrictive pericarditis
- Dermatomyositis
- Dupuytren's contracture
- Ehlers-Danlos syndrome
- Eosinophilic fasciitis
- Fibromyalgia
- Hemopneumothorax
- Hemothorax
- Hernia
- Marfan's syndrome
- Meningitis
- Mixed connective tissue disease
- Myofascial pain syndrome
- Necrotizing fasciitis
- Pericardial effusion
- Pericarditis
- Peritonitis
- Plantar fasciitis
- Pleural effusion
- Pleurisy
- Pneumoperitoneum
- Pneumothorax
- Polyarteritis nodosa
- Rheumatoid arthritis
- Scars
- Scleroderma
- Scoliosis
- Sprain
- Systemic lupus erythematosus
- Tendinitis
- Wegener's granulomatosis
# Classification by region
Reflex
A reflex action is an automatic (involuntary) neuromuscular action elicited by a defined stimulus. In most contexts, especially involving humans, a reflex action is mediated via the reflex arc (although this is not always true in other animals, or in more casual usage of the term 'reflex'.)
# Mechanism
A reflex action or reflex is a biological control system linking stimulus to response and mediated by a reflex arc. Reflexes can be built-in or learnt. For example, a person stepping on a sharp object would initiate the reflex action through the creation of a nociceptive stimulus within specialized sense receptors located in the skin tissue of the foot. The resulting stimulus would be transmitted through an afferent nerve to the spinal cord. This stimulus is usually processed by an interneuron to create an immediate response to nociception by initiating a motor response to withdraw from the pain-producing object. This retraction would occur as the sensation is arriving in the brain and producing the subjective perception of pain, which would result in a more cognitive evaluation of the situation.
Reflexes are tested as part of a neurological examination to assess damage to or functioning of the central and peripheral nervous system.
Reflexes may be trained, such as during repetition of motor actions during sport practice, or the linking of stimuli with autonomic reactions during classical conditioning.
# Reaction time
For a reflex, reaction time or latency is the time from the onset of a stimulus until the organism responds.
In humans, reaction time to visual stimuli is typically 150 to 300 milliseconds.
# Human reflexes
Reflex actions include:
## Tendon reflexes and stretch reflexes
The deep tendon reflexes provide information on the integrity of the central and peripheral nervous system. Generally, decreased reflexes indicate a peripheral problem, and lively or exaggerated reflexes a central one.
- Biceps stretch reflex (C5, C6)
- Brachioradialis reflex (C5, C6)
- Triceps stretch reflex (C7, C8)
- Patellar reflex or knee-jerk reflex (L3, L4)
- Achilles reflex (S1, S2)
- Plantar reflex or Babinski reflex (L5, S1, S2)
While the reflexes above are stimulated mechanically, the term H-reflex refers to the analogous reflex stimulated electrically, and Tonic vibration reflex for those stimulated by vibration.
## Reflexes involving cranial nerves
## Reflexes in infants only
Newborn babies have a number of other reflexes which are not seen in adults, referred to as primitive reflexes. These include:
- Asymmetrical tonic neck reflex (ATNR)
- Grasp reflex
- Hand-to-mouth reflex
- Moro reflex, also known as the startle reflex
- Sucking
- Symmetrical tonic neck reflex (STNR)
- Tonic labyrinthine reflex (TLR)
## Other reflexes
Other reflexes found in the human nervous system include:
- Anocutaneous reflex
- Crossed extensor reflex
- Escape reflex
- Jaw jerk reflex
- Mammalian diving reflex
- Oculocardiac reflex
- Optokinetic reflex
- Photic sneeze reflex
- Scratch reflex
- Withdrawal reflex
Processes such as breathing, digestion, and the maintenance of the heartbeat can also be regarded as reflex actions, according to some definitions of the term.
Defect
In geometry, the defect (or deficit) of a vertex of a polyhedron is the amount by which the sum of the angles of the faces at the vertex falls short of a full circle. If the sum of the angles exceeds a full circle, as occurs in some vertices of most (not all) non-convex polyhedra, then the defect is negative. If a polyhedron is convex, then the defects of all of its vertices are positive.
The concept of defect extends to higher dimensions as the amount by which the sum of the dihedral angles of the cells at a peak falls short of a full circle.
(According to the Oxford English Dictionary, one of the senses of the word "defect" is "The quantity or amount by which anything falls short; in Math. a part by which a figure or quantity is wanting or deficient.")
# Examples
The defect of any of the vertices of a regular dodecahedron (in which three regular pentagons meet at each vertex) is 36°, or π/5 radians, or 1/10 of a circle. Each of the angles is 108°; three of these meet at each vertex, so the defect is 360° − (108° + 108° + 108°) = 36°.
The same procedure can be followed for the other Platonic solids.
# Descartes' theorem
Descartes' theorem on the "total defect" of a polyhedron states that if the polyhedron is homeomorphic to a sphere (i.e. topologically equivalent to a sphere, so that it may be deformed into a sphere by stretching without tearing), the "total defect", i.e. the sum of the defects of all of the vertices, is two full circles (or 720° or 4π radians). The polyhedron need not be convex.
A generalization says the number of circles in the total defect equals the Euler characteristic of the polyhedron. This is a special case of the Gauss–Bonnet theorem which relates the integral of the Gaussian curvature to the Euler characteristic. Here the Gaussian curvature is concentrated at the vertices: on the faces and edges the Gaussian curvature is zero and the Gaussian curvature at a vertex is equal to the defect there.
This can be used to calculate the number V of vertices of a polyhedron by totaling the angles of all the faces, and adding the total defect. This total will have one complete circle for every vertex in the polyhedron. Care has to be taken to use the correct Euler characteristic for the polyhedron.
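To make the vertex-counting recipe concrete, the following short Python sketch computes the defect at a vertex of each Platonic solid from the faces meeting there, and then recovers the vertex count V as 720° divided by the per-vertex defect (the face data used are the standard ones for these solids):

```python
# Vertex defects of the Platonic solids, plus a check of Descartes' theorem:
# for a polyhedron homeomorphic to a sphere the total defect is 720 degrees.

def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees."""
    return (n - 2) * 180.0 / n

# name: (sides of the faces meeting at one vertex, number of vertices V)
platonic = {
    "tetrahedron":  ([3, 3, 3],        4),
    "cube":         ([4, 4, 4],        8),
    "octahedron":   ([3, 3, 3, 3],     6),
    "dodecahedron": ([5, 5, 5],       20),
    "icosahedron":  ([3, 3, 3, 3, 3], 12),
}

for name, (faces, V) in platonic.items():
    defect = 360.0 - sum(interior_angle(n) for n in faces)
    print(f"{name:13s} defect = {defect:5.1f} deg, "
          f"total = {defect * V:5.1f} deg, "
          f"V from 720/defect = {720.0 / defect:.0f}")
```

Because all vertices of a Platonic solid are identical, the total defect is simply V times the per-vertex defect, and every row prints a total of 720°.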
# A potential error
It is tempting to think (and has even been stated in geometry textbooks) that every non-convex polyhedron has some vertices whose defect is negative. Here is a counterexample. Consider a cube where one face is replaced by a square pyramid: this elongated square pyramid is convex and the defects at each vertex are each positive. Now consider the same cube where the square pyramid goes into the cube: this is non-convex, but the defects remain the same and so are all positive.
KvLQT1
Kv7.1 (KvLQT1) is a potassium channel protein whose primary subunit in humans is encoded by the KCNQ1 gene. Kv7.1 is a voltage-gated potassium channel present in the cell membranes of cardiac tissue and in inner ear neurons among other tissues. In the cardiac cells, Kv7.1 mediates the IKs (or slow delayed rectifying K+) current that contributes to the repolarization of the cell, terminating the cardiac action potential and thereby the heart's contraction.
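As a rough quantitative illustration of why an outward potassium current repolarizes the cell, the sketch below first computes the potassium Nernst (reversal) potential for typical intra- and extracellular concentrations, then evaluates a generic Hodgkin-Huxley-style slow delayed-rectifier current of the form I_Ks = g_Ks * x_s^2 * (V_m - E_K). The conductance, gating value and gating exponent are illustrative placeholders, not parameters of KvLQT1/KCNE1 or of any specific published cardiac model.

```python
import math

# Potassium reversal (Nernst) potential at body temperature.
R, T, F, z = 8.314, 310.0, 96485.0, 1        # J/(mol*K), K, C/mol, valence
K_out, K_in = 5.4, 140.0                     # mM, typical extra/intracellular K+
E_K = (R * T) / (z * F) * math.log(K_out / K_in) * 1000.0   # mV
print(f"E_K ~ {E_K:.0f} mV")                 # about -87 mV

# Generic HH-style slow delayed rectifier: positive (outward) whenever V_m > E_K,
# so it pulls the membrane back down toward E_K during repolarization.
g_Ks = 0.5     # illustrative conductance, arbitrary units
x_s = 0.6      # illustrative activation-gate value reached late in the plateau
for V_m in (20.0, 0.0, -40.0, -80.0):        # mV, points along repolarization
    I_Ks = g_Ks * x_s**2 * (V_m - E_K)
    print(f"V_m = {V_m:6.1f} mV  ->  I_Ks = {I_Ks:6.1f} (arbitrary units)")
```

With the usual cardiac sign convention (outward current positive), the current stays positive as long as the membrane potential sits above E_K, which is what drives the membrane back toward rest at the end of the action potential.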
# Structure
Each KvLQT1 (KCNQ1) subunit is made of six membrane-spanning domains, S1-S6, two intracellular domains, and a pore loop. Four such subunits assemble to form the functional ion channel.
# Function
This gene encodes a protein for a voltage-gated potassium channel required for the repolarization phase of the cardiac action potential. The gene product can form heteromultimers with two other potassium channel proteins, KCNE1 and KCNE3. The gene is located in a region of chromosome 11 that contains a large number of contiguous genes that are abnormally imprinted in cancer and the Beckwith-Wiedemann syndrome. Two alternative transcripts encoding distinct isoforms have been described.
# Clinical significance
Mutations in the gene can lead to a defective protein and several forms of inherited arrhythmia, such as Long QT syndrome, which involves a prolongation of the QT interval of heart repolarization, Short QT syndrome, and Familial Atrial Fibrillation. KvLQT1 is also expressed in the pancreas, and KvLQT1 Long QT syndrome patients have been shown to have hyperinsulinemic hypoglycaemia following an oral glucose load. Currents arising from Kv7.1 in over-expression systems have never been recapitulated in native tissues; in native tissues, Kv7.1 is always found with a modulatory subunit. In cardiac tissue, these subunits comprise KCNE1 and yotiao. Though physiologically irrelevant, homotetrameric Kv7.1 channels also display a unique form of C-type inactivation that reaches equilibrium quickly, allowing KvLQT1 currents to plateau. This is different from the inactivation seen in A-type currents, which causes rapid current decay.
# Ligands
- ML277: potent and selective channel activator
# Interactions
KvLQT1 has been shown to interact with PRKACA, PPP1CA and AKAP9.
KvLQT1 can also associate with any of the five members of the KCNE family of proteins, but interactions with KCNE1, KCNE2, KCNE3 are the only interactions within this protein family that affect the human heart. KCNE2, KCNE4, and KCNE5 have been shown to have an inhibitory effect on the functionality of KvLQT1, while KCNE1 and KCNE3 are activators of KvLQT1. KvLQT1 can associate with KCNE1 and KCNE4 with the activation effects of KCNE1 overriding the inhibitory effects of KCNE4 on the KvLQT1 channel, and KvLQT1 will commonly associate with anywhere from two to four different KCNE proteins in order to be functional. However, KvLQT1 most commonly associates with KCNE1 and forms the KvLQT1/KCNE1 complex since it has only been seen to function in vivo when associated with another protein. KCNQ1 will form a heteromer with KCNE1 in order to slow its activation and enhance the current density at the plasma membrane of the neuron. In addition to associating with KCNE proteins, the N-terminal juxtamembranous domain of KvLQT1 can also associate with SGK1, which stimulates the slow delayed potassium rectifier current. Since SGK1 requires structural integrity to stimulate KvLQT1/KCNE1, any mutations present in the KvLQT1 protein can result in reduced stimulation of this channel by SGK1. General mutations in KvLQT1 have been known to cause a decrease in this slow delayed potassium rectifier current, longer cardiac action potentials, and a tendency to have tachyarrhythmias.
# KvLQT1/KCNE1
KCNE1 (minK), can assemble with KvLQT1 to form a slow delayed potassium rectifier channel. KCNE1 slows the inactivation of KvLQT1 when the two proteins form a heteromeric complex, and the current amplitude is greatly increased compared to WT-KvLQT1 homotetrameric channels. KCNE1 associates with the pore region of KvLQT1, and its transmembrane domain contributes to the selectivity filter of this heteromeric channel complex. The alpha helix of the KCNE1 protein interacts with the pore domain S5/S6 and with the S4 domain of the KvLQT1 channel. This results in structural modifications of the voltage sensor and the selectivity filter of the KvLQT1 channel. Mutations in either the alpha subunit of this complex, KvLQT1 or the beta subunit, KCNE1, can lead to Long QT Syndrome or other cardiac rhythmic deformities. When associated with KCNE1, the KvLQT1 channel activates much more slowly and at a more positive membrane potential. It is believed that two KCNE1 proteins interact with a tetrameric KvLQT1 channel, since experimental data suggests that there are 4 alpha subunits and 2 beta subunits in this complex.
KvLQT1/KCNE1 channels are taken up from the plasma membrane through a RAB5-dependent mechanism, but inserted into the membrane by RAB11, a GTPase.
Kv7.1 (KvLQT1) is a potassium channel protein whose primary subunit in humans is encoded by the KCNQ1 gene.[1] Kv7.1 is a voltage-gated potassium channel present in the cell membranes of cardiac tissue and in inner ear neurons among other tissues. In the cardiac cells, Kv7.1 mediates the IKs (or slow delayed rectifying K+) current that contributes to the repolarization of the cell, terminating the cardiac action potential and thereby the heart's contraction.
# Structure
KvLQT1 is made of six membrane-spanning domains S1-S6, two intracellular domains, and a pore loop.[2] The KvLQT1 channel is made of four KCNQ1 subunits, which form the actual ion channel.
# Function
This gene encodes a protein for a voltage-gated potassium channel required for the repolarization phase of the cardiac action potential. The gene product can form heteromultimers with two other potassium channel proteins, KCNE1 and KCNE3. The gene is located in a region of chromosome 11 that contains a large number of contiguous genes that are abnormally imprinted in cancer and the Beckwith-Wiedemann syndrome. Two alternative transcripts encoding distinct isoforms have been described.[3]
# Clinical significance
Mutations in the gene can lead to a defective protein and several forms of inherited arrhythmias as Long QT syndrome[4] which is a prolongation of the QT interval of heart repolarization, Short QT syndrome,[4] and Familial Atrial Fibrillation. KvLQT1 are also expressed in the pancreas, and KvLQT1 Long QT syndrome patients has been shown to have hyperinsulinemic hypoglycaemia following an oral glucose load.[5] Currents arising from Kv7.1 in over-expression systems have never been recapitulated in native tissues - Kv7.1 is always found in native tissues with a modulatory subunit. In cardiac tissue, these subunits comprise KCNE1 and yotiao. Though physiologically irrelevant, homotetrameric Kv7.1 channels also display a unique form of C-type inactivation that reaches equilibrium quickly, allowing KvLQT1 currents to plateau. This is different from the inactivation seen in A-type currents, which causes rapid current decay.
# Ligands
- ML277: potent and selective channel activator[6]
# Interactions
KvLQT1 has been shown to interact with PRKACA,[7] PPP1CA[7] and AKAP9.[7]
KvLQT1 can also associate with any of the five members of the KCNE family of proteins, but interactions with KCNE1, KCNE2, KCNE3 are the only interactions within this protein family that affect the human heart. KCNE2, KCNE4, and KCNE5 have been shown to have an inhibitory effect on the functionality of KvLQT1, while KCNE1 and KCNE3 are activators of KvLQT1.[2] KvLQT1 can associate with KCNE1 and KCNE4 with the activation effects of KCNE1 overriding the inhibitory effects of KCNE4 on the KvLQT1 channel, and KvLQT1 will commonly associate with anywhere from two to four different KCNE proteins in order to be functional.[2] However, KvLQT1 most commonly associates with KCNE1 and forms the KvLQT1/KCNE1 complex since it has only been seen to function in vivo when associated with another protein.[2] KCNQ1 will form a heteromer with KCNE1 in order to slow its activation and enhance the current density at the plasma membrane of the neuron.[2][8] In addition to associating with KCNE proteins, the N-terminal juxtamembranous domain of KvLQT1 can also associate with SGK1, which stimulates the slow delayed potassium rectifier current. Since SGK1 requires structural integrity to stimulate KvLQT1/KCNE1, any mutations present in the KvLQT1 protein can result in reduced stimulation of this channel by SGK1.[9] General mutations in KvLQT1 have been known to cause a decrease in this slow delayed potassium rectifier current, longer cardiac action potentials, and a tendency to have tachyarrhythmias.[8]
# KvLQT1/KCNE1
KCNE1 (minK), can assemble with KvLQT1 to form a slow delayed potassium rectifier channel. KCNE1 slows the inactivation of KvLQT1 when the two proteins form a heteromeric complex, and the current amplitude is greatly increased compared to WT-KvLQT1 homotetrameric channels. KCNE1 associates with the pore region of KvLQT1, and its transmembrane domain contributes to the selectivity filter of this heteromeric channel complex.[8] The alpha helix of the KCNE1 protein interacts with the pore domain S5/S6 and with the S4 domain of the KvLQT1 channel. This results in structural modifications of the voltage sensor and the selectivity filter of the KvLQT1 channel.[10] Mutations in either the alpha subunit of this complex, KvLQT1 or the beta subunit, KCNE1, can lead to Long QT Syndrome or other cardiac rhythmic deformities.[9] When associated with KCNE1, the KvLQT1 channel activates much more slowly and at a more positive membrane potential. It is believed that two KCNE1 proteins interact with a tetrameric KvLQT1 channel, since experimental data suggests that there are 4 alpha subunits and 2 beta subunits in this complex.[10]
KvLQT1/KCNE1 channels are taken up from the plasma membrane through a RAB5-dependent mechanism, but inserted into the membrane by RAB11, a GTPase.[11]
Delsym
Delsym is an American brand of over-the-counter cough medicine. It is different from most brands of cough medicine as the active ingredient is "time released". The time release allows for the drug to suppress coughing for a longer period of time without taking more.
The active ingredient per teaspoon (5 ml) is Dextromethorphan polistirex, equivalent to Dextromethorphan 30mg.
# Method of Action
The active ingredient, Dextromethorphan, is surrounded by an edible plastic called polistirex. When the Delsym arrives in the stomach, an amount of Dextromethorphan is directly released into the blood stream while the rest is surrounded by a plastic that is slowly dissolved by stomach acid. After the polistirex is dissolved sufficiently, more dextromethorphan is released.
# Controversy
Intentional misuse by deliberately taking more than the recommended dose of Delsym can lead to feelings of euphoria, hysteria, and possible short-term insanity, all of which have been recorded to last upwards of twelve hours. The active ingredient of Delsym (Dextromethorphan) is a dangerous chemical compound that should always be used with caution and under the supervision of an adult. The day after an intentional overdose can bring depression and effects opposite to those exhibited during misuse, such as vomiting, inexplicable feelings of grief, and possibly suicidal thoughts. This has led many stores to require a valid state-issued identification card as proof of adulthood in order to purchase Delsym.
# Accounts of Misuse
Recorded accounts of intentional misuse should be taken seriously and are presented purely to encourage prevention.
March 2007
A Southwestern Michigan resident, Travis Mathis, has recounted to a few local residents his personal experience of misusing cough syrup. Travis has sometimes turned to Dextromethorphan to suppress his coughs, which were a result of his challenged immune system and low white blood cell count. At one point, during a case of mononucleosis, Travis had consumed a considerable amount of Delsym and began to exhibit behavior that was obviously that of an altered state of consciousness. He began to make telephone calls which he later denied ever making, and after 26 hours of consciousness he at last fainted, causing a stroke which has paralyzed him from the neck down.
Myelin
# Overview
Myelin is an electrically insulating phospholipid layer that surrounds the axons of many neurons. It is an outgrowth of glial cells: Schwann cells supply the myelin for peripheral neurons while oligodendrocytes supply it to those of the central nervous system. Myelin is considered a defining characteristic of the (gnathostome) vertebrates, but it has also arisen by parallel evolution in some invertebrates. Myelin was discovered in 1878 by Louis-Antoine Ranvier.
# Composition of myelin
Myelin made by different cell types varies in chemical composition and configuration, but performs the same insulating function. Myelinated axons are white in appearance, hence the "white matter" of the brain.
Myelin is composed of about 80% lipid (fat) and about 20% protein. Some of the proteins that make up myelin are Myelin basic protein (MBP), Myelin oligodendrocyte glycoprotein (MOG) and Proteolipid protein (PLP). Myelin is made up primarily of a glycolipid called galactocerebroside. The intertwining of the hydrocarbon chains of sphingomyelin serves to strengthen the myelin sheath.
# Function of myelin layer
The main consequence of a myelin layer (or sheath) is an increase in the speed at which impulses propagate along the myelinated fiber. Along unmyelinated fibers, impulses move continuously as waves, but, in myelinated fibers, they hop or "propagate by saltation". Myelin increases resistance across the cell membrane by a factor of 5,000 and decreases capacitance by a factor of 50. Myelination also helps prevent the electrical current from leaving the axon. When a peripheral fiber is severed, the myelin sheath provides a track along which regrowth can occur. Unmyelinated fibers and myelinated axons of the mammalian central nervous system do not regenerate.
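To give a rough sense of why those two factors speed up conduction, the short sketch below applies standard passive cable theory to the figures quoted above. The 5,000-fold and 50-fold factors come from the text; the assumption that axial resistance is unchanged, and the scaling relations themselves, are illustrative background rather than results from this article.

```python
import math

# Back-of-the-envelope cable-theory arithmetic using the factors quoted above:
# myelin raises membrane resistance ~5,000-fold and lowers capacitance ~50-fold.
resistance_factor = 5000.0          # relative increase in membrane resistance
capacitance_factor = 1.0 / 50.0     # relative change in membrane capacitance

# The passive length constant (lambda = sqrt(r_m / r_i)) scales with the square
# root of membrane resistance, so a local depolarization spreads much farther
# along a myelinated internode before decaying (assuming r_i is unchanged).
length_constant_gain = math.sqrt(resistance_factor)

# The charge needed to change the membrane potential (Q = C * dV) scales with
# capacitance, so far less charge is spent recharging the membrane per unit length.
charge_per_mv = capacitance_factor

print(f"Length constant increases roughly {length_constant_gain:.0f}-fold")
print(f"Charge needed per mV of depolarization falls to about {charge_per_mv:.0%} of the unmyelinated value")
```

Together, the farther passive spread and the reduced charging cost are what allow the impulse to "hop" between nodes of Ranvier rather than being regenerated continuously along the membrane.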
# Demyelination and Dysmyelination
Demyelination is the act of demyelinating, or the loss of the myelin sheath insulating the nerves, and is the hallmark of some neurodegenerative autoimmune diseases, including multiple sclerosis, transverse myelitis, chronic inflammatory demyelinating polyneuropathy, and Guillain-Barre syndrome. When myelin degrades, conduction of signals along the nerve can be impaired or lost, and the nerve eventually withers.
The immune system may play a role in demyelination associated with such diseases, including inflammation causing demyelination by overproduction of cytokines via upregulation of tumor necrosis factor (TNF) or interferon.
Heavy metal poisoning may also lead to demyelination. Even very small amounts of mercury have been shown to be particularly destructive to nerve sheaths.
Research to repair damaged myelin sheaths is ongoing. Techniques include surgically implanting oligodendrocyte precursor cells in the central nervous system and inducing myelin repair with certain antibodies. While there have been some encouraging results in mice (via stem cell) implant, it is still unknown whether this technique can be effective in humans.
Dysmyelination, on the other hand, is distinct from the lesion-producing process of active demyelination and is characterized by defective structure and function of myelin sheaths. Such defective sheaths often arise from genetic mutations affecting the biosynthesis and formation of myelin. Examples of human diseases where dysmyelination has been implicated include the leukodystrophies (Pelizaeus-Merzbacher disease, Canavan disease) and schizophrenia.
# Symptoms of Demyelination
Demyelination, the destruction or loss of the myelin sheath, typically results in diverse symptoms. The symptoms are determined by the functions normally contributed by the affected neurons.
Damage to the myelin sheath disrupts signals between the brain and other parts of the body producing a range of symptoms. Symptoms are often heterogeneous — dependent on pathophysiology of demyelination — differing from patient to patient, and have different presentations upon clinical observation and in laboratory studies.
- Blurriness in the central visual field that affects only one eye; may be accompanied by pain upon eye movement
- Double vision
- Odd sensation in legs, arms, chest, or face, such as tingling or numbness (neuropathy)
- Weakness of arms or legs
- Cognitive disruption including speech impairment, memory loss
- Heat sensitivity (symptoms worsen, reappear upon exposure to heat such as a hot shower)
- Loss of dexterity
- Difficulty coordinating movement or balance disorder
- Difficulty controlling bowel movements or urination
- Fatigue
# Causes of Demyelination
Drugs such as Nivolumab have been associated with demyelination.
Desmin
Desmin is a protein that in humans is encoded by the DES gene. Desmin is a muscle-specific, type III intermediate filament that integrates the sarcolemma, Z disk, and nuclear membrane in sarcomeres and regulates sarcomere architecture.
# Structure
Desmin is a 53.5 kD protein composed of 470 amino acids. There are three major domains to the desmin protein: a conserved alpha-helical rod, a variable non-alpha-helical head, and a carboxy-terminal tail. Desmin, like all intermediate filaments, shows no polarity when assembled. The rod domain consists of 308 amino acids arranged as parallel alpha-helical coiled-coil dimers, with three linkers interrupting it. The rod domain connects to the head domain. The head domain, 84 amino acids rich in arginine, serine, and aromatic residues, is important in filament assembly and dimer-dimer interactions. The tail domain is responsible for the integration of filaments and interaction with proteins and organelles. Desmin is only expressed in vertebrates; however, homologous proteins are found in many organisms. Desmin is a subunit of intermediate filaments in cardiac muscle, skeletal muscle and smooth muscle tissue. In cardiac muscle, desmin is present in Z-discs and intercalated discs. Desmin has been shown to interact with desmoplakin and αB-crystallin.
# Function
Desmin was first described in 1976, first purified in 1977, the gene was cloned in 1989, and the first knockout mouse was created in 1996. The function of desmin has been deduced through studies in knockout mice. Desmin is one of the earliest protein markers for muscle tissue in embryogenesis as it is detected in the somites. Although it is present early in the development of muscle cells, it is only expressed at low levels, and increases as the cell nears terminal differentiation. A similar protein, vimentin, is present in higher amounts during embryogenesis while desmin is present in higher amounts after differentiation. This suggests that there may be some interaction between the two in determining muscle cell differentiation. However desmin knockout mice develop normally and only experience defects later in life. Since desmin is expressed at a low level during differentiation another protein may be able to compensate for desmin's function early in development but not later on.
In adult desmin-null mice, hearts from 10 wk-old animals showed drastic alterations in muscle architecture, including a misalignment of myofibrils and disorganization and swelling of mitochondria; findings that were more severe in cardiac relative to skeletal muscle. Cardiac tissue also exhibited progressive necrosis and calcification of the myocardium. A separate study examined this in more detail in cardiac tissue and found that murine hearts lacking desmin developed hypertrophic cardiomyopathy and chamber dilation combined with systolic dysfunction. In adult muscle, desmin forms a scaffold around the Z-disk of the sarcomere and connects the Z-disk to the subsarcolemmal cytoskeleton. It links the myofibrils laterally by connecting the Z-disks. Through its connection to the sarcomere, desmin connects the contractile apparatus to the cell nucleus, mitochondria, and post-synaptic areas of motor endplates. These connections maintain the structural and mechanical integrity of the cell during contraction while also helping in force transmission and longitudinal load bearing.
In human heart failure, desmin expression is upregulated, which has been hypothesized to be a defense mechanism in an attempt to maintain normal sarcomere alignment amidst disease pathogenesis. There is some evidence that desmin may also connect the sarcomere to the extracellular matrix (ECM) through desmosomes, which could be important in signalling between the ECM and the sarcomere and thereby in regulating muscle contraction and movement. Finally, desmin may be important in mitochondrial function. When desmin is not functioning properly there is improper mitochondrial distribution, number, morphology and function. Since desmin links the mitochondria to the sarcomere, it may transmit information about contractions and energy need and through this regulate the aerobic respiration rate of the muscle cell.
# Clinical significance
Desmin-related myofibrillar myopathy (DRM or desminopathy) is a subgroup of the myofibrillar myopathy diseases and is the result of a mutation in the gene that codes for desmin which prevents it from forming protein filaments; instead, it forms aggregates of desmin and other proteins throughout the cell. Desmin mutations have been associated with restrictive, dilated and idiopathic cardiomyopathy; recently, mutations were also identified in patients with arrhythmogenic right ventricular cardiomyopathy (ARVC). Some of these DES mutations, such as p.N116S or p.E114del, cause an aggregation of desmin within the cytoplasm.
A mutation p.A120D was discovered in a family where several members had sudden cardiac death.
Desmin has been evaluated for its role in assessing the depth of invasion of urothelial carcinoma in TURBT specimens.
Dettol
Dettol (also called parachlorometaxylenol, or PCMX) is the name of a commercial liquid antiseptic belonging to a product line of household products manufactured by Reckitt Benckiser and marketed in South Asia, Africa & Middle East, Asia Pacific, Europe, Australasia.
The key ingredient which defines its antiseptic property is an aromatic chemical compound known as chloroxylenol (C8H9ClO). This makes up 4.8% of Dettol's total mixture, with the rest composed of pine oil, isopropanol, castor oil soap, caramel, and water. Because several of the ingredients are insoluble in water, Dettol produces a white emulsion of oil droplets when diluted during use. It has a characteristic phenolic odour similar to trichlorophenol and the explosive compound trinitrotoluene (TNT). Apart from its low toxicity and low metal corrosivity, it is also relatively cheap compared to other disinfectants and is effective against gram-positive and gram-negative bacteria, fungi, yeast, mildew and even the "super-bug" MRSA, giving it a broad spectrum of antimicrobial action. It is able to kill 98% of microbes in just 15 seconds, as shown in agar patch studies, by disrupting the bacterial cell's membrane potential, drastically affecting its ability to produce adenosine triphosphate and thus leading to its rapid death.
Dettol can also be used to treat acne in small quantities. The bottle cap also doubles as a container for pouring increments of 10 ml for its various uses. However, like other household cleaners, it is still poisonous and should not be ingested. In an extreme case, a 42-year-old English man died from Dettol overexposure in May 2007. Overuse of Dettol can also cause bacterial resistance, but the risk of infection can be reduced considerably by using it in addition to soap and water.
Heroin
Street Names/Slangs: Aunt Hazel, birdie powder, Black, Black Eagle, Black Pearl, Black Stuff, Black Tar, Boy, Brown, Brown Crystal, Brown Rhine, Brown Sugar Junk, Brown Tape, Chiba or Chiva, China White, dog food, Dope White, Dr. Feelgood, Dragon, H, He, hong-yen, Junk, lemonade, Mexican Brown, Mexican Horse, Mexican Mud, Mud, Number 4, Number 8, old Steve, pangonadalot, Sack, Skag, Skunk Number 3, Smac, Snow, Snowball Scat, Tar, White Boy, White Girl, White Horse, White Lady, White Nurse, White Stuff, witch hazel
# Overview
Heroin (INN: diacetylmorphine, BAN: diamorphine) is a semi-synthetic opioid synthesized from morphine, a derivative of the opium poppy. It is the 3,6-diacetyl ester of morphine (hence diacetylmorphine). The white crystalline form is commonly the hydrochloride salt, diacetylmorphine hydrochloride.
As with other opiates, heroin is used both as a pain-killer and a recreational drug.
One of the most common methods of heroin use is via intravenous injection. When taken orally, heroin undergoes extensive first-pass metabolism via deacetylation, making it a prodrug for the systemic delivery of morphine. When the drug is injected, however, it avoids this first-pass effect, very rapidly crossing the blood-brain barrier due to the presence of the acetyl groups, which render it much more lipid-soluble than morphine itself. Once in the brain, it is deacetylated into 3- and 6-monoacetylmorphine and morphine, which bind to μ-opioid receptors resulting in intense euphoria with the feeling centered in the gut.
Frequent administration has a high potential for causing addiction and may quickly lead to tolerance. If continual, sustained use of heroin for as little as three days is stopped abruptly, withdrawal symptoms can appear. This period is much shorter than that needed for withdrawal effects to develop with other common painkillers such as oxycodone and hydrocodone.
Internationally, heroin is controlled under Schedules I and IV of the Single Convention on Narcotic Drugs. It is illegal to manufacture, possess, or sell heroin in the United States and the UK. However, under the name diamorphine, heroin is a legal prescription drug in the United Kingdom. Popular street names for heroin include black tar, smack, junk, skag, horse, brain, chaw, chiva, and others. These are specific references to heroin and not used to describe any other drug. Dope could be used to refer to heroin, but may also indicate other drugs, from laudanum a century ago to nearly any contemporary recreational drug.
# History
The opium poppy was cultivated in lower Mesopotamia as long ago as 3400 BC. The chemical analysis of opium in the 19th century revealed that most of its activity could be ascribed to two ingredients, codeine and morphine.
Heroin was first processed in 1874 by C.R. Alder Wright, an English chemist working at St. Mary's Hospital Medical School in London, England. He had been experimenting with combining morphine with various acids. He boiled anhydrous morphine alkaloid with acetic anhydride over a stove for several hours and produced a more potent, acetylated form of morphine, now called diacetylmorphine. The compound was sent to F.M. Pierce of Owens College in Manchester, who analyzed it and reported his findings to Wright.
Wright's invention, however, did not lead to any further developments, and heroin only became popular after it was independently re-synthesized 23 years later by another chemist, Felix Hoffmann. Hoffmann, working at the Bayer pharmaceutical company in Elberfeld, Germany, was instructed by his supervisor Heinrich Dreser to acetylate morphine with the objective of producing codeine, a natural derivative of the opium poppy, similar to morphine but less potent and less addictive. But instead of producing codeine, the experiment produced an acetylated form of morphine that was actually 1.5-2 times more potent than morphine itself. Bayer would name the substance "heroin", probably from the word heroisch, German for heroic, because in field studies people using the medicine felt "heroic".
From 1898 through to 1910 heroin was marketed as a non-addictive morphine substitute and cough medicine for children. Bayer marketed heroin as a cure for morphine addiction before it was discovered that heroin is converted to morphine when metabolized in the liver, and as such, "heroin" was basically only a quicker acting form of morphine. The company was somewhat embarrassed by this new finding and it became a historical blunder for Bayer.
As with aspirin, Bayer lost some of its trademark rights to heroin following the German defeat in World War I.
In the United States the Harrison Narcotics Tax Act was passed in 1914 to control the sale and distribution of heroin. The law did allow heroin to be prescribed and sold for medical purposes. In particular, recreational users could often still be legally supplied with heroin and use it. In 1924, the United States Congress passed additional legislation banning the sale, importation or manufacture of heroin in the United States. It is now a Schedule I substance, and is thus illegal in the United States.
# Usage and effects
Heroin is used as a recreational drug for the intense euphoria it induces, which diminishes with increased tolerance. Its popularity with recreational drug users, compared to morphine and other opiates, stems from its perceived different effects; this is unsupported by clinical research.
In controlled studies comparing the physiological and subjective effects of injected heroin and morphine in post-addicts, subjects showed no preference for either drug when administered on a single-injection basis. Equipotent injected doses had comparable action courses, with no difference in their ability to induce euphoria, ambition, nervousness, relaxation, drowsiness, or sleepiness. Data acquired from short-term addiction studies did not indicate that tolerance to heroin develops more rapidly than tolerance to morphine. The findings have been discussed in relation to the physicochemical properties of heroin and morphine and the metabolism of heroin. When compared with other opioids (hydromorphone, fentanyl, oxycodone, and meperidine), post-addicts showed a strong preference for heroin and morphine, suggesting that heroin and morphine particularly lend themselves to abuse and addiction. Morphine and heroin were also much more likely to produce euphoria and other subjective effects when compared with most other opioid analgesics. Heroin can be administered several ways, including snorting and injection, and may be smoked by inhaling its vapors when heated, i.e. "chasing the dragon".
Some users mix heroin with cocaine in a "speedball" or "snowball" that usually is injected intravenously, smoked, or dissolved in water and then snorted, producing a more intense rush than heroin alone, but is more dangerous because the combination of the short-acting stimulant with the longer-acting depressant increases the risk of seizure, or overdose with one or both drugs.
Once in the brain, heroin is rapidly metabolized to morphine by removal of the acetyl groups and is thus a prodrug. Morphine is unable to cross the blood-brain barrier as quickly as heroin, which gives heroin a subjectively stronger 'high'. In either case, a morphine molecule binds with opioid receptors, inducing the subjective, opioid high.
The onset of heroin's effects depends upon the method of administration; orally, heroin is completely metabolized in vivo to morphine before crossing the blood-brain barrier; the effects are the same as with oral morphine. Snorting results in an onset within 3 to 5 minutes; smoking results in an almost immediate, 7 to 11 seconds, milder effect that strengthens; intravenous injection induces a rush and euphoria usually taking effect within 30 seconds; intramuscular and subcutaneous injection take effect within 3 to 5 minutes.
Heroin is metabolized into morphine, a μ-opioid (mu-opioid) agonist. It acts on endogenous μ-opioid receptors that are spread in discrete packets throughout the brain, spinal cord and gut in almost all mammals. Heroin, along with other opioids, acts at the same receptors as four endogenous neurotransmitters: β-endorphin, dynorphin, leu-enkephalin, and met-enkephalin. The body responds to heroin in the brain by reducing (and sometimes stopping) production of the endogenous opioids when heroin is present. Endorphins are regularly released in the brain and nerves, attenuating pain. Their other functions are still obscure, but are probably related to the effects produced by heroin besides analgesia (antitussive, anti-diarrheal). The reduced endorphin production in heroin users creates a dependence on the heroin, and the cessation of heroin results in extremely uncomfortable symptoms including pain (even in the absence of physical trauma). This set of symptoms is called withdrawal syndrome. It has an onset 6 to 8 hours after the last dose of heroin.
The heroin dose used for recreational purposes depends strongly on the level of addiction. A first-time user tyically uses between 5 and 20 mg of heroin, but a typical heavy addict would use between 300 and 500 mg per day.
Large doses of heroin can be fatal. The drug can be used for suicide or as a murder weapon. The serial killer Dr Harold Shipman used it on his victims, as did Dr John Bodkin Adams (see his victim, Edith Alice Morrell). It can sometimes be difficult to determine whether a heroin death was an accident, suicide or murder; the deaths of Sid Vicious, Joseph Krecker, Janis Joplin, Tim Buckley, Jim Morrison, Layne Staley, Kurt Cobain, and Bradley Nowell have all been attributed to heroin overdose.
# Regulation
In the United States, heroin is a schedule I drug according to the Controlled Substances Act of 1970 making it illegal to possess without a DEA license. Possession of more than 100 grams of heroin or a mixture containing heroin is punishable with a minimum mandatory sentence of 5 years of imprisonment in a federal prison.
In Canada, heroin is a controlled substance under Schedule I of the Controlled Drugs and Substances Act (CDSA). Every person who seeks or obtains heroin from a practitioner without disclosing any other such prescription obtained within the preceding 30 days is guilty of an indictable offence and liable to imprisonment for a term not exceeding seven years. Possession for the purpose of trafficking is an indictable offence and carries liability to imprisonment for life.
In Hong Kong, heroin is regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. It can only be used legally by health professionals and for university research purposes. It can be given by pharmacists under a prescription. Anyone who supplies heroin without prescription can be fined $10,000 (HKD). The penalty for trafficking or manufacturing heroin is a $5,000,000 (HKD) fine and life imprisonment. Possession of heroin for consumption without license from the Department of Health is illegal with a $1,000,000 (HKD) fine and/or 7 years of jail time.
In the United Kingdom, heroin is available by prescription, though it is a restricted Class A drug. According to the British National Formulary (BNF) edition 50, diamorphine hydrochloride may be used in the treatment of acute pain, myocardial infarction, acute pulmonary oedema, and chronic pain. The treatment of chronic non-malignant pain must be supervised by a specialist. The BNF notes that all opioid analgesics cause dependence and tolerance but that this is "no deterrent in the control of pain in terminal illness". When used in the palliative care of cancer patients, heroin is often injected using a syringe driver.
# Production and trafficking: The Golden Triangle
## Manufacturing
Heroin is produced for the black market through opium refinement process - first, morphine is isolated from opium. This crude morphine is then acetylated by heating with acetic anhydride. Purification of the obtained crude heroin as a hydrochloride salt provides a water-soluble salt form of white or yellowish powder.
Crude opium is carefully dissolved in hot water but the resulting hot soup is not boiled. Mechanical impurities - twigs - are scooped together with the foam. The mixture is then made alkaline by gradual addition of lime. Lime causes a number of unwelcome components present in opium to precipitate out of the solution. (The impurities include the useless alkaloids, resins, proteins). The precipitate is removed by filtration through a cloth, washed with additional water and discarded. The filtrates containing water-soluble calcium salt of morphine are then acidified by careful addition of ammonium chloride. This causes the morphine to precipitate. The morphine precipitate is collected by filtration and dried before the next step. The crude morphine (which makes only about 10% of the weight of the used opium) is then heated together with acetic anhydride at 85 °C (185 °F) for six hours. The reaction mixture is then cooled, diluted with water, alkalized with sodium carbonate and the precipitated crude heroin is filtered and washed with water. This crude water-insoluble free-base product (which by itself is usable, for smoking) is further purified and decolourised by dissolution in hot alcohol, filtration with activated charcoal and concentration of the filtrates. The concentrated solution is then acidified with hydrochloric acid, diluted with ether and the precipitated white hydrochloride salt of heroin is collected by filtration. This precipitate is the so-called "no. 4 heroin", the standard product exported to the Western markets. (Side-product residues from purification or the crude free base product are also available on the markets, as the "tar heroin" - a cheap substitute of inferior quality.)
The initial stage of opium refining - the isolation of morphine - is relatively easy to perform in rudimentary settings - even by substituting suitable fertilizers for pure chemical reagents. However, the later steps (acetylation, purification, precipitation as hydrochloride) are more involved - they use large quantities of dangerous chemicals and solvents and they require both skill and patience. The final step is particularly tricky as the highly flammable ether can easily ignite during the positive-pressure filtration (the explosion of vapor-air mixture can obliterate the refinery). If the heroin does ignite, the result is a catastrophic explosion.
## History of heroin traffic
The origins of the present international illegal heroin trade can be traced back to laws passed in many countries in the early 1900s that closely regulated the production and sale of opium and its derivatives including heroin. At first, heroin flowed from countries where it was still legal into countries where it was no longer legal. By the mid-1920s, heroin production had been made illegal in many parts of the world. An illegal trade developed at that time between heroin labs in China (mostly in Shanghai and Tianjin) and other nations. The weakness of government in China and conditions of civil war enabled heroin production to take root there. Chinese triad gangs eventually came to play a major role in the heroin trade.
Heroin trafficking was virtually eliminated in the U.S. during World War II due to temporary trade disruptions caused by the war. Japan's war with China had cut the normal distribution routes for heroin and the war had generally disrupted the movement of opium. After the second world war, the Mafia took advantage of the weakness of the postwar Italian government and set up heroin labs in Sicily. The Mafia took advantage of Sicily's location along the historic route opium took from Iran westward into Europe and the United States. Large scale international heroin production effectively ended in China with the victory of the communists in the civil war in the late 1940s. The elimination of Chinese production happened at the same time that Sicily's role in the trade developed.
Although it remained legal in some countries until after World War II, health risks, addiction, and widespread abuse led most western countries to declare heroin a controlled substance by the latter half of the 20th century.
Between the end of World War II and the 1970s, much of the opium consumed in the west was grown in Iran, but in the late 1960s, under pressure from the U.S. and the United Nations, Iran engaged in anti-opium policies. While opium production never ended in Iran, the decline in production in those countries led to the development of a major new cultivation base in the so-called "Golden Triangle" region in South East Asia. In 1970-71, high-grade heroin laboratories opened in the Golden Triangle. This changed the dynamics of the heroin trade by expanding and decentralizing the trade. Opium production also increased in Afghanistan due to the efforts of Turkey and Iran to reduce production in their respective countries. Lebanon, a traditional opium supplier, also increased its role in the trade during years of civil war.
The Soviet-Afghan war led to increased production in the Pakistani-Afghan border regions, increasing international production of heroin at lower prices in the 1980s. The trade shifted away from Sicily in the late 1970s as various criminal organizations violently fought with each other over the trade. The fighting also led to a stepped-up government law enforcement presence in Sicily. All of this combined to greatly diminish the role of the country in the international heroin trade.
## Trafficking
Traffic is heavy worldwide, with the biggest producer being Afghanistan. According to a U.N.-sponsored survey, as of 2004 Afghanistan accounted for the production of 87 percent of the world's heroin. Opium production in that country has increased rapidly since, reaching an all-time high in 2006. War once again appeared as a facilitator of the trade.
At present, opium poppies are mostly grown in Afghanistan, and in Southeast Asia, especially in the region known as the Golden Triangle straddling Myanmar, Thailand, Vietnam, Laos and Yunnan province in the People's Republic of China. There is also cultivation of opium poppies in the Sinaloa region of Mexico and in Colombia. The majority of the heroin consumed in the United States comes from Mexico and Colombia. Up until 2004, Pakistan was considered one of the biggest opium-growing countries. However, the efforts of Pakistan's Anti-Narcotics Force have since reduced the opium growing area by 59% as of 2001.
Conviction for trafficking in heroin carries the death penalty in most Southeast Asian and some East Asian and Middle Eastern countries (see Use of death penalty worldwide for details), among which Malaysia, Singapore and Thailand are the most strict. The penalty applies even to citizens of countries where the penalty is not in place, sometimes causing controversy when foreign visitors are arrested for trafficking, for example the arrest of nine Australians in Bali or the hanging of Australian citizen Van Tuong Nguyen in Singapore, both in 2005.
Sandra Gregory has written an autobiography covering her experience of getting caught with Heroin at a Thai airport.
# Risks of non-medical use
- For intravenous users of heroin (and any other substance), the use of non-sterile needles and syringes and other related equipment leads to several serious risks:
- the risk of contracting blood-borne pathogens such as HIV and hepatitis
- the risk of contracting bacterial or fungal endocarditis and possibly venous sclerosis
- abscesses caused by transfer of fungus from the skin of lemons, the acidic juice of which can be added to impure heroin to increase its solubility
- Poisoning from contaminants added to "cut" or dilute heroin
- Chronic constipation
- Addiction and an increasing tolerance.
- Physical dependence can result from prolonged use of all opiate and opioids, resulting in withdrawal symptoms on cessation of use.
- Decreased kidney function (although it is not currently known whether this is due to adulterants used to cut the drug)
Many countries and local governments have begun funding programs that supply sterile needles to people who inject illegal drugs in an attempt to reduce these contingent risks and especially the contraction and spread of blood-borne diseases. The Drug Policy Alliance reports that up to 75% of new AIDS cases among women and children are directly or indirectly a consequence of drug use by injection. But despite the immediate public health benefit of needle exchanges, some see such programs as tacit acceptance of illicit drug use. The United States federal government does not operate needle exchanges, although some state and local governments do support needle exchange programs.
A heroin overdose is usually treated with an opioid antagonist, such as naloxone (Narcan) or naltrexone, which has a high affinity for opioid receptors but does not activate them. This blocks heroin and other opioid agonists and causes an immediate return of consciousness and the beginning of withdrawal symptoms when administered intravenously. The half-life of the antagonist is usually much shorter than that of the opiate drugs it is used to block, so the antagonist usually has to be re-administered multiple times until the opiate has been metabolized by the body.
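The need for repeat dosing follows directly from first-order elimination kinetics. The sketch below is a minimal illustration of that point; the half-life values are hypothetical placeholders chosen only to show the shape of the problem, not clinical figures from this article.

```python
def remaining_fraction(hours, half_life_hours):
    """Fraction of an initial amount remaining after `hours`,
    assuming simple first-order (exponential) elimination."""
    return 0.5 ** (hours / half_life_hours)

# Hypothetical, illustrative half-lives: a short-acting antagonist
# versus a longer-acting opioid agonist.
antagonist_half_life = 1.0  # hours (placeholder)
agonist_half_life = 4.0     # hours (placeholder)

for t in (1, 2, 4, 6):
    antagonist = remaining_fraction(t, antagonist_half_life)
    agonist = remaining_fraction(t, agonist_half_life)
    print(f"after {t} h: antagonist {antagonist:.0%} remaining, agonist {agonist:.0%} remaining")
```

Because the antagonist's level falls well below the agonist's long before the agonist is cleared, blockade can wear off while enough opioid remains to re-suppress breathing, which is why repeat administration and continued monitoring are required.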
Depending on drug interactions and numerous other factors, death from overdose can take anywhere from several minutes to several hours due to anoxia because the breathing reflex is suppressed by µ-opioids. An overdose is immediately reversible with an opioid antagonist injection. Heroin overdoses can occur due to an unexpected increase in the dose or purity or due to diminished opiate tolerance. However, most fatalities reported as overdoses are probably caused by interactions with other depressant drugs like alcohol or benzodiazepines. It should also be noted that, since heroin can cause nausea and vomiting, a significant number of deaths attributed to heroin overdose are caused by aspiration of vomitus by an unconscious victim.
The LD50 for a physically addicted person is prohibitively high, to the point that there is no general medical consensus on where to place it. Several studies done in the 1920s gave users doses of 1,600–1,800 mg of heroin in one sitting, and no adverse effects were reported. Even for a non-user, the LD50 can be placed above 350 mg though some sources give a figure of between 75 and 375 mg for a 75 kg person.
Street heroin is of widely varying and unpredictable purity. This means that the user may prepare what they consider to be a moderate dose while actually taking far more than intended. Also, those who use the drug after a period of abstinence have tolerances below what they were during active addiction. If a dose comparable to their previous use is taken, the effect is greater than the user intended; in extreme cases an overdose could result.
It has been speculated that an unknown portion of heroin related deaths are the result of an overdose or allergic reaction to quinine, which may sometimes be used as a cutting agent.
A final source of overdose in users comes from place conditioning. Heroin use, like other drug using behaviors, is highly ritualized. While the mechanism has yet to be clearly elucidated, it has been shown that longtime heroin users, immediately before injecting in a common area for heroin use, show an acute increase in metabolism and a surge in the concentration of opiate-metabolizing enzymes. This acute increase, a reaction to a location where the user has repeatedly injected heroin, imbues him or her with a strong (but temporary) tolerance to the toxic effects of the drug. When the user injects in a different location, this environment-conditioned tolerance does not occur, giving the user a much lower-than-expected ability to metabolize the drug. The user's typical dose of the drug, in the face of decreased tolerance, becomes far too high and can be toxic, leading to overdose.
A small percentage of heroin smokers may develop symptoms of toxic leukoencephalopathy. This is believed to be caused by an uncommon adulterant that is only active when heated. Symptoms include slurred speech and difficulty walking.
# Harm reduction approaches to heroin
Proponents of the harm reduction philosophy seek to minimize the harms that arise from the recreational use of heroin. Safer means of taking the drug, such as smoking or nasal, oral and rectal insertion, are encouraged, due to injection having higher risks of overdose, infections and blood-borne viruses.
Where the strength of the drug is unknown, users are encouraged to try a small amount first to gauge the strength, to minimize the risks of overdose. For the same reason, poly drug use (the use of two or more drugs at the same time) is discouraged. Users are also encouraged to not use heroin on their own, as others can assist in the event of an overdose.
Heroin users who choose to inject should always use new needles, syringes, spoons/steri-cups and filters every time they inject and not share these with other users. Governments that support a harm reduction approach often run needle and syringe exchange programs, which supply new needles and syringes on a confidential basis, along with education on proper filtering prior to injection, safer injection techniques and safe disposal of used injecting gear. Other equipment used when preparing heroin for injection may also be supplied, including citric acid sachets/vitamin C sachets, steri-cups, filters, alcohol pre-injection swabs, sterile water ampules and tourniquets (to discourage the use of shoe laces or belts).
# Withdrawal
The withdrawal syndrome from heroin may begin within 6 to 24 hours of discontinuation of sustained use of the drug; however, this time frame can fluctuate with the degree of tolerance as well as the amount of the last consumed dose. Symptoms may include: sweating, malaise, anxiety, depression, priapism, extra sensitivity of the genitals in females, general feeling of heaviness, cramp-like pains in the limbs, yawning, tears, sleep difficulties (insomnia), cold sweats, chills, severe muscle and bone aches not precipitated by any physical trauma, nausea and vomiting, diarrhea, goose bumps, cramps, and fever. Many users also complain of a painful condition, the so-called "itchy blood", which often results in compulsive scratching that causes bruises and sometimes ruptures the skin, leaving scabs. Abrupt termination of heroin use causes muscle spasms in the legs and arms of the user (restless leg syndrome). Users taking the "cold turkey" approach (withdrawal without using symptom-reducing or counteractive drugs), or undergoing induced withdrawal with opiate antagonist drugs, are more likely to experience the negative effects of withdrawal in a more pronounced manner.
Two general approaches are available to ease the physical part of opioid withdrawal. The first is to substitute a longer-acting opioid such as methadone or buprenorphine for heroin or another short-acting opioid and then slowly taper the dose.
In the second approach, benzodiazepines such as diazepam (Valium) may temporarily ease the often extreme anxiety of opioid withdrawal. The most common benzodiazepine employed as part of the detox protocol in these situations is oxazepam (Serax). Benzodiazepine use must be prescribed with care because benzodiazepines have an addiction potential, and many opioid users also use other central nervous system depressants, especially alcohol. Also, though unpleasant, opioid withdrawal seldom has the potential to be fatal, whereas complications related to withdrawal from benzodiazepines, barbiturates and alcohol (such as epileptic seizures, cardiac arrest, and delirium tremens) can prove hazardous and are potentially fatal.
Many symptoms of opioid withdrawal are due to rebound hyperactivity of the sympathetic nervous system, which can be suppressed with clonidine (Catapres), a centrally-acting alpha-2 agonist primarily used to treat hypertension. Another drug sometimes used to relieve the "restless legs" symptom of withdrawal is baclofen, a muscle relaxant. Diarrhea can likewise be treated symptomatically with the peripherally active opioid drug loperamide.
Buprenorphine is one of the substances most recently licensed for the substitution of opioids in the treatment of users. Being a partial opioid agonist/antagonist, it develops a lower grade of tolerance than heroin or methadone due to the so-called ceiling effect. It also has less severe withdrawal symptoms than heroin when discontinued abruptly, which should never be done without proper medical supervision. It is usually administered every 24-48 hrs. Buprenorphine is a kappa-opioid receptor antagonist. This gives the drug an anti-depressant effect, increasing physical and intellectual activity. Buprenorphine also acts as a partial agonist at the same μ-receptor where opioids like heroin exhibit their action. Due to its effects on this receptor, all patients whose tolerance is above a certain level are unable to obtain any "high" from other opioids during buprenorphine treatment except for very high doses.
Researchers at Johns Hopkins University have been testing a sustained-release "depot" form of buprenorphine that can relieve cravings and withdrawal symptoms for up to six weeks. A sustained-release formulation would allow for easier administration and adherence to treatment, and reduce the risk of diversion or misuse.
Methadone is another μ-opioid agonist most often used to substitute for heroin in treatment for heroin addiction. Compared to heroin, methadone is well (but slowly) absorbed by the gastrointestinal tract and has a much longer duration of action of approximately 24 hours. Thus methadone maintenance avoids the rapid cycling between intoxication and withdrawal associated with heroin addiction. In this way, methadone has shown some success as a "less harmful substitute"; despite bearing about the same addiction potential as heroin, it is recommended for those who have repeatedly failed to complete withdrawal or have recently relapsed. As of 2005, the μ-opioid agonist buprenorphine is also being used to manage heroin addiction, being a superior, though still imperfect and not yet widely known alternative to methadone. Methadone, since it is longer-acting, produces withdrawal symptoms that appear later than with heroin, but usually last considerably longer and can in some cases be more intense. Methadone withdrawal symptoms can potentially persist for over a month, compared to heroin where significant physical symptoms would subside in 4 days.
Three opioid antagonists are known: naloxone and the longer-acting naltrexone and nalmefene. These medications block the effects of heroin, as well as the other opioids at the receptor site. Recent studies have suggested that the addition of naltrexone may improve the success rate in treatment programs when combined with the traditional therapy.
The University of Chicago undertook preliminary development of a heroin vaccine in monkeys during the 1970s, but it was abandoned. There were two main reasons for this. Firstly, when immunized monkeys had an increase in dose of x16, their antibodies became saturated and the monkey had the same effect from heroin as non-immunized monkeys. Secondly, until they reached the x16 point immunized monkeys would substitute other drugs to get a heroin-like effect. These factors suggested that immunized human users would simply either take massive quantities of heroin, or switch to other drugs.
There is also a controversial treatment for heroin addiction based on an Iboga-derived African drug, ibogaine. Many people travel abroad for ibogaine treatments that generally interrupt substance use disorders for 3-6 months or more in up to 80% of patients. Relapse may occur when the person returns home to their normal environment however, where drug seeking behavior may return in response to social and environmental cues. Ibogaine treatments are carried out in several countries including Mexico and Canada as well as, in South and Central America and Europe. Opioid withdrawal therapy is the most common use of ibogaine. Some patients find ibogaine therapy more effective when it is given several times over the course of a few months or years. A synthetic derivative of ibogaine, 18-methoxycoronaridine was specifically designed to overcome cardiac and neurotoxic effects seen in some ibogaine research but, the drug has not yet found its way into clinical research..
# Heroin prescription
The UK Department of Health's Rolleston Committee report in 1926 established the British approach to heroin prescription to users, which was maintained for the next forty years: dealers were prosecuted, but doctors could prescribe heroin to users when withdrawing from it would cause harm or severe distress to the patient. This "policing and prescribing" policy effectively controlled the perceived heroin problem in the UK until 1959 when the number of heroinists doubled every sixteenth month during a period of ten years, 1959-1968. . The failure changed the attitudes; in 1964 only specialized clinics and selected approved doctors were allowed to prescribe heroin to users. The law was changed in 1968 in a more restrictive direction. From the 1970s, the emphasis shifted to abstinence and the prescription of methadone, until now only a small number of users in the UK are prescribed heroin.
In 1994 Switzerland began a trial program featuring a heroin prescription for users not well suited for withdrawal programs—e.g. those that had failed multiple withdrawal programs. The aim is maintaining the health of the user in order to avoid medical problems stemming from low-quality street heroin. Reducing drug-related crime was another goal. Users can more easily get or maintain a paid job through the program as well. The first trial in 1994 began with 340 users and it was later expanded to 1000 after medical and social studies suggested its continuation. Participants are prescribed to inject heroin in specially designed pharmacies for about US $13 per dose.
The success of the Swiss trials led German, Dutch, and Canadian cities to try out their own heroin prescription programs. Some Australian cities (such as Sydney) have trialed legal heroin supervised injecting centers, in line with other wider harm minimization programs. Heroin is unavailable on prescription however, and remains illegal outside the injecting room, and effectively decriminalized inside the injecting room.
# Drug interactions
Opioids are strong central nervous system depressants, but regular users develop physiological tolerance allowing gradually increased dosages. In combination with other central nervous system depressants, heroin may still kill even experienced users, particularly if their tolerance to the drug has reduced or the strength of their usual dose has increased.
Toxicology studies of heroin-related deaths reveal frequent involvement of other central nervous system depressants, including alcohol, benzodiazepines such as temazepam (Restoril; Normison), and, to a rising degree, methadone. Ironically, benzodiazepines are often used in the treatment of heroin addiction while they cause much more severe withdrawal symptoms.
Cocaine sometimes proves to be fatal when used in combination with heroin. Though "speedballs" (when injected) or "moonrocks" (when smoked) are a popular mix of the two drugs among users, combinations of stimulants and depressants can have unpredictable and sometimes fatal results. In the United States in early 2006, a rash of deaths was attributed to either a combination of fentanyl and heroin, or pure fentanyl masquerading as heroin particularly in the Detroit Metro Area; one news report refers to the combination as 'laced heroin', though this is likely a generic rather than a specific term. | Heroin
Editor-In-Chief: C. Michael Gibson, M.S., M.D.
Street Names/Slangs: Aunt Hazel, birdie powder, Black, Black Eagle, Black Pearl, Black Stuff, Black Tar, Boy, Brown, Brown Crystal, Brown Rhine, Brown Sugar Junk, Brown Tape, Chiba or Chiva, China White, dog food, Dope White, Dr. Feelgood, Dragon, H, He, hong-yen, Junk, lemonade, Mexican Brown, Mexican Horse, Mexican Mud, Mud, Number 4, Number 8, old Steve, pangonadalot, Sack, Skag, Skunk Number 3, Smac, Snow, Snowball Scat, Tar, White Boy, White Girl, White Horse, White Lady, White Nurse, White Stuff, witch hazel
# Overview
Heroin (INN: diacetylmorphine, BAN: diamorphine) is a semi-synthetic opioid synthesized from morphine, a derivative of the opium poppy. It is the 3,6-diacetyl ester of morphine (hence diacetylmorphine). The white crystalline form is commonly the hydrochloride salt, diacetylmorphine hydrochloride.
As with other opiates, heroin is used both as a pain-killer and a recreational drug.
One of the most common methods of heroin use is via intravenous injection. When taken orally, heroin undergoes extensive first-pass metabolism via deacetylation, making it a prodrug for the systemic delivery of morphine.[1] When the drug is injected, however, it avoids this first-pass effect, very rapidly crossing the blood-brain barrier due to the presence of the acetyl groups, which render it much more lipid-soluble than morphine itself.[2] Once in the brain, it is deacetylated into 3- and 6-monoacetylmorphine and morphine, which bind to μ-opioid receptors resulting in intense euphoria with the feeling centered in the gut.
Frequent administration has a high potential for causing addiction and may quickly lead to tolerance. If continual, sustained use of heroin for as little as three days is stopped abruptly, withdrawal symptoms can appear; this onset of physical dependence is faster than with other common painkillers such as oxycodone and hydrocodone.[3][4]
Internationally, heroin is controlled under Schedules I and IV of the Single Convention on Narcotic Drugs.[5] It is illegal to manufacture, possess, or sell heroin in the United States and the UK. However, under the name diamorphine, heroin is a legal prescription drug in the United Kingdom. Popular street names for heroin include black tar, smack, junk, skag, horse, brain, chaw, chiva, and others. These are specific references to heroin and not used to describe any other drug. Dope could be used to refer to heroin, but may also indicate other drugs, from laudanum a century ago to nearly any contemporary recreational drug.
# History
The opium poppy was cultivated in lower Mesopotamia as long ago as 3400 BC.[6] The chemical analysis of opium in the 19th century revealed that most of its activity could be ascribed to two ingredients, codeine and morphine.
Heroin was first processed in 1874 by C.R. Alder Wright, an English chemist working at St. Mary's Hospital Medical School in London, England. He had been experimenting with combining morphine with various acids. He boiled anhydrous morphine alkaloid with acetic anhydride over a stove for several hours and produced a more potent, acetylated form of morphine, now called diacetylmorphine. The compound was sent to F.M. Pierce of Owens College in Manchester for analysis, and Pierce reported its effects back to Wright.
Wright's invention, however, did not lead to any further developments, and heroin only became popular after it was independently re-synthesized 23 years later by another chemist, Felix Hoffmann. Hoffmann, working at the Bayer pharmaceutical company in Elberfeld, Germany, was instructed by his supervisor Heinrich Dreser to acetylate morphine with the objective of producing codeine, a natural derivative of the opium poppy, similar to morphine but less potent and less addictive. But instead of producing codeine, the experiment produced an acetylated form of morphine that was actually 1.5-2 times more potent than morphine itself. Bayer would name the substance "heroin", probably from the word heroisch, German for heroic, because in field studies people using the medicine felt "heroic".[8]
From 1898 through 1910, heroin was marketed as a non-addictive morphine substitute and cough medicine for children. Bayer marketed heroin as a cure for morphine addiction before it was discovered that heroin is converted to morphine when metabolized in the liver; as such, heroin was essentially a faster-acting form of morphine. The company was embarrassed by this finding, which became a historical blunder for Bayer.[9]
As with aspirin, Bayer lost some of its trademark rights to heroin following the German defeat in World War I.[10]
In the United States, the Harrison Narcotics Tax Act was passed in 1914 to control the sale and distribution of heroin. The law did allow heroin to be prescribed and sold for medical purposes; in practice, recreational users could often still be supplied with heroin legally. In 1924, the United States Congress passed additional legislation banning the sale, importation or manufacture of heroin in the United States. It is now a Schedule I substance, and is thus illegal in the United States.
# Usage and effects
Heroin is used as a recreational drug for the intense euphoria it induces, which diminishes with increased tolerance. Its popularity with recreational drug users, compared to morphine and other opiates, stems from its perceived difference in effects;[12] this perception is not supported by clinical research.
In controlled studies comparing the physiological and subjective effects of injected heroin and morphine in post-addicts, subjects showed no preference for either drug when the drugs were administered on a single-injection basis. Equipotent injected doses had comparable courses of action, with no difference in their ability to induce euphoria, ambition, nervousness, relaxation, drowsiness, or sleepiness.[13] Data acquired from short-term addiction studies did not indicate that tolerance to heroin develops more rapidly than tolerance to morphine. The findings have been discussed in relation to the physicochemical properties of heroin and morphine and the metabolism of heroin. When heroin and morphine were compared to other opioids (hydromorphone, fentanyl, oxycodone, and meperidine), post-addicts showed a strong preference for heroin and morphine, suggesting that heroin and morphine particularly lend themselves to abuse and addiction. Morphine and heroin were also much more likely to produce euphoria and other subjective effects than most other opioid analgesics.[14][15] Heroin can be administered several ways, including snorting and injection, and may be smoked by inhaling its vapors when heated, i.e. "chasing the dragon".
Some users mix heroin with cocaine in a "speedball" or "snowball", which is usually injected intravenously, smoked, or dissolved in water and then snorted. The mixture produces a more intense rush than heroin alone but is more dangerous, because combining a short-acting stimulant with a longer-acting depressant increases the risk of seizure, or of overdose on one or both drugs.
Once in the brain, heroin is rapidly metabolized to morphine by removal of the acetyl groups and is thus a prodrug. Morphine is unable to cross the blood-brain barrier as quickly as heroin, which gives heroin a subjectively stronger 'high'. In either case, a morphine molecule binds with opioid receptors, inducing the subjective, opioid high.
The onset of heroin's effects depends upon the method of administration. Taken orally, heroin is completely metabolized in vivo to morphine before crossing the blood-brain barrier, so the effects are the same as with oral morphine. Snorting results in an onset within 3 to 5 minutes; smoking produces a milder effect almost immediately (within 7 to 11 seconds) that then intensifies; intravenous injection induces a rush and euphoria usually taking effect within 30 seconds; intramuscular and subcutaneous injection take effect within 3 to 5 minutes.
Heroin metabolizes into morphine, a μ-opioid (mu-opioid) agonist. It acts on endogenous μ-opioid receptors that are spread in discrete packets throughout the brain, spinal cord and gut in almost all mammals. Heroin, like other opioids, acts at receptors that are normally activated by four endogenous neurotransmitters: β-endorphin, dynorphin, leu-enkephalin, and met-enkephalin. The body responds to heroin in the brain by reducing (and sometimes stopping) production of the endogenous opioids when heroin is present. Endorphins are regularly released in the brain and nerves, attenuating pain. Their other functions are still obscure, but are probably related to the effects produced by heroin besides analgesia (antitussive, anti-diarrheal). The reduced endorphin production in heroin users creates a dependence on the heroin, and the cessation of heroin results in extremely uncomfortable symptoms including pain (even in the absence of physical trauma). This set of symptoms is called withdrawal syndrome. It has an onset 6 to 8 hours after the last dose of heroin.
The heroin dose used for recreational purposes depends strongly on the level of addiction. A first-time user typically uses between 5 and 20 mg of heroin, while a typical heavy addict may use between 300 and 500 mg per day.[16]
Large doses of heroin can be fatal. The drug can be used for suicide or as a murder weapon; the serial killer Dr Harold Shipman used it on his victims, as did Dr John Bodkin Adams (see his victim, Edith Alice Morrell). It can sometimes be difficult to determine whether a heroin death was an accident, suicide or murder; the deaths of Sid Vicious, Joseph Krecker, Janis Joplin, Tim Buckley, Jim Morrison, Layne Staley, Kurt Cobain, and Bradley Nowell have all been attributed to heroin overdose.[17]
# Regulation
In the United States, heroin is a Schedule I drug under the Controlled Substances Act of 1970, making it illegal to possess without a DEA license. Possession of more than 100 grams of heroin or a mixture containing heroin is punishable by a mandatory minimum sentence of 5 years of imprisonment in a federal prison.
In Canada, heroin is a controlled substance under Schedule I of the Controlled Drugs and Substances Act (CDSA). Every person who seeks or obtains heroin from a practitioner without disclosing other prescriptions obtained within the previous 30 days is guilty of an indictable offense and liable to imprisonment for a term not exceeding seven years. Anyone convicted of possession for the purpose of trafficking is liable to imprisonment for life.
In Hong Kong, heroin is regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. It can be used legally only by health professionals and for university research purposes, and can be dispensed by pharmacists under a prescription. Anyone who supplies heroin without a prescription can be fined $10,000 (HKD). The penalty for trafficking or manufacturing heroin is a $5,000,000 (HKD) fine and life imprisonment. Possession of heroin for consumption without a license from the Department of Health is illegal, punishable by a $1,000,000 (HKD) fine and/or 7 years' imprisonment.
In the United Kingdom, heroin is available by prescription, though it is a restricted Class A drug. According to the British National Formulary (BNF) edition 50, diamorphine hydrochloride may be used in the treatment of acute pain, myocardial infarction, acute pulmonary oedema, and chronic pain. The treatment of chronic non-malignant pain must be supervised by a specialist. The BNF notes that all opioid analgesics cause dependence and tolerance but that this is "no deterrent in the control of pain in terminal illness". When used in the palliative care of cancer patients, heroin is often injected using a syringe driver.
# Production and trafficking: The Golden Triangle
## Manufacturing
Heroin is produced for the black market through an opium refinement process: first, morphine is isolated from opium. This crude morphine is then acetylated by heating with acetic anhydride. Purification of the resulting crude heroin and conversion to the hydrochloride salt provides a water-soluble white or yellowish powder.
Crude opium is carefully dissolved in hot water, but the resulting hot soup is not boiled. Mechanical impurities - twigs - are scooped out together with the foam. The mixture is then made alkaline by gradual addition of lime, which causes a number of unwelcome components present in opium to precipitate out of the solution (these impurities include unwanted alkaloids, resins and proteins). The precipitate is removed by filtration through a cloth, washed with additional water and discarded. The filtrates, containing the water-soluble calcium salt of morphine, are then acidified by careful addition of ammonium chloride. This causes the morphine to precipitate. The morphine precipitate is collected by filtration and dried before the next step. The crude morphine (which makes up only about 10% of the weight of the opium used) is then heated together with acetic anhydride at 85 °C (185 °F) for six hours. The reaction mixture is then cooled, diluted with water, alkalized with sodium carbonate, and the precipitated crude heroin is filtered off and washed with water. This crude, water-insoluble free-base product (which is by itself usable, for smoking) is further purified and decolourised by dissolution in hot alcohol, filtration with activated charcoal and concentration of the filtrates. The concentrated solution is then acidified with hydrochloric acid, diluted with ether, and the precipitated white hydrochloride salt of heroin is collected by filtration. This precipitate is the so-called "no. 4 heroin", the standard product exported to the Western markets. (Side-product residues from purification, or the crude free-base product itself, are also available on the markets as "tar heroin" - a cheap substitute of inferior quality.)
The initial stage of opium refining - the isolation of morphine - is relatively easy to perform in rudimentary settings, even by substituting suitable fertilizers for pure chemical reagents. However, the later steps (acetylation, purification, precipitation as the hydrochloride) are more involved: they use large quantities of dangerous chemicals and solvents and they require both skill and patience. The final step is particularly tricky, as the highly flammable ether can easily ignite during the positive-pressure filtration; an explosion of the vapor-air mixture can obliterate the refinery.
## History of heroin traffic
The origins of the present international illegal heroin trade can be traced back to laws passed in many countries in the early 1900s that closely regulated the production and sale of opium and its derivatives including heroin. At first, heroin flowed from countries where it was still legal into countries where it was no longer legal. By the mid-1920s, heroin production had been made illegal in many parts of the world. An illegal trade developed at that time between heroin labs in China (mostly in Shanghai and Tianjin) and other nations. The weakness of government in China and conditions of civil war enabled heroin production to take root there. Chinese triad gangs eventually came to play a major role in the heroin trade.
Heroin trafficking was virtually eliminated in the U.S. during World War II due to temporary trade disruptions caused by the war. Japan's war with China had cut the normal distribution routes for heroin and the war had generally disrupted the movement of opium. After the second world war, the Mafia took advantage of the weakness of the postwar Italian government and set up heroin labs in Sicily. The Mafia took advantage of Sicily's location along the historic route opium took from Iran westward into Europe and the United States. Large scale international heroin production effectively ended in China with the victory of the communists in the civil war in the late 1940s. The elimination of Chinese production happened at the same time that Sicily's role in the trade developed.
Although it remained legal in some countries until after World War II, health risks, addiction, and widespread abuse led most western countries to declare heroin a controlled substance by the latter half of the 20th century.
Between the end of World War II and the 1970s, much of the opium consumed in the west was grown in Iran, but in the late 1960s, under pressure from the U.S. and the United Nations, Iran adopted anti-opium policies. While opium production never ended in Iran, the decline in Iranian production led to the development of a major new cultivation base in the so-called "Golden Triangle" region of South East Asia. In 1970-71, high-grade heroin laboratories opened in the Golden Triangle. This changed the dynamics of the heroin trade by expanding and decentralizing it. Opium production also increased in Afghanistan due to the efforts of Turkey and Iran to reduce production in their respective countries. Lebanon, a traditional opium supplier, also increased its role in the trade during its years of civil war.
The Soviet-Afghan war led to increased production in the Pakistani-Afghan border regions and to greater international availability of heroin at lower prices in the 1980s. The trade shifted away from Sicily in the late 1970s as various criminal organizations fought violently with each other over it. The fighting also led to a stepped-up government law enforcement presence in Sicily. All of this combined to greatly diminish Sicily's role in the international heroin trade.
## Trafficking
Traffic is heavy worldwide, with the biggest producer being Afghanistan.[18] According to a U.N.-sponsored survey,[19] as of 2004 Afghanistan accounted for the production of 87 percent of the world's heroin.[20] Opium production in that country has increased rapidly since, reaching an all-time high in 2006. War once again appeared as a facilitator of the trade.[21]
At present, opium poppies are mostly grown in Afghanistan, and in Southeast Asia, especially in the region known as the Golden Triangle straddling Myanmar, Thailand, Vietnam, Laos and Yunnan province in the People's Republic of China. There is also cultivation of opium poppies in the Sinaloa region of Mexico and in Colombia. The majority of the heroin consumed in the United States comes from Mexico and Colombia. Up until 2004, Pakistan was considered one of the biggest opium-growing countries. However, the efforts of Pakistan's Anti-Narcotics Force have since reduced the opium growing area by 59% as of 2001.
Conviction for trafficking heroin carries the death penalty in most Southeast Asian and some East Asian and Middle Eastern countries (see Use of death penalty worldwide for details), among which Malaysia, Singapore and Thailand are the strictest. The penalty applies even to citizens of countries where it is not in place, sometimes causing controversy when foreign visitors are arrested for trafficking, for example the arrest of nine Australians in Bali and the hanging of Australian citizen Van Tuong Nguyen in Singapore, both in 2005.
Sandra Gregory has written an autobiography covering her experience of being caught with heroin at a Thai airport.
# Risks of non-medical use
- For intravenous users of heroin (and any other substance), the use of non-sterile needles and syringes and other related equipment leads to several serious risks:
- the risk of contracting blood-borne pathogens such as HIV and hepatitis
- the risk of contracting bacterial or fungal endocarditis and possibly venous sclerosis
- abscesses caused by transfer of fungus from the skin of lemons, the acidic juice of which can be added to impure heroin to increase its solubility
- Poisoning from contaminants added to "cut" or dilute heroin
- Chronic constipation
- Addiction and an increasing tolerance.
- Physical dependence can result from prolonged use of all opiates and opioids, resulting in withdrawal symptoms on cessation of use.
- Decreased kidney function (although it is not currently known whether this is due to the heroin itself or to adulterants used in the cut).[22][23][24][25][26]
Many countries and local governments have begun funding programs that supply sterile needles to people who inject illegal drugs in an attempt to reduce these contingent risks and especially the contraction and spread of blood-borne diseases. The Drug Policy Alliance reports that up to 75% of new AIDS cases among women and children are directly or indirectly a consequence of drug use by injection. But despite the immediate public health benefit of needle exchanges, some see such programs as tacit acceptance of illicit drug use. The United States federal government does not operate needle exchanges, although some state and local governments do support needle exchange programs.
A heroin overdose is usually treated with an opioid antagonist, such as naloxone (Narcan) or naltrexone, which has a high affinity for opioid receptors but does not activate them. This blocks heroin and other opioid agonists from acting at the receptor and causes an immediate return of consciousness and the beginning of withdrawal symptoms when administered intravenously. The half-life of the antagonist is usually much shorter than that of the opiate drugs it is used to block, so the antagonist usually has to be re-administered multiple times until the opiate has been metabolized by the body.
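The need for repeat dosing can be sketched with simple first-order elimination; the half-life figures below are typical values from the pharmacology literature, used here only for illustration and not taken from this article:

```latex
\text{fraction of drug remaining after time } t = \left(\tfrac{1}{2}\right)^{t/t_{1/2}},
\qquad
t_{1/2}(\text{naloxone}) \approx 0.5\text{--}1.5\ \text{h}
\quad\ll\quad
t_{1/2}(\text{morphine}) \approx 2\text{--}3\ \text{h}
```

Because the antagonist is cleared several times faster than the opioid it is blocking, its receptor blockade can wear off while enough opioid remains to suppress breathing again, which is why repeated doses are given.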
Depending on drug interactions and numerous other factors, death from overdose can take anywhere from several minutes to several hours, due to anoxia resulting from suppression of the breathing reflex by µ-opioids. An overdose is immediately reversible with an opioid antagonist injection. Heroin overdoses can occur due to an unexpected increase in the dose or purity, or due to diminished opiate tolerance. However, most fatalities reported as overdoses are probably caused by interactions with other depressant drugs such as alcohol or benzodiazepines.[27] In addition, since heroin can cause nausea and vomiting, a significant number of deaths attributed to heroin overdose are caused by aspiration of vomitus by an unconscious victim.
The LD50 for a physically dependent person is so elevated by tolerance that there is no general medical consensus on where to place it. Several studies done in the 1920s gave users doses of 1,600–1,800 mg of heroin in one sitting, and no adverse effects were reported. Even for a non-user, the LD50 can be placed above 350 mg, though some sources give a figure of between 75 and 375 mg for a 75 kg person.[28]
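As a back-of-envelope reading of the figures just quoted (a reconstruction from the article's own numbers, not an additional source), the non-user range of 75–375 mg for a 75 kg person corresponds to roughly 1–5 mg per kilogram of body weight:

```latex
\frac{75\ \text{mg}}{75\ \text{kg}} = 1\ \text{mg/kg}
\qquad\text{and}\qquad
\frac{375\ \text{mg}}{75\ \text{kg}} = 5\ \text{mg/kg}
```

The 1,600–1,800 mg doses tolerated in the 1920s studies cited above illustrate how far tolerance can shift this range upward.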
Street heroin is of widely varying and unpredictable purity. This means that the user may prepare what they consider to be a moderate dose while actually taking far more than intended. Also, those who use the drug after a period of abstinence have a lower tolerance than they did during active addiction. If a dose comparable to their previous use is taken, the effect is greater than the user intended, and in extreme cases an overdose can result.
It has been speculated that an unknown portion of heroin related deaths are the result of an overdose or allergic reaction to quinine, which may sometimes be used as a cutting agent.[29]
A final source of overdose in users comes from place conditioning. Heroin use, like other drug using behaviors, is highly ritualized. While the mechanism has yet to be clearly elucidated, it has been shown that longtime heroin users, immediately before injecting in a common area for heroin use, show an acute increase in metabolism and a surge in the concentration of opiate-metabolizing enzymes. This acute increase, a reaction to a location where the user has repeatedly injected heroin, imbues him or her with a strong (but temporary) tolerance to the toxic effects of the drug. When the user injects in a different location, this environment-conditioned tolerance does not occur, giving the user a much lower-than-expected ability to metabolize the drug. The user's typical dose of the drug, in the face of decreased tolerance, becomes far too high and can be toxic, leading to overdose.[30]
A small percentage of heroin smokers may develop symptoms of toxic leukoencephalopathy. This is believed to be caused by an uncommon adulterant that is only active when heated. Symptoms include slurred speech and difficulty walking.
# Harm reduction approaches to heroin
Proponents of the harm reduction philosophy seek to minimize the harms that arise from the recreational use of heroin. Safer means of taking the drug, such as smoking or nasal, oral and rectal administration, are encouraged, because injection carries higher risks of overdose, infection and blood-borne viruses.
Where the strength of the drug is unknown, users are encouraged to try a small amount first to gauge the strength, to minimize the risks of overdose. For the same reason, poly drug use (the use of two or more drugs at the same time) is discouraged. Users are also encouraged to not use heroin on their own, as others can assist in the event of an overdose.
Heroin users who choose to inject should always use new needles, syringes, spoons/steri-cups and filters every time they inject and should not share these with other users. Governments that support a harm reduction approach often run needle and syringe exchange programs, which supply new needles and syringes on a confidential basis, along with education on proper filtering prior to injection, safer injection techniques and safe disposal of used injecting gear. Other equipment used when preparing heroin for injection may also be supplied, including citric acid or vitamin C sachets, steri-cups, filters, alcohol pre-injection swabs, sterile water ampoules and tourniquets (to discourage the use of shoe laces or belts).
# Withdrawal
The withdrawal syndrome from heroin may begin within 6 to 24 hours of discontinuation of sustained use of the drug; however, this time frame can fluctuate with the degree of tolerance as well as the amount of the last consumed dose. Symptoms may include: sweating, malaise, anxiety, depression, priapism, extra sensitivity of the genitals in females, a general feeling of heaviness, cramp-like pains in the limbs, yawning, tears, sleep difficulties (insomnia), cold sweats, chills, severe muscle and bone aches not precipitated by any physical trauma, nausea and vomiting, diarrhea, goose bumps, cramps, and fever.[31][32] Many users also complain of a painful condition, the so-called "itchy blood", which often results in compulsive scratching that causes bruises and sometimes ruptures the skin, leaving scabs. Abrupt termination of heroin use causes muscle spasms in the legs and arms of the user (restless leg syndrome). Users taking the "cold turkey" approach (withdrawal without symptom-reducing or counteractive drugs), or undergoing induced withdrawal with opiate antagonist drugs, are more likely to experience the negative effects of withdrawal in a more pronounced manner.
Two general approaches are available to ease the physical part of opioid withdrawal. The first is to substitute a longer-acting opioid such as methadone or buprenorphine for heroin or another short-acting opioid and then slowly taper the dose.
In the second approach, benzodiazepines such as diazepam (Valium) may temporarily ease the often extreme anxiety of opioid withdrawal. The most common benzodiazepine employed as part of the detox protocol in these situations is oxazepam (Serax). Benzodiazepines must be prescribed with care because they have an addiction potential of their own, and many opioid users also use other central nervous system depressants, especially alcohol. Also, though unpleasant, opioid withdrawal is seldom fatal, whereas complications of withdrawal from benzodiazepines, barbiturates and alcohol (such as epileptic seizures, cardiac arrest, and delirium tremens) can prove hazardous and are potentially fatal.
Many symptoms of opioid withdrawal are due to rebound hyperactivity of the sympathetic nervous system, which can be suppressed with clonidine (Catapres), a centrally-acting alpha-2 agonist primarily used to treat hypertension. Another drug sometimes used to relieve the "restless legs" symptom of withdrawal is baclofen, a muscle relaxant. Diarrhea can likewise be treated symptomatically with the peripherally active opioid drug loperamide.
Buprenorphine is one of the substances most recently licensed for opioid substitution in the treatment of users. Being a partial opioid agonist/antagonist, it produces a lower degree of tolerance than heroin or methadone due to the so-called ceiling effect. It also has less severe withdrawal symptoms than heroin when discontinued abruptly, which should never be done without proper medical supervision. It is usually administered every 24-48 hours. Buprenorphine is also a kappa-opioid receptor antagonist, which gives the drug an anti-depressant effect, increasing physical and intellectual activity. Buprenorphine acts as a partial agonist at the same μ-receptor where opioids like heroin exert their action. Due to its effects on this receptor, patients whose tolerance is above a certain level are unable to obtain any "high" from other opioids during buprenorphine treatment except at very high doses.
Researchers at Johns Hopkins University have been testing a sustained-release "depot" form of buprenorphine that can relieve cravings and withdrawal symptoms for up to six weeks.[33] A sustained-release formulation would allow for easier administration and adherence to treatment, and reduce the risk of diversion or misuse.
Methadone is another μ-opioid agonist, and the one most often used to substitute for heroin in the treatment of heroin addiction. Compared to heroin, methadone is well (but slowly) absorbed by the gastrointestinal tract and has a much longer duration of action of approximately 24 hours. Thus methadone maintenance avoids the rapid cycling between intoxication and withdrawal associated with heroin addiction. In this way, methadone has shown some success as a "less harmful substitute"; despite bearing about the same addiction potential as heroin, it is recommended for those who have repeatedly failed to complete withdrawal or have recently relapsed. As of 2005, the partial μ-opioid agonist buprenorphine is also being used to manage heroin addiction, as a superior, though still imperfect and not yet widely known, alternative to methadone. Because methadone is longer-acting, its withdrawal symptoms appear later than with heroin but usually last considerably longer and can in some cases be more intense. Methadone withdrawal symptoms can potentially persist for over a month, whereas the significant physical symptoms of heroin withdrawal typically subside within about 4 days.
Three opioid antagonists are known: naloxone and the longer-acting naltrexone and nalmefene. These medications block the effects of heroin, as well as the other opioids at the receptor site. Recent studies have suggested that the addition of naltrexone may improve the success rate in treatment programs when combined with the traditional therapy.
The University of Chicago undertook preliminary development of a heroin vaccine in monkeys during the 1970s, but it was abandoned for two main reasons. Firstly, when the dose given to immunized monkeys was increased 16-fold, their antibodies became saturated and the monkeys experienced the same effect from heroin as non-immunized monkeys. Secondly, until they reached that 16-fold dose, immunized monkeys would substitute other drugs to obtain a heroin-like effect. These factors suggested that immunized human users would simply either take massive quantities of heroin or switch to other drugs.
There is also a controversial treatment for heroin addiction based on ibogaine, a drug derived from the African Iboga plant. Many people travel abroad for ibogaine treatments, which generally interrupt substance use disorders for 3-6 months or more in up to 80% of patients.[34] Relapse may occur when the person returns home to their normal environment, however, where drug-seeking behavior may return in response to social and environmental cues. Ibogaine treatments are carried out in several countries, including Mexico and Canada, as well as in South and Central America and Europe. Opioid withdrawal therapy is the most common use of ibogaine. Some patients find ibogaine therapy more effective when it is given several times over the course of a few months or years. A synthetic derivative of ibogaine, 18-methoxycoronaridine, was specifically designed to overcome the cardiac and neurotoxic effects seen in some ibogaine research, but the drug has not yet found its way into clinical research.
# Heroin prescription
The UK Department of Health's Rolleston Committee report in 1926 established the British approach to prescribing heroin to users, which was maintained for the next forty years: dealers were prosecuted, but doctors could prescribe heroin to users when withdrawing from it would cause harm or severe distress to the patient. This "policing and prescribing" policy effectively controlled the perceived heroin problem in the UK until 1959, after which the number of heroin users doubled roughly every sixteen months over the ten-year period 1959-1968.[35] This failure changed attitudes; in 1964 only specialized clinics and selected approved doctors were allowed to prescribe heroin to users, and the law was changed again in 1968 in a more restrictive direction. From the 1970s the emphasis shifted to abstinence and the prescription of methadone, and today only a small number of users in the UK are prescribed heroin.[36]
In 1994, Switzerland began a trial program of heroin prescription for users not well suited to withdrawal programs, e.g. those who had failed multiple withdrawal programs. The aim is to maintain the health of the user in order to avoid medical problems stemming from low-quality street heroin; reducing drug-related crime was another goal, and users can also more easily get or maintain a paid job through the program. The first trial in 1994 began with 340 users and was later expanded to 1000 after medical and social studies supported its continuation. Participants inject their prescribed heroin in specially designed pharmacies for about US$13 per dose.[37]
The success of the Swiss trials led German, Dutch,[38] and Canadian[39] cities to try out their own heroin prescription programs.[40] Some Australian cities (such as Sydney) have trialed legal supervised injecting centers for heroin, in line with other, wider harm minimization programs. Heroin is not available on prescription there, however; it remains illegal outside the injecting room and is effectively decriminalized inside it.
# Drug interactions
Opioids are strong central nervous system depressants, but regular users develop physiological tolerance allowing gradually increased dosages. In combination with other central nervous system depressants, heroin may still kill even experienced users, particularly if their tolerance to the drug has reduced or the strength of their usual dose has increased.
Toxicology studies of heroin-related deaths reveal frequent involvement of other central nervous system depressants, including alcohol, benzodiazepines such as temazepam (Restoril; Normison), and, to a rising degree, methadone. Ironically, benzodiazepines are often used in the treatment of heroin addiction even though their own withdrawal syndrome can be much more severe.
Cocaine sometimes proves to be fatal when used in combination with heroin. Though "speedballs" (when injected) or "moonrocks" (when smoked) are a popular mix of the two drugs among users, combinations of stimulants and depressants can have unpredictable and sometimes fatal results. In the United States in early 2006, a rash of deaths was attributed to either a combination of fentanyl and heroin, or pure fentanyl masquerading as heroin particularly in the Detroit Metro Area; one news report refers to the combination as 'laced heroin', though this is likely a generic rather than a specific term.[41]
Diaper
# Overview
A diaper (in North America) or nappy (in Britain, many Commonwealth countries and Ireland) is an absorbent garment worn by individuals who are unable to control their bladder or bowel movements, or who are unable to reach the toilet when needed. The purpose of a diaper is to contain mess and keep the wearer dry and comfortable for several hours at a time. When diapers become full and can no longer hold any more waste, they require changing; this process is generally performed by a secondary person such as a parent or caregiver. Failure to change a diaper regularly can result in diaper rash.
Diapers can be made out of either cloth or disposable materials. Cloth diapers contain several layers of fabric such as terry towelling and can be washed and reused. Disposable diapers contain chemicals which increase absorbency and pull wetness away from skin. The decision to use cloth or disposable diapers is a controversial one, due to issues such as convenience, health, price, and their effect on the environment. Currently, disposable diapers are the most commonly used, with Pampers and Huggies the most well-known and popular brands.
Diapers are primarily worn by infants and children who are not yet potty trained or suffer from bedwetting. However, they can also be worn by adults who suffer from incontinence or in certain circumstances where access to a toilet is not available. These include some elderly people, those with a physical or mental disability, and people working in extreme conditions such as astronauts. Diapers are usually worn out of necessity rather than choice, although there are exceptions. Infantilists and diaper fetishists wear diapers willingly for comfort or sexual gratification.
# History
The problem of clothing infants not yet potty trained is as old as human history. In ancient times, babies were dressed in natural materials such as leaf wraps and animal skins, with the Inuit making diapers out of moss and sealskin and Native Americans packing grass under a cover made of rabbit skin. European societies would wrap their children in strips of linen or wool known as swaddling bands, and in Elizabethan times, children would only have their diapers changed every few days. In countries with warmer climates, babies were kept naked and mothers tried to anticipate their bowel movements so as to avoid mess near their living areas. This method is known as elimination communication and is still used today in some cultures.
In the pioneering days, soiled diapers were rarely washed but simply dried and reused. This resulted in serious skin rashes, and it wasn't until the Industrial Revolution, when people had acquired enough money to buy household furniture, that parents began to make an effort to contain and dispose of their children's waste more carefully. In the nineteenth century, the modern diaper began to take shape and children in Europe and North America were being diapered using cotton material, held in place with a safety pin. Cloth diapers were first mass produced in 1887 by Maria Allen in the United States. When society gained a better understanding of bacteria, viruses, and fungi, mothers began washing their babies' diapers in boiling water in order to reduce the problem of diaper rash.
In the 20th century, the disposable diaper gradually evolved through the inventions of several different people. In 1942, the Swedish paper company Pauliström created the first disposable diaper using sheets of tissue placed inside rubber pants. Four years later, a Westport housewife called Marion Donovan developed a waterproof diaper cover known as the "Boater" using a sheet of plastic from a shower curtain; she was granted four patents for her invention, including the use of plastic snaps as opposed to safety pins. In 1947, George M. Schroder invented the first diaper made with disposable nonwoven fabric. Disposable diapers were introduced to the US in 1949 by Johnson & Johnson and were considered one of the great inventions.
During the 1950s, companies such as Kendall, Parke-Davis, Playtex, and Molnlycke entered the disposable diaper market. In 1956, Procter and Gamble began researching disposable diapers. Vic Mills, who worked for the company, invented "Pampers" while searching for a better product to use on his grandson. Although Pampers were conceptualized in 1959, the diapers themselves were not launched onto the market until 1961. Over the next few decades, the disposable diaper industry boomed, and the competition between Procter and Gamble's Pampers and Kimberly-Clark's Huggies resulted in lower prices and drastic changes to diaper design. Several improvements were made, such as the introduction of refastenable tapes, the "hourglass shape" to reduce bulk at the crotch, and the invention of "super-absorbent" materials.
## Word origin
The word diaper originally referred to the type of cloth rather than its use; "diaper" was the term for a pattern of small repeated geometric shapes, and later came to describe a white cotton or linen fabric with this pattern. The first known reference is in Shakespeare's The Taming of the Shrew: "Another bear the ewer, the third a diaper". The first cloth nappies consisted of a special type of soft tissue sheet, cut into geometric shapes. This is how the term "diaper" acquired a new meaning, and it is still used today for modern disposable diapers. This usage stuck in the United States and Canada, but in Britain the word "nappy" took its place. "Nap" refers to the short fibers that give cloth a hair-like surface, and such fabric was sometimes used to make diapers.
# Types
## Disposable
Since their introduction several decades ago, disposable diapers have seen product innovations including the use of super-absorbent polymers, resealable tapes and elasticised waist bands. They are now much thinner and much more absorbent. The product range has more recently been extended into children’s toilet-training phase with the introduction of training pants and pant diapers.
Modern baby diapers and incontinence products have a layered construction, which allows the transfer and distribution of urine to an absorbent core structure where it is locked in.
- The topsheet closest to the skin is made of soft nonwoven fabric and transfers urine quickly to the layers underneath;
- The distribution layer receives the urine flow and transfers it on to the absorbent core;
- The absorbent core structure is the key component and is made out of a mixture of cellulose pulp and superabsorbent polymers;
- The backsheet is typically made of ‘breathable’ polyethylene film or a nonwoven and film composite which prevents wetness transfer to the bed or clothes.
Disposable diapers now outsell cloth diapers many times over; approximately 18 billion disposable diapers were sold in the USA in 2004.
## Cloth
Cloth diapers are reusable and can be made from natural fibers, man-made materials, or a combination of both. A common choice is industrial cotton, which may be bleached white or left its natural color. Other natural materials (often grown without pesticides), such as bamboo and unbleached hemp, are also used, as is wool. Man-made materials such as microfiber toweling (for absorbency) or polyurethane laminate, known as PUL (for a waterproof layer), may be used. Other popular non-natural fibers are polyester fleece and faux suedecloth, used inside cloth diapers as a "stay-dry" wicking liner because of the non-absorbent properties of synthetic fibers. Elastic is also commonly used.
Pre-formed cloth diapers with snaps or hook and loop fasteners (similar to Velcro) and all-in-one diapers with waterproof exteriors are now available, in addition to the older pre-fold and pin variety. Increasingly popular are "pocket" or "stuffable" diapers, which consist of a water-resistant outer shell sewn with an opening in the back for insertion of absorbent material.
These place much less stress on landfills; however, they also require washing in water with a small amount of detergent to be properly cleaned. Contrary to popular belief, high temperatures are not required, nor is soaking. Nowadays most people "dry-pail" after removal of solid waste and wash on a cold or warm cycle. Most bacteria are removed by this treatment; any that remain can be dealt with simply by line-drying outdoors, where UV exposure kills them.
Cloth diaper-wearing children go through about 6,000 diaper changes. If thrown into a landfill, cotton diapers decompose within six months.
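As a rough sanity check on the 6,000 figure (the per-day and duration numbers below are assumptions for illustration, not values given in the source), roughly six to seven changes a day over about two and a half years of diapering lands in the same range:

```latex
6.5\ \tfrac{\text{changes}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}} \times 2.5\ \text{years} \approx 5{,}900\ \text{changes}
```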
Some cities have a cloth diapering service that delivers clean diapers and picks up soiled ones for a fee.
## Debate
A life cycle analysis is one way to choose between disposable diapers and reusable cloth diapers. This analysis attempts to take into account all the environmental factors, including raw material and energy usage, air and water pollution emissions, and waste management issues. Several such analyses have concluded that, when all factors are taken into account, both types of diapers have roughly the same environmental effect. However, this research has subsequently been shown to be flawed: the number of cloth nappy users studied was much smaller than the number of disposable users, and the people interviewed were not very representative of cloth nappy users. Cloth nappy groups, including the Women's Environmental Network, are campaigning for more balanced research into the subject.
# Changing
The replacing of a soiled diaper is commonly referred to as "diapering" or "diaper changing." Diaper changing is essential to preventing skin irritation of the buttocks, genitalia, and/or the waist. When to change a diaper is the decision of the caregiver. Some people believe that diapers should be changed at fixed times of the day as a routine, such as after naps and after meals. Other people believe that diapers should be changed whenever they feel a change is needed, regardless of timing. Still other people believe a diaper should be changed immediately upon wetting or soiling. And some believe that a diaper should be changed only when the wearer is uncomfortable, the diaper is full, the diaper is leaking, or the wearer has a bowel movement.
To avoid skin irritation, commonly referred to as diaper rash, the diaper of those prone to it should be changed as soon as possible after it is soiled (especially by fecal matter). The combination of urine and feces creates ammonia, which irritates skin and can cause painful redness. During the change, after the buttocks are cleaned and dried, some people use baby oil, barrier cream or baby powder to reduce the possibility of irritation. The most effective means to prevent and treat diaper rash is to expose the buttocks to air and sunshine as often as possible. There are also drying creams based on ingredients such as zinc oxide which can be used to treat diaper rash. Before a diaper is disposed of, either in a diaper pail for washing or in the garbage, as much fecal matter as possible should be removed and placed in a toilet to avoid landfill and ground water contamination.
Viewed by some as unpleasant, diaper changing is often a source of humour. It can provide an excellent opportunity for bonding between parent and child. Tom Selleck and Steve Guttenberg can be seen comically changing a baby's diaper in the 1987 movie Three Men and a Baby.
# Length of use
While awake, most children no longer need diapers past two to four years of age, depending on culture, diaper type, parental habits, and the child's personality. However, some children have problems with daytime or, more often, nocturnal bladder control until age eight or older. Known as enuresis, or more commonly bedwetting, this may occur for a wide variety of reasons and can be either a short-term or a long-standing issue. Because of this, as well as the increasing number of obese infants in developed countries, disposable diaper manufacturers are increasing the sizes of their products so that children can remain in diapers for longer. This has caused some controversy, with family psychologist John Rosemond claiming it is a "slap to the intelligence of a human being that one would allow baby to continue soiling and wetting himself past age two." Pediatrician T. Berry Brazelton, however, believes that toilet training is the child's choice and has encouraged this view in various commercials for Pampers Size 6, a diaper for older children.
Because children are wearing diapers longer, companies have designed special "training pants" which bridge the gap between baby diapers and normal underwear during the toilet training process. These are distinct from diapers in that they mimic underwear and do not require complex fastening, so children can be changed standing up or even change themselves without adult assistance. Studies have shown that the use of training pants instead of diapers can be effective in speeding up toilet training. Larger versions, such as GoodNites, are available for older children and teenagers who have already been toilet trained but continue to suffer from bedwetting. They are intended to be discreet and similar to underwear, so as to avoid alienating those who find wearing diapers at a late age embarrassing. Available in both cloth and disposable versions, they are constructed like a diaper, with an absorbent core and a waterproof shell, and can be worn at any age until the child stops wetting the bed. Because they can be pulled on and off like underpants, children are able to use the toilet if they feel the need, rather than being forced to wet or soil themselves unnecessarily. Whereas most diapers are unisex, training pants often come in gender-specific versions because children become more aware of gender differences as they grow older.
With the development of training pants making it possible for children to change their own diapers, and pediatricians such as Brazelton claiming that forced toilet training can cause lasting psychological and health problems, children are wearing diapers at a much older age than they did historically. Recent studies show that an increasing number of Japanese children are wetting their beds and even wearing diapers full time, well into elementary school. Because of this trend, progressively larger diapers are appearing on the Japanese market. One example is the "Goo.N Refreshing Bigger than Big Size Diapers," intended for seven-year-old boys and girls. Dr Paul of the Children's Health and Wellness website believes that diapering a child can prolong bedwetting, as it sends a "message of permission" to urinate in their sleep. Dr Anthony Page of the Creative Child Online Magazine claims that children can get used to their diapers and begin to view them as a comfort, and that of the children surveyed, most would rather wear diapers than worry about getting up at night to go to the toilet.
# Adult usage
Although generally associated only with infants, diapers are sometimes also worn by older children, youths or adults for a variety of reasons. There may be a medical reason, such as incontinence or bedwetting, why a person is unable to reach a toilet in time. For example, pregnant women must urinate very frequently and urgently, and may therefore decide to wear adult diapers. People who are bedridden, recovering from surgery, or in a wheelchair may also wear diapers because they are unable to access the toilet independently. Because the use of diapers, and incontinence problems in general, is often a cause of significant embarrassment for the sufferer, youth and adult diapers are often referred to instead as incontinence pads.
Many fetishists wear diapers for sexual gratification. People with diaper fetishism have a desire to wear diapers even though it is not a physiological necessity, and may enjoy using their diaper to various degrees, depending on the person. Infantilists wear and use diapers in ageplay, although they are considered distinct from fetishists, as "diaper lovers" are sometimes sexually motivated to wear diapers, whereas "adult babies" wish to regress to the helpless state of a baby. Other sexual uses of diapers include omorashi, rubber or plastic fetishism, and Total Power Exchange in BDSM.
Astronauts wear trunk-like diapers called "Maximum Absorbency Garments", or MAGs, during liftoff and landing. On space shuttle missions, each crew member receives three diapers — for launch, reentry and a spare in case reentry has to be waved off and tried later. The super-absorbent fabric used in disposable diapers, which can hold up to 400 times its weight, was developed so Apollo astronauts could stay on spacewalks and extra-vehicular activity for at least six hours. Originally, only female astronauts would wear diapers, as the collection devices used by men were unsuitable for women; however, reports of the diapers' comfort and effectiveness eventually convinced men to start wearing them as well.
Public awareness of astronaut diapers rose significantly following the arrest of Lisa Nowak, a NASA astronaut charged with attempted murder who gained notoriety in the media for driving 900 miles in an adult diaper so she would not have to stop to urinate. The diapers became fodder for many television comedians, as well as being included in an adaptation of the story in Law & Order: Criminal Intent, despite Nowak's denial that she wore them.
Other situations in which diapers are worn because access to a toilet is unavailable or not allowed include guards who must stay on duty and are not permitted to leave their post; this is sometimes called the "watchman's urinal". It has long been suggested that legislators don a diaper before an extended filibuster, so often that it has been jokingly called "taking to the diaper." There has been at least one such instance, when Strom Thurmond gave a record-setting speech of 24 hours and 18 minutes. Some death row inmates who are about to be executed wear "execution diapers" to collect body fluids expelled during and after their death. Characters in films such as Monster's Ball, Ted Bundy, and Sin City mention or can be seen being diapered before their execution. People diving in diving suits (in former times often standard diving dress) may wear diapers because they are underwater continuously for several hours. Similarly, pilots may wear them on long flights. Some competitive weightlifters choose to wear diapers when they first start out because the pressure makes them urinate involuntarily. It has even been claimed by The Epoch Times that adult diapers are a popular way to avoid long bathroom lines during China's traveling season.
Seann Odoms of Men's Health magazine is well known for his belief that wearing diapers can help people of all ages to maintain healthy bowel function. He himself claims to wear diapers full-time for this purported health benefit. "Diapers," he states, "are nothing other than a more practical and healthy form of underwear. They are the safe and healthy way of living."
# Animal usage
Diapers and diaper-like products are sometimes used on animals (mostly pets, but also sometimes laboratory and working animals). This is often because the animal is not housebroken, but it may also be for older, sick, or injured pets who have become incontinent. In some cases, these are simply baby diapers with holes cut for the tail to fit through. In other cases, they are diaper-like waste collection devices.
Animals that are sometimes diapered include:
- Horses (often so their manure can be used for fertilizer or so the horses can be used in public settings without leaving droppings on the ground). If the horse is hauling, sometimes the diaper is a piece of strong cloth or plastic slung between the horse's hauling harness and the front of the cart or carriage. Some mares are kept specifically for the production of urine, which is collected for Premarin, a hormonal drug.
- Dogs (often when a female is in heat and thus bleeding).
- Monkeys and apes (most monkeys are physically unable to learn control of excretions, which is not a useful ability for tree-dwelling animals. Diapers are most often seen on trained animals who appear on TV shows, in movies, or for live entertainment or educational appearances).
Diesel
# Overview
Diesel or diesel fuel is, in general, any fuel used in diesel engines. Production costs are 25–35% lower than those of regular gasoline. The most common type is a specific fractional distillate of petroleum fuel oil, but alternatives that are not derived from petroleum, such as biodiesel, biomass-to-liquid (BTL) or gas-to-liquid (GTL) diesel, are increasingly being developed and adopted. To distinguish these types, petroleum-derived diesel is increasingly called petrodiesel. Ultra-low sulfur diesel (ULSD) is a standard defining diesel fuel with substantially lowered sulfur content. As of 2007, almost all diesel fuel available in America and Europe is of the ULSD type.
# History
## Etymology
The word "diesel" is derived from the German inventor Rudolf Christian Karl Diesel (March 18, 1858 – September 30, 1913) who in 1892 invented the diesel engine.
## Diesel Engine
Diesel engines are a type of internal combustion engine. Rudolf Diesel originally designed the diesel engine to use vegetable oils as a fuel in order to help support agrarian society and to enable independent craftsmen and artisans to compete with large industry.
# Petroleum diesel
Petroleum diesel, or petrodiesel, is produced from petroleum and is a hydrocarbon mixture, obtained in the fractional distillation of crude oil between 200 °C and 350 °C at atmospheric pressure.
The density of petroleum diesel is about 850 grams per litre, whereas petrol (gasoline) has a density of about 720 g/L, about 15% less. When burnt, diesel typically releases about 138,700 British thermal units (BTU) per US gallon, whereas gasoline releases about 125,000 BTU per US gallon, roughly 10% less. Diesel is generally simpler to refine from petroleum than gasoline. The price of diesel traditionally rises during colder months as demand for heating oil, which is refined in much the same way, also rises. Because of its higher level of pollutants, diesel must undergo additional filtration, which contributes to a sometimes higher cost. In many parts of the United States and throughout the UK, diesel may be priced higher than petrol. Reasons for higher-priced diesel include the shutdown of some refineries in the Gulf of Mexico, the diversion of mass refining capacity to gasoline production, and the recent transfer to ultra-low sulfur diesel (ULSD), which causes infrastructural complications.
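The percentage comparisons above follow directly from the quoted figures; the short script below is only a sanity check on that arithmetic, using the densities and per-gallon energy values given in this paragraph.

```python
# Sanity-check the density and energy comparisons quoted above.
diesel_density, gasoline_density = 850.0, 720.0         # grams per litre
diesel_energy, gasoline_energy = 138_700.0, 125_000.0   # BTU per US gallon

density_gap = (diesel_density - gasoline_density) / diesel_density
energy_gap = (diesel_energy - gasoline_energy) / diesel_energy

print(f"gasoline is about {density_gap:.0%} less dense than diesel")          # ~15%
print(f"gasoline releases about {energy_gap:.0%} less energy per US gallon")  # ~10%
```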
Diesel-powered cars generally have better fuel economy than equivalent gasoline-engined cars and produce less greenhouse gas pollution. Their greater economy is due to the higher energy-per-litre content of diesel fuel and the intrinsic efficiency of the diesel engine. While petrodiesel's 15% higher density results in 15% higher greenhouse gas emissions per litre compared to gasoline, the 20–40% better fuel economy achieved by modern diesel-engined automobiles more than offsets the higher per-litre emissions, so that they produce 10–20% less greenhouse gas emissions than comparable gasoline vehicles. However, the EPA carbon footprint estimates do not include the carbon cost of vehicle manufacture, nor the carbon cost of filtering particulate, sulfate, and nitrate emissions. Biodiesel-powered diesel engines offer substantially greater emission reductions than petrodiesel- or gasoline-powered engines, while retaining most of the fuel economy advantages over conventional gasoline-powered automobiles.
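The offsetting argument above can be made concrete with a back-of-the-envelope calculation using only the figures already quoted (a roughly 15% per-litre penalty against 20–40% better fuel economy). This is a sketch of the reasoning, not an emissions model, and "better fuel economy" is evaluated under both common readings.

```python
# Back-of-the-envelope check of the greenhouse-gas offset argument above.
PER_LITRE_PENALTY = 1.15   # diesel emits ~15% more GHG per litre than gasoline

print("Reading 1: the diesel car burns 20-40% fewer litres per km")
for gain in (0.20, 0.40):
    ghg_ratio = PER_LITRE_PENALTY * (1.0 - gain)
    print(f"  {gain:.0%} less fuel per km -> {1.0 - ghg_ratio:.0%} less GHG per km")

print("Reading 2: the diesel car travels 20-40% more km per litre")
for gain in (0.20, 0.40):
    ghg_ratio = PER_LITRE_PENALTY / (1.0 + gain)
    print(f"  {gain:.0%} more km per litre -> {1.0 - ghg_ratio:.0%} less GHG per km")
# Either reading gives per-km reductions from the low single digits up to about
# 30%, bracketing the 10-20% figure quoted above.
```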
In the past, diesel fuel contained higher quantities of sulfur. European emission standards and preferential taxation have forced oil refineries to dramatically reduce the level of sulfur in diesel fuels. In the United States, more stringent emission standards have been adopted with the transition to ULSD starting in 2006 and becoming mandatory on June 1, 2010 (see also diesel exhaust). U.S. diesel fuel typically also has a lower cetane number (a measure of ignition quality) than European diesel, resulting in worse cold weather performance and some increase in emissions. This is one reason why U.S. drivers of large trucks have increasingly turned to biodiesel fuels with their generally higher cetane ratings.
High levels of sulfur in diesel are harmful for the environment because they prevent the use of catalytic diesel particulate filters to control diesel particulate emissions, as well as more advanced technologies, such as nitrogen oxide (NOx) adsorbers (still under development), to reduce emissions. However, the process for lowering sulfur also reduces the lubricity of the fuel, meaning that additives must be put into the fuel to help lubricate engines. Biodiesel and biodiesel/petrodiesel blends, with their higher lubricity levels, are increasingly being utilized as an alternative.
The U.S. annual consumption of diesel fuel in 2006 was about 190 billion litres (42 billion imperial gallons or 50 billion US gallons).
## Chemical composition
Petroleum-derived diesel is composed of about 75% saturated hydrocarbons (primarily paraffins, including n-, iso-, and cycloparaffins) and 25% aromatic hydrocarbons (including naphthalenes and alkylbenzenes). The average chemical formula for common diesel fuel is C12H23, ranging from approximately C10H20 to C15H28.
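As a small illustration of what the average formula implies, the molar masses of the quoted formulas can be computed from standard atomic masses; this is arithmetic on the formulas above, not a measured property of any particular fuel blend.

```python
# Molar masses implied by the average diesel formula C12H23 and the quoted
# range C10H20 - C15H28, using standard atomic masses.
C_MASS, H_MASS = 12.011, 1.008  # g/mol

def molar_mass(carbons, hydrogens):
    return carbons * C_MASS + hydrogens * H_MASS

for formula, (c, h) in (("C10H20", (10, 20)), ("C12H23", (12, 23)), ("C15H28", (15, 28))):
    print(f"{formula}: {molar_mass(c, h):.1f} g/mol")
# The C12H23 average works out to roughly 167 g/mol.
```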
## Algae, microbes, and water
There has been much discussion and misinformation about algae in diesel fuel. Algae require sunlight to live and grow. As there is no sunlight in a closed fuel tank, no algae can survive there. However, some microbes can survive there, and can feed on the diesel fuel.
These microbes form a colony that lives at the fuel/water interface. They grow quite rapidly in warmer temperatures. They can even grow in cold weather when fuel tank heaters are installed. Parts of the colony can break off and clog the fuel lines and fuel filters.
It is possible to either kill this growth with a biocide treatment, or eliminate the water, a necessary component of microbial life. There are a number of biocides on the market, which must be handled very carefully. If a biocide is used, it must be added every time a tank is refilled until the problem is fully resolved.
Biocides attack the cell walls of microbes, resulting in lysis, the death of a cell by bursting. The dead cells then gather at the bottom of the fuel tanks and form a sludge; filter clogging will continue after biocide treatment until the sludge has abated.
Given the right conditions, microbes will repopulate the tanks, and re-treatment with biocides will then be necessary. With repeated biocide treatments, microbes can develop resistance to a particular brand; trying another brand may resolve this.
Petrodiesel spilled on a road will stay there until washed away by sufficiently heavy rain, whereas gasoline will quickly evaporate. Diesel spills severely reduce tire grip and have been implicated in many accidents. They are especially dangerous for two-wheeled vehicles.
# Synthetic diesel
Wood, hemp, straw, corn, garbage, food scraps, and sewage sludge may be dried and gasified to produce synthesis gas. After purification, the Fischer-Tropsch process is used to produce synthetic diesel.
This means that synthetic diesel oil may be one route to biomass-based diesel oil. Such processes are often called Biomass-to-Liquids (BTL).
Synthetic diesel may also be produced from natural gas in the gas-to-liquid (GTL) process or from coal in the coal-to-liquid (CTL) process. Such synthetic diesel has 30% lower particulate emissions than conventional diesel (US – California).
# Biodiesel
Biodiesel can be obtained from vegetable oil (vegidiesel/vegifuel) or animal fats (bio-lipids) using transesterification. Biodiesel is a non-fossil-fuel alternative to petrodiesel. It can also be mixed with petrodiesel in any proportion in modern engines, though when it is first used, the solvent properties of the fuel tend to dissolve accumulated deposits and can clog fuel filters. Biodiesel has a higher gel point than petrodiesel but is otherwise comparable. This can be overcome by using a biodiesel/petrodiesel blend, or by installing a fuel heater, although this is only necessary during the colder months. A diesel-biodiesel mix results in lower emissions than either fuel can achieve alone, except for NOx emissions. A small fraction of biodiesel can be used as an additive in low-sulfur formulations of diesel to restore the lubricity lost when the sulfur is removed. In the event of fuel spills, biodiesel is easily washed away with ordinary water and is nontoxic compared with other fuels.
Biodiesel can be produced using kits. Certain kits allow for the processing of used vegetable oil, which can be run through any conventional diesel engine after modifications. The modification needed is the replacement of the fuel lines to the intake and engine, and of all affected rubber fittings in the injection and feed pumps, and so on. This is because biodiesel is an effective solvent and will, over time, leach the softeners out of unsuitable rubber. Synthetic gaskets for fittings and hoses prevent this.
Chemically, most biodiesel consists of alkyl (usually methyl) esters instead of the alkanes and aromatic hydrocarbons of petroleum derived diesel. However, biodiesel has combustion properties very similar to petrodiesel, including combustion energy and cetane ratings. Paraffin biodiesel also exists. Due to the purity of the source, it has a higher quality than petrodiesel.
## Biodiesel emissions
The use of biodiesel-blended diesel fuels in fractions up to 99% results in substantial emission reductions. Sulfur oxide and sulfate emissions, major components of acid rain, are essentially eliminated with pure biodiesel and substantially reduced using biodiesel blends with minor quantities of ULSD petrodiesel. Use of biodiesel also results in substantial reductions of unburned hydrocarbons, carbon monoxide, and particulate matter compared to either gasoline or petrodiesel. Carbon dioxide (CO2) emissions using biodiesel are substantially reduced, on the order of 50% compared to most petrodiesel fuels. The exhaust emissions of particulate matter from biodiesel have been found to be 30 percent lower than overall particulate matter emissions from petrodiesel. The exhaust emissions of total hydrocarbons (a contributing factor in the localized formation of smog and ozone) are up to 93 percent lower for biodiesel than for diesel fuel. Biodiesel emissions of nitrogen oxides can sometimes increase slightly. However, biodiesel's complete lack of sulfur and sulfate emissions allows the use of NOx control technologies, such as AdBlue, that cannot be used with conventional diesel, allowing the management and control of nitrogen oxide emissions.
Biodiesel may also reduce health risks associated with petroleum diesel. Biodiesel emissions showed decreased levels of PAH and nitrated PAH (nPAH) compounds, which have been identified as potential cancer-causing compounds. In recent testing, PAH compounds were reduced by 75 to 85 percent, with the exception of benzo(a)anthracene, which was reduced by roughly 50 percent. Targeted nPAH compounds were also reduced dramatically with biodiesel fuel, with 2-nitrofluorene and 1-nitropyrene reduced by 90 percent, and the rest of the nPAH compounds reduced to only trace levels.
## Aircraft
The first diesel-powered flight of a fixed-wing aircraft took place on the evening of September 18, 1928, at the Packard Motor Company proving grounds, Utica, Michigan, with Captain Lionel M. Woolson and Walter Lees at the controls (the first "official" test flight was taken the next morning). The engine was designed for Packard by Woolson and the aircraft was a Stinson SM1B, X7654. Later that year Charles Lindbergh flew the same aircraft. In 1929 it was flown 621 miles (1,000 km) non-stop from Detroit to Langley, Virginia (near Washington, D.C.). This aircraft is presently owned by Greg Herrick and resides in the Golden Wings Flying Museum near Minneapolis, Minnesota. In 1931, Walter Lees and Fredrick Brossy set the nonstop flight record, flying a Bellanca powered by a Packard diesel for 84 hours and 32 minutes. The Hindenburg was powered by four 16-cylinder diesel engines, each with approximately 1,200 horsepower (890 kW) available in bursts and 850 horsepower (630 kW) available for cruising. Modern diesel engines for propeller-driven aircraft are manufactured by Thielert Aircraft Engines and SMA. These engines are able to run on Jet A fuel, which is similar in composition to automotive diesel and cheaper and more plentiful than the 100-octane low-lead gasoline (avgas) used by the majority of the piston-engine aircraft fleet.
The most-produced aviation diesel engine in history so far has been the Junkers Jumo 205, which, along with its similar developments from the Junkers Motorenwerke, had approximately 1000 examples of the unique opposed piston, two-stroke design powerplant built in the 1930s leading into World War II in Germany.
# Automobiles
The first diesel-engine automobile trip in the United States was completed on January 6, 1930. The trip was from Indianapolis to New York City, a distance of nearly 800 miles (1,300 km). This feat helped to prove the usefulness of the compression ignition engine.
## Automobile racing
In 1931, Dave Evans drove his Cummins Diesel Special to a nonstop finish in the Indianapolis 500, the first time a car had completed the race without a pit stop. That car and a later Cummins Diesel Special are on display at the Indianapolis Motor Speedway Hall of Fame Museum.
In the late 1970s, Mercedes-Benz drove a C111-III with a five-cylinder diesel engine to several new records at the Nardò test track, including averaging 314 km/h (195 mph) for 12 hours and hitting a top speed of 325 km/h (201 mph).
With turbocharged diesel cars becoming stronger in the 1990s, they were entered in touring car racing, and BMW even won the 24 Hours Nürburgring in 1998 with a 320d. After winning the 12 Hours of Sebring in 2006 with their diesel-powered R10 LMP, Audi won the 24 Hours of Le Mans as well. This was the first time a diesel-fueled vehicle had won at Le Mans against cars powered by regular fuel or other alternative fuels such as methanol or bio-ethanol. Competitors such as Porsche had predicted this victory for Audi, as the current FIA and ACO regulations are seen as pro-diesel. French automaker Peugeot entered the diesel-powered Peugeot 908 LMP in the 2007 24 Hours of Le Mans in response to the success of the Audi R10.
In an effort to further demonstrate the potential of diesel power, California-based Gale Banks Engineering designed, built and raced a Cummins-powered pickup at the Bonneville Salt Flats in October 2002. The truck set a top speed of 355 km/h (222 mph) and became the world’s fastest pickup, and almost equally notable, the truck drove to the race towing its own support trailer.
On 23 August 2006, the British-based earthmoving machine manufacturer JCB raced the specially designed JCB Dieselmax car at 563.4 km/h (350.1 mph). The driver was Andy Green. The car was powered by two modified JCB 444 diesel engines.
# Other uses
Poor-quality (high-sulfur) diesel fuel has been used as a palladium extraction agent for the liquid-liquid extraction of this metal from nitric acid mixtures. This has been proposed as a means of separating the fission product palladium from PUREX raffinate derived from used nuclear fuel. In this solvent extraction system, the hydrocarbons of the diesel act as the diluent while the dialkyl sulfides act as the extractant. The extraction operates by a solvation mechanism. So far, neither a pilot plant nor a full-scale plant has been constructed to recover palladium, rhodium or ruthenium from nuclear wastes created by the use of nuclear fuel.
# Health effects
Diesel combustion exhaust is an important source of atmospheric soot and fine particles, a fraction of air pollution implicated in human heart and lung damage. Diesel exhaust also contains nanoparticles, which have been found to damage the cardiovascular system in a mouse model. The study of nanotoxicology is still in its infancy, and the extent of the health and societal effects caused by diesel combustion is unknown. Biodiesel and biodiesel blends result in greatly decreased pollution levels.
# Taxation
Diesel fuel is very similar to the heating oil used in central heating. In Europe, the United States, and Canada, taxes on diesel fuel are higher than on heating oil because of the fuel tax, and in those areas heating oil is marked with fuel dyes and trace chemicals to prevent and detect tax fraud. Similarly, "untaxed" diesel (sometimes called "off-road diesel") is available in the United States for use primarily in agricultural applications, such as fuel for tractors, recreational and utility vehicles, or other non-commercial vehicles that do not use public roads. Additionally, this fuel may have sulfur levels that exceed the limits for road use under the newer 2007 standards. Untaxed diesel is dyed red for identification purposes, and a person found using it for a normally taxed purpose (such as "over-the-road", or driving, use) can be fined US$10,000. In the United Kingdom, Belgium and the Netherlands it is known as red diesel (or gas oil) and is also used in agricultural vehicles, home heating tanks, refrigeration units on vans and trucks containing perishable items (e.g. food, medicine), and for marine craft. Diesel fuel, or marked gas oil, is dyed green in the Republic of Ireland. The term DERV ("diesel engined road vehicle") is used in the UK as a synonym for unmarked road diesel fuel. In India, taxes on diesel fuel are lower than on gasoline, as the majority of the transportation that moves grain and other essential commodities across the country runs on diesel.
In Germany, diesel fuel is taxed lower than gasoline but the annual vehicle tax is higher for diesel vehicles than for gasoline vehicles. This gives an advantage to vehicles that travel longer distances (which is the case for trucks and utility vehicles) because the annual vehicle tax depends only on engine displacement, not on distance driven. The point at which a diesel vehicle becomes less expensive than a comparable gasoline vehicle is around 20,000 km per year (12,500 miles per year) for an average car.
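The break-even logic described above can be sketched as a simple calculation: annual fuel savings grow with distance driven, while the extra vehicle tax is fixed. All prices, consumption figures and tax amounts in the snippet below are made-up placeholders, not actual German rates.

```python
# Break-even annual mileage at which a diesel car's fuel savings outweigh its
# higher annual vehicle tax. ALL figures are made-up placeholders, not actual
# German fuel prices or tax rates.

GASOLINE_PRICE, DIESEL_PRICE = 1.50, 1.30   # EUR per litre (assumed)
GASOLINE_USE, DIESEL_USE = 7.5, 5.5         # litres per 100 km (assumed)
EXTRA_VEHICLE_TAX = 300.0                   # extra EUR per year for the diesel (assumed)

saving_per_km = (GASOLINE_USE * GASOLINE_PRICE - DIESEL_USE * DIESEL_PRICE) / 100.0
break_even_km = EXTRA_VEHICLE_TAX / saving_per_km

print(f"fuel saving per km:  {saving_per_km:.4f} EUR")
print(f"break-even mileage:  {break_even_km:,.0f} km per year")
# With these placeholder inputs the crossover lands near 7,300 km per year;
# substituting actual prices, consumption figures and tax tables is what yields
# estimates around the 20,000 km per year quoted above.
```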
Taxes on biodiesel in the United States vary from state to state. Some states (Texas, for example) have no tax on biodiesel and a reduced tax on biodiesel blends proportional to the amount of biodiesel in the blend, so that B20 fuel is taxed 20% less than pure petrodiesel. Other states, such as North Carolina, tax biodiesel (in any blended configuration) the same as petrodiesel, although they have introduced new incentives for producers and users of all biofuels.
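The Texas-style proration described above is straightforward to express: only the petrodiesel fraction of the blend is taxed. The cents-per-gallon base rate in the sketch below is a placeholder, not the actual Texas rate.

```python
# Prorated fuel tax for biodiesel blends where the biodiesel share is exempt,
# as in the Texas scheme described above. The base rate is a placeholder.

def blend_tax(base_rate_cents_per_gal, biodiesel_fraction):
    """Tax per gallon when only the petrodiesel fraction is taxed."""
    return base_rate_cents_per_gal * (1.0 - biodiesel_fraction)

BASE_RATE = 20.0  # cents per gallon (assumed, for illustration)
for blend, fraction in (("B0 (pure petrodiesel)", 0.00), ("B20", 0.20), ("B99", 0.99)):
    print(f"{blend:22s}: {blend_tax(BASE_RATE, fraction):5.2f} cents per gallon")
# B20 comes out taxed 20% less than pure petrodiesel, as stated above.
```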
# Overview
Diesel or diesel fuel (Template:IPAEng) in general is any fuel used in diesel engines. Production costs are 25-35% less than that of regular gasoline. The most common is a specific fractional distillate of petroleum fuel oil, but alternatives that are not derived from petroleum, such as biodiesel, biomass to liquid (BTL) or gas to liquid (GTL) diesel, are increasingly being developed and adopted. To distinguish these types, petroleum-derived diesel is increasingly called petrodiesel. Ultra-low sulfur diesel (ULSD) is a term used to describe a standard for defining diesel fuel with substantially lowered sulfur contents. As of 2007, almost every diesel fuel available in America and Europe are ULSD type.
# History
## Etymology
The word "diesel" is derived from the German inventor Rudolf Christian Karl Diesel (March 18, 1858 – September 30, 1913) who in 1892 invented the diesel engine.
## Diesel Engine
Template:Expand-section
Diesel engines are a type of internal combustion engine. Rudolf Diesel originally designed the diesel engine to use vegetable oils as a fuel in order to help support agrarian society and to enable independent craftsmen and artisans to compete with large industry.
# Petroleum diesel
Petroleum diesel, or petrodiesel,[1] is produced from petroleum and is a hydrocarbon mixture, obtained in the fractional distillation of crude oil between 200 °C and 350 °C at atmospheric pressure.
The density of petroleum diesel is about 850 grams per litre whereas petrol (gasoline) has a density of about 720 g/L, about 15% less. When burnt, diesel typically releases about 138,700 (Expression error: Missing operand for *. ) per US gallon, whereas gasoline releases 125,000 (Expression error: Missing operand for *. ) per US gallon, about 11% less.[2] Diesel is generally simpler to refine from petroleum than gasoline. The price of diesel traditionally rises during colder months as demand for heating oil rises, which is refined in much the same way. Due to its higher level of pollutants, diesel must undergo additional filtration[citation needed] which contributes to a sometimes higher cost. In many parts of the United States and throughout the UK, diesel may be higher priced than petrol.[3] Reasons for higher priced diesel include the shutdown of some refineries in the Gulf of Mexico, diversion of mass refining capacity to gasoline production, and a recent transfer to ultra-low sulfur diesel (ULSD), which causes infrastructural complications.[4]
Diesel-powered cars generally have a better fuel economy than equivalent gasoline engines and produce less greenhouse gas pollution. Their greater economy is due to the higher energy per-litre content of diesel fuel and the intrinsic efficiency of the diesel engine. While petrodiesel's 15% higher density results in 15% higher greenhouse gas emissions per litre compared to gasoline,[5] the 20–40% better fuel economy achieved by modern diesel-engined automobiles offsets the higher-per-liter emissions of greenhouse gases, and produces 10-20 percent less GHG emissions than comparable gasoline vehicles.[6][7][8] However, the EPA carbon footprint estimates do not include the carbon cost of vehicle manufacture, nor the carbon cost of filtering particulates, sulfates, and nitrates emissions. Biodiesel-powered diesel engines offer substantially improved emission reductions compared to petro-diesel or gasoline-powered engines, while retaining most of the fuel economy advantages over conventional gasoline-powered automobiles.
In the past, diesel fuel contained higher quantities of sulfur. European emission standards and preferential taxation have forced oil refineries to dramatically reduce the level of sulfur in diesel fuels. In the United States, more stringent emission standards have been adopted with the transition to ULSD starting in 2006 and becoming mandatory on June 1, 2010 (see also diesel exhaust). U.S. diesel fuel typically also has a lower cetane number (a measure of ignition quality) than European diesel, resulting in worse cold weather performance and some increase in emissions.[9] This is one reason why U.S. drivers of large trucks have increasingly turned to biodiesel fuels with their generally higher cetane ratings.
High levels of sulfur in diesel are harmful for the environment because they prevent the use of catalytic diesel particulate filters to control diesel particulate emissions, as well as more advanced technologies, such as nitrogen oxide (NOx) adsorbers (still under development), to reduce emissions. However, the process for lowering sulfur also reduces the lubricity of the fuel, meaning that additives must be put into the fuel to help lubricate engines. Biodiesel and biodiesel/petrodiesel blends, with their higher lubricity levels, are increasingly being utilized as an alternative.
The U.S. annual consumption of diesel fuel in 2006 was about 190 billion litres (42 billion imperial gallons or 50 billion US gallons). [1]
## Chemical composition
Petroleum-derived diesel is composed of about 75% saturated hydrocarbons (primarily paraffins including n, iso, and cycloparaffins), and 25% aromatic hydrocarbons (including naphthalenes and alkylbenzenes).[10] The average chemical formula for common diesel fuel is C12H23, ranging from approx. C10H20 to C15H28
## Algae, microbes, and water
There has been much discussion and misinformation about algae in diesel fuel[citation needed]. Algae require sunlight to live and grow. As there is no sunlight in a closed fuel tank, no algae can survive there. However, some microbes can survive there, and can feed on the diesel fuel.
These microbes form a colony that lives at the fuel/water interface. They grow quite rapidly in warmer temperatures. They can even grow in cold weather when fuel tank heaters are installed. Parts of the colony can break off and clog the fuel lines and fuel filters.
It is possible to either kill this growth with a biocide treatment, or eliminate the water, a necessary component of microbial life. There are a number of biocides on the market, which must be handled very carefully. If a biocide is used, it must be added every time a tank is refilled until the problem is fully resolved.
Biocides attack the cell wall of microbes resulting in lysis, the death of a cell by bursting. The dead cells then gather on the bottom of the fuel tanks and form a sludge, filter clogging will continue after biocide treatment until the sludge has abated.
Given the right conditions microbes will repopulate the tanks and re-treatment with biocides will then be necessary. With repetitive biocide treatments microbes can then form resistance to a particular brand.[citation needed] Trying another brand may resolve this.
Petrodiesel spilled on a road will stay there until washed away by sufficiently heavy rain, whereas gasoline will quickly evaporate. Diesel spills severely reduce tire grip and have been implicated in many accidents. They are especially dangerous for two-wheeled vehicles.
# Synthetic diesel
Wood, hemp, straw, corn, garbage, food scraps, and sewage-sludge may be dried and gasified to synthesis gas. After purification the Fischer-Tropsch process is used to produce synthetic diesel.[11]
This means that synthetic diesel oil may be one route to biomass based diesel oil. Such processes are often called Biomass-To-Liquids or BTL.
Synthetic diesel may also be produced out of natural gas in the Gas-to-liquid (GTL) process or out of coal in the Coal-to-liquid (CTL) process. Such synthetic diesel has 30% less particulate emissions than conventional diesel (US- California).[12]
# Biodiesel
Biodiesel can be obtained from vegetable oil (vegidiesel/vegifuel), or animal fats (bio-lipids), using transesterification. Biodiesel is a non-fossil fuel alternative to petrodiesel. It can also be mixed with petrodiesel in any amount in modern engines, though when first using it, the solvent properties of the fuel tend to dissolve accumulated deposits and can clog fuel filters.[citation needed] Biodiesel has a higher gel point than petrodiesel, but is comparable to diesel. This can be overcome by using a biodiesel/petrodiesel blend, or by installing a fuel heater, but this is only necessary during the colder months. A diesel-biodiesel mix results in lower emissions than either can achieve alone,[13] except for NOx emissions. A small fraction of biodiesel can be used as an additive in low-sulfur formulations of diesel to increase the lubricity lost when the sulfur is removed. In the event of fuel spills, biodiesel is easily washed away with ordinary water and is nontoxic compared to other fuels.
Biodiesel can be produced using kits. Certain kits allow for processing of used vegetable oil that can be run through any conventional diesel motor with modifications. The modification needed is the replacement of fuel lines from the intake and motor and all affected rubber fittings in injection and feeding pumps a.s.o. This is because biodiesel is an effective solvent and will replace softeners within unsuitable rubber with itself over time. Synthetic gaskets for fittings and hoses prevent this.
Chemically, most biodiesel consists of alkyl (usually methyl) esters instead of the alkanes and aromatic hydrocarbons of petroleum derived diesel. However, biodiesel has combustion properties very similar to petrodiesel, including combustion energy and cetane ratings. Paraffin biodiesel also exists. Due to the purity of the source, it has a higher quality than petrodiesel.
## Biodiesel emissions
The use of biodiesel blended diesel fuels in fractions up to 99% result in substantial emission reductions. Sulfur oxide and sulfate emissions, major components of acid rain, are essentially eliminated with pure biodiesel and substantially reduced using biodiesel blends with minor quantities of ULSD petrodiesel. Use of biodiesel also results in substantial reductions of unburned hydrocarbons, carbon monoxide, and particulate matter compared to either gasoline or petrodiesel. C02, or carbon monoxide emissions using biodiesel are substantially reduced, on the order of 50% compared to most petrodiesel fuels. The exhaust emissions of particulate matter from biodiesel have been found to be 30 percent lower than overall particulate matter emissions from petrodiesel. The exhaust emissions of total hydrocarbons (a contributing factor in the localized formation of smog and ozone) are up to 93 percent lower for biodiesel than diesel fuel. Biodiesel emissions of nitrogen oxides can sometimes increase slightly. However, biodiesel's complete lack of sulfur and sulfate emissions allows the use of NOx control technologies, such as AdBlue, that cannot be used with conventional diesel, allowing the management and control of nitrous oxide emissions.
Biodiesel may also reduce health risks associated with petroleum diesel. Biodiesel emissions showed decreased levels of PAH and nitrated PAH (nPAH) compounds, which have been identified as potential cancer-causing compounds. In recent testing, PAH compounds were reduced by 75 to 85 percent, with the exception of benzo(a)anthracene, which was reduced by roughly 50 percent. Targeted nPAH compounds were also reduced dramatically with biodiesel fuel, with 2-nitrofluorene and 1-nitropyrene reduced by 90 percent and the remaining nPAH compounds reduced to only trace levels.[14]
## Aircraft
The first diesel-powered flight of a fixed-wing aircraft took place on the evening of September 18, 1928, at the Packard Motor Company proving grounds, Utica, Michigan, with Captain Lionel M. Woolson and Walter Lees at the controls (the first "official" test flight was taken the next morning). The engine was designed for Packard by Woolson and the aircraft was a Stinson SM1B, X7654. Later that year Charles Lindbergh flew the same aircraft. In 1929 it was flown 621 miles (1,000 km) non-stop from Detroit to Langley, Virginia (near Washington, D.C.). This aircraft is presently owned by Greg Herrick and resides in the Golden Wings Flying Museum near Minneapolis, Minnesota. In 1931, Walter Lees and Fredrick Brossy set the nonstop flight record, flying a Bellanca powered by a Packard diesel for 84 hours and 32 minutes. The Hindenburg was powered by four 16-cylinder diesel engines, each with approximately 1,200 horsepower (890 kW) available in bursts and 850 horsepower (630 kW) available for cruising. Modern diesel engines for propeller-driven aircraft are manufactured by Thielert Aircraft Engines and SMA. These engines are able to run on Jet A fuel, which is similar in composition to automotive diesel and cheaper and more plentiful than the 100-octane low-lead gasoline (avgas) used by the majority of the piston-engine aircraft fleet.[citation needed]
The most-produced aviation diesel engine in history so far has been the Junkers Jumo 205. Together with its related developments from the Junkers Motorenwerke, approximately 1,000 examples of this unique opposed-piston, two-stroke powerplant were built in Germany during the 1930s and into World War II.
# Automobiles
The first diesel-engined automobile trip in the United States was completed on January 6, 1930. The trip ran from Indianapolis to New York City, a distance of nearly 800 miles (1,300 km).[citation needed] This feat helped to prove the usefulness of the compression-ignition engine.
## Automobile racing
In 1931, Dave Evans drove his Cummins Diesel Special to a nonstop finish in the Indianapolis 500, the first time a car had completed the race without a pit stop. That car and a later Cummins Diesel Special are on display at the Indianapolis Motor Speedway Hall of Fame Museum.[15]
In the late 1970s, Mercedes-Benz set several records at Nardò with a C111-III powered by a five-cylinder diesel engine, including averaging 314 km/h (195 mph) over 12 hours and reaching a top speed of 325 km/h (201 mph).
With turbocharged diesel cars becoming stronger in the 1990s, they were entered in touring car racing, and BMW even won the 24 Hours Nürburgring in 1998 with a 320d. After winning the 12 Hours of Sebring in 2006 with its diesel-powered R10 LMP, Audi went on to win the 24 Hours of Le Mans as well. This was the first time a diesel-fueled vehicle had won at Le Mans against cars powered by regular fuel or other alternative fuels such as methanol or bio-ethanol. Competitors such as Porsche had predicted this victory for Audi, as the FIA and ACO regulations of the time were seen as pro-diesel. The French automaker Peugeot entered the diesel-powered Peugeot 908 LMP in the 2007 24 Hours of Le Mans in response to the success of the Audi R10.
In an effort to further demonstrate the potential of diesel power, California-based Gale Banks Engineering designed, built and raced a Cummins-powered pickup at the Bonneville Salt Flats in October 2002. The truck set a top speed of 355 km/h (222 mph) and became the world's fastest pickup; almost as notably, it drove to the event towing its own support trailer.
On 23 August 2006, the British-based earthmoving machine manufacturer JCB raced the specially designed JCB Dieselmax car at 563.4 km/h (350.1 mph). The driver was Andy Green. The car was powered by two modified JCB 444 diesel engines.
# Other uses
Poor-quality (high-sulfur) diesel fuel has been used as a palladium extraction agent for the liquid-liquid extraction of this metal from nitric acid mixtures. This has been proposed as a means of separating the fission product palladium from PUREX raffinate derived from used nuclear fuel. In this solvent extraction system the hydrocarbons of the diesel act as the diluent, while the dialkyl sulfides act as the extractant. The extraction operates by a solvation mechanism. So far, neither a pilot plant nor a full-scale plant has been constructed to recover palladium, rhodium or ruthenium from nuclear wastes created by the use of nuclear fuel.[16]
# Health effects
Diesel combustion exhaust is an important source of atmospheric soot and fine particles, a component of the air pollution implicated in human heart and lung damage. Diesel exhaust also contains nanoparticles, which have been found to damage the cardiovascular system in a mouse model.[17] The study of nanotoxicology is still in its infancy, and the extent of the health and societal effects caused by diesel combustion is unknown. Biodiesel and biodiesel blends result in greatly decreased pollution levels.
# Taxation
Diesel fuel is very similar to the heating oil used in central heating. In Europe, the United States, and Canada, taxes on diesel fuel are higher than on heating oil because of the fuel tax, and in those areas heating oil is marked with fuel dyes and trace chemicals to prevent and detect tax fraud. Similarly, "untaxed" diesel (sometimes called "off-road diesel") is available in the United States; it is intended primarily for agricultural applications such as fuel for tractors, recreational and utility vehicles, and other non-commercial vehicles that do not use public roads. Additionally, this fuel may have sulphur levels that exceed the limits for road use under the newer 2007 standards. This untaxed diesel is dyed red for identification purposes,[18] and should a person be found to be using this untaxed diesel fuel for a typically taxed purpose (such as "over-the-road", or driving, use), the user can be fined US$10,000. In the United Kingdom, Belgium and the Netherlands it is known as red diesel (or gas oil), and is also used in agricultural vehicles, home heating tanks, refrigeration units on vans and trucks carrying perishable items (e.g. food, medicine) and for marine craft. In the Republic of Ireland, diesel fuel, or Marked Gas Oil, is dyed green. The term DERV ("diesel-engined road vehicle") is used in the UK as a synonym for unmarked road diesel fuel. In India, taxes on diesel fuel are lower than on gasoline because the majority of the transport that moves grain and other essential commodities across the country runs on diesel.
In Germany, diesel fuel is taxed lower than gasoline but the annual vehicle tax is higher for diesel vehicles than for gasoline vehicles.[citation needed] This gives an advantage to vehicles that travel longer distances (which is the case for trucks and utility vehicles) because the annual vehicle tax depends only on engine displacement, not on distance driven. The point at which a diesel vehicle becomes less expensive than a comparable gasoline vehicle is around 20,000 km per year (12,500 miles per year) for an average car.[citation needed]
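The break-even figure above can be understood with a simple cost comparison: the extra annual vehicle tax has to be recovered through lower per-kilometre fuel costs. All of the specific numbers in the sketch below are made-up placeholders chosen only to illustrate the calculation; they are not actual German prices or tax rates.

```python
# Illustrative break-even mileage for a diesel vs. a gasoline car under
# German-style taxation (higher annual vehicle tax, cheaper fuel per km).
# All numeric inputs below are made-up placeholders, not real figures.
def break_even_km(extra_vehicle_tax: float,
                  gasoline_cost_per_km: float,
                  diesel_cost_per_km: float) -> float:
    """Annual distance at which fuel savings offset the higher annual vehicle tax."""
    saving_per_km = gasoline_cost_per_km - diesel_cost_per_km
    if saving_per_km <= 0:
        raise ValueError("diesel must be cheaper per km for a break-even to exist")
    return extra_vehicle_tax / saving_per_km

if __name__ == "__main__":
    km = break_even_km(extra_vehicle_tax=200.0,      # EUR/year, placeholder
                       gasoline_cost_per_km=0.105,   # EUR/km, placeholder
                       diesel_cost_per_km=0.095)     # EUR/km, placeholder
    print(f"break-even at ~{km:,.0f} km per year")
```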
Taxes on biodiesel in the United States vary from state to state. Some states (Texas, for example) have no tax on biodiesel and a reduced tax on biodiesel blends equivalent to the amount of biodiesel in the blend, so B20 fuel is taxed 20% less than pure petrodiesel.[19] Other states, such as North Carolina, tax biodiesel (in any blended configuration) the same as petrodiesel, although they have introduced new incentives for producers and users of all biofuels.[20] | https://www.wikidoc.org/index.php/Diesel | |
89d5d40a6a593b1bcff33f66968fedcbc15da8f7 | wikidoc | Dimple | Dimple
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]
Dimples are visible indentations of the skin, caused by underlying flesh, which form on some people's cheeks when they smile.
Dimples are genetically inherited and are a dominant trait.[1] Dimples on each cheek are a relatively common occurrence for people with dimples. A rarer form is the single dimple, which occurs on one side of the face only. Anatomically, dimples may be caused by variations in the structure of the facial muscle known as zygomaticus major. Specifically, the presence of a double or bifid zygomaticus major muscle may explain the formation of cheek dimples.[2] This bifid variation of the muscle originates as a single structure from the zygomatic bone. As it travels anteriorly, it then divides with a superior bundle that inserts in the typical position above the corner of the mouth. An inferior bundle inserts below the corner of the mouth.
Dimples are considered attractive in some cultures. Babies commonly have dimples, but sometimes these disappear (or become less noticeable) as the muscles lengthen with age; consequently, dimples are often associated with youth. | https://www.wikidoc.org/index.php/Dimple | |
c33d84ef8a79f6511f2c8e6bff2dc88026aa13b0 | wikidoc | Dioxin | Dioxin
# Overview
Dioxin is the common name for the group of compounds classified as polychlorinated dibenzodioxins (PCDDs). PCDDs, which are members of the family of halogenated organic compounds, have been shown to bioaccumulate in humans and wildlife due to their lipophilic properties, and are known teratogens, mutagens, and suspected human carcinogens.
# Chemical structure
The basic structure of PCDDs comprises two benzene rings joined by a double oxygen bridge. Chlorine atoms are attached to the basic structure at any of 8 different places on the molecule, positions 1–4 and 6–9. There are 75 different types of PCDD congeners (herein, a congener means a related dioxin compound). The toxicity of PCDDs depends on the number and position of the chlorine atoms; only congeners that have chlorines in the 2, 3, 7, and 8 positions have been found to be significantly toxic. Out of the 75 PCDD compounds, only 7 congeners have chlorine atoms in the relevant positions to be considered toxic by the NATO Committee on the Challenges to Modern Society (NATO/CCMS) international toxic equivalent (I-TEQ) scheme.
# Historical perspective
Concentrations of dioxins in nature prior to industrialization, due to natural combustion and geological processes, were generally about three times lower than today [1] [2]. The first intentional synthesis of chlorinated dibenzodioxin dates back to 1872. Today, concentrations of dioxins are found in all humans, with higher levels commonly found in persons living in more industrialized countries. The most toxic dioxin, 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), became well known as a contaminant of Agent Orange, a herbicide used in the Vietnam War [3]. Later, dioxins were found at Times Beach, Missouri, USA [4], Love Canal, New York, USA [5], and Seveso, Italy [6]. More recently, dioxin was in the news with the 2004 poisoning of Ukrainian President Viktor Yushchenko [7].
# Sources of dioxin
The United States Environmental Protection Agency Dioxin Reassessment Report is possibly the most comprehensive review of dioxin, but other countries now have substantial research. Australia, New Zealand and the United Kingdom all have substantial research into body burdens and sources. Tolerable daily, monthly or annual intakes have been set by the World Health Organization and a number of governments. Dioxin enters the general population almost exclusively from ingestion of food, specifically through the consumption of fish, meat, and dairy products since dioxins are fat-soluble and readily climb the food chain [8].
Occupational exposure is an issue for some in the chemical industry, or in the application of chemicals, notably herbicides. Inhalation has been a problem for people living near substantial point sources where emissions are not adequately controlled. In many developed nations there are now emissions regulations which have alleviated some concerns, although the lack of constant sampling of dioxin emissions causes concern about the understatement of emissions. In Belgium, through the introduction of a process called AMESA, constant sampling showed that periodic sampling understated emissions by a factor of 30 to 50 times. Few facilities have constant sampling.
Most controversial is the finding in the United States Environmental Protection Agency's draft assessment that any reference dose that might be set would be far below current average intakes.
Children are passed substantial body burdens by their mothers, and breastfeeding increases the child's body burden. Children's body burdens are often many times above the level implied by tolerable intakes, which are based on body weight. Breast-fed children usually have substantially higher dioxin body burdens than non-breast-fed children until they are about 8 to 10 years old. The WHO still recommends breastfeeding for its other benefits.
Dioxins are produced in small concentrations when organic material is burned in the presence of chlorine, whether the chlorine is present as chloride ions or as organochlorine compounds, so they are widely produced in many contexts. According to the most recent US EPA data the major sources of dioxin are:
- Coal fired utilities
- Metal smelting
- Diesel trucks
- Land application of sewage sludge
- Burning treated wood
- Trash burn barrels
These sources together account for nearly 80% of dioxin emissions.
When the original US EPA inventory of dioxin sources was done in 1987, incineration represented over 80% of known dioxin sources. As a result, US EPA implemented new emissions requirements. These regulations have been very successful in reducing dioxin stack emissions from incinerators. Incineration of municipal solid waste, medical waste, sewage sludge, and hazardous waste together now produce less than 3% of all dioxin emissions.
In incineration, dioxins can also re-form in the atmosphere above the stack as the exhaust gases cool through a temperature window of 600 to 200°C. The most common method of reducing dioxin re-formation or de novo formation is rapid (30 millisecond) quenching of the exhaust gases through that 400°C-wide window [9]. Incinerator emissions of dioxins have been reduced by over 90% as a result of new emissions control requirements. Incineration is now a very minor contributor to dioxin emissions.
Dioxins are also generated in reactions that do not involve burning, such as the bleaching of fibers for paper or textiles, and in the manufacture of chlorinated phenols, particularly when the reaction temperature is not well controlled. Affected compounds include the wood preservative pentachlorophenol and herbicides such as 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T). Higher levels of chlorination require higher reaction temperatures and lead to greater dioxin production. See Agent Orange for more on the contamination problems of the 1960s. Dioxins may also be formed during the photochemical breakdown of the common antimicrobial compound triclosan [10].
Dioxins are also in typical cigarette smoke. Dioxin in cigarette smoke was noted as "understudied" by the US EPA in its "Re-Evaluating Dioxin" (1995). In that same document, the US EPA acknowledged that dioxin in cigarettes is "anthropogenic" (man-made, "not likely in nature"). Nevertheless, the use of chlorine-containing tobacco pesticides and chlorine-bleached cigarette papers remains legal.
Dioxins are present in minuscule amounts in a wide range of materials used by humans, including practically all substances manufactured using plastics, resins, or bleaches. Such materials include tampons and a wide variety of food packaging. The use of these materials means that all Western humans receive at least a very small daily dose of dioxin; however, it is disputed whether such exceptionally tiny exposures have any clinical relevance. It is even debated whether dioxins might have a non-linear dose-response curve, with beneficial health effects in a certain lower dose range, a phenomenon called hormesis.
Dietary sources of dioxin in the United States have been analyzed by the EPA and scientists from other organizations.
# Toxicity
Dioxins are absorbed primarily through dietary intake of fat, as this is where they accumulate in animals and humans. In humans, the highly chlorinated dioxins are stored in fatty tissues and are neither readily metabolized nor excreted. The estimated elimination half-life for highly chlorinated dioxins (4-8 chlorine atoms) in humans ranges from 7.8 to 132 years [11].
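To make the quoted half-lives concrete, the sketch below applies simple first-order elimination, a common simplifying assumption for persistent compounds rather than a claim made in this article, to show what fraction of a body burden would remain after a given number of years.

```python
# First-order elimination: fraction of a dioxin body burden remaining after t years.
def fraction_remaining(years: float, half_life_years: float) -> float:
    """Fraction left assuming simple first-order (exponential) elimination."""
    return 0.5 ** (years / half_life_years)

if __name__ == "__main__":
    for half_life in (7.8, 132.0):          # range quoted for highly chlorinated dioxins
        for years in (10, 25, 50):
            frac = fraction_remaining(years, half_life)
            print(f"half-life {half_life:>5} y, after {years:>2} y: {frac:.1%} remains")
```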
The persistence of a particular dioxin congener in an animal is thought to be a consequence of its structure. It is believed that dioxins with few chlorines, which thus contain hydrogen atoms on adjacent pairs of carbons, can more readily be oxidized by cytochromes P450. The oxidized dioxins can then be more readily excreted rather than stored for a long time.
2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) is the most toxic of the congeners. Other dioxin congeners (or mixtures thereof) are given a toxicity rating from 0 to 1, where TCDD = 1. This toxicity rating is called the Toxic Equivalence Factor, or TEF. TEFs are consensus values and, because of the strong species dependence for toxicity, are listed separately for mammals, fish, and birds. TEFs for mammalian species are generally applicable to human risk calculations. The TEFs have been developed from detailed assessment of literature data to facilitate both risk assessment and regulatory control [12]. Many other compounds may also have dioxin-like properties, particularly non-ortho PCBs, some of which can have TEFs as high as 0.1.
The total dioxin toxic equivalence (TEQ) value expresses the toxicity as if the mixture were pure TCDD. The TEQ approach and current TEFs have been adopted internationally as the most appropriate way to estimate the potential health risks of mixtures of dioxins. Recent data suggest that this type of linear scaling factor may not be the most appropriate treatment for complex mixtures of dioxins; further research into non-linear toxicity models is required to substantiate this hypothesis.
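As a small illustration of how TEFs are applied, the TEQ of a mixture is the TEF-weighted sum of the individual congener concentrations. In the sketch below, only the TEF of 1.0 for TCDD comes from the text; the other TEF values, the congener names and the sample concentrations are made-up placeholders.

```python
# Toxic equivalence (TEQ) of a mixture: sum of concentration_i * TEF_i.
# Non-TCDD TEF values and all concentrations below are illustrative placeholders.
TEF = {
    "2,3,7,8-TCDD": 1.0,   # reference congener (TEF = 1 by definition)
    "congener_A":   0.1,   # placeholder TEF
    "congener_B":   0.01,  # placeholder TEF
}

def total_teq(concentrations_pg_per_g: dict) -> float:
    """Return the TEQ (pg TCDD-equivalents per g) of a congener mixture."""
    return sum(conc * TEF[name] for name, conc in concentrations_pg_per_g.items())

if __name__ == "__main__":
    sample = {"2,3,7,8-TCDD": 2.0, "congener_A": 15.0, "congener_B": 120.0}  # pg/g, made up
    print(f"Total TEQ: {total_teq(sample):.2f} pg TEQ/g")
```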
Dioxins and other persistent organic pollutants (POPs) are subject to the Stockholm Convention. The treaty obliges signatories to take measures to eliminate where possible, and minimize where not possible to eliminate, all sources of dioxin.
# Health effects in humans
Dioxins build up primarily in fatty tissues over time (bioaccumulate), so even small exposures may eventually reach dangerous levels. In 1994, the EPA reported that dioxin is a probable carcinogen, but noted that non-cancer effects (on reproduction, sexual development and the immune system) may pose an even greater threat to human health. TCDD, the most toxic of the dibenzodioxins, has a half-life of approximately 8 years in humans, although at high concentrations the elimination rate is enhanced by metabolism [13]. The health effects of dioxins are mediated by their action on a cellular receptor, the aryl hydrocarbon receptor (AhR) [14].
Dioxins also accumulate in food chains in a fashion similar to other chlorinated compounds (biomagnification). This means that even small concentrations in contaminated water can be concentrated up a food chain to dangerous levels, due to the long biological half-life and low water solubility of dioxins.
Exposure to high levels of dioxin in humans causes a severe form of persistent acne, known as chloracne [15]. Other effects in humans may include:
- Developmental abnormalities in the enamel of children's teeth [16] [17].
- Central and Peripheral Nervous System pathology[18]
- Thyroid disorders[19]
- Damage to the Immune systems [20].
- Endometriosis[21]
- Diabetes[22]
# Health effects in other animals
While it has been difficult to prove that dioxins cause specific health effects in humans due to the lack of controlled dose experiments, studies in animals have shown that dioxin causes a wide variety of toxic effects. In particular, TCDD has been shown to be teratogenic, mutagenic, carcinogenic, immunotoxic, and hepatotoxic. Furthermore, alterations in multiple endocrine and growth factor systems have been reported. The most sensitive effects, observed in multiple species, appear to be developmental, including effects on the developing immune, nervous, and reproductive systems [23]. These effects are caused at body burdens close to those reported in humans.
Among the animals for which TCDD toxicity has been studied, there is strong evidence for the following effects:
- Birth defects (teratogenicity)
- Cancer (including neoplasms in the mammalian lung, oral/nasal cavities, thyroid and adrenal glands, and liver, squamous cell carcinoma, and various animal hepatocarcinomas)
- Hepatotoxicity (liver toxicity)
- Endocrine disruption
- Immunosuppression
- Learning [23]
# Studies of dioxin's effects in Vietnam
US veterans' groups and Vietnamese groups, including the Vietnamese government, have convened scientific studies to explore their belief that dioxins were responsible for a host of disorders, including tens of thousands of birth defects in children, amongst Vietnam veterans as well as an estimated one million Vietnamese, through their exposure to Agent Orange during the Vietnam War, which was found to be highly contaminated with TCDD. Several exposure studies showed that some US Vietnam Veterans who were exposed to Agent Orange had serum TCDD levels up to 600 ppt (parts per trillion) many years after they left Vietnam, compared to general population levels of approximately 1 to 2 ppt of TCDD. In Vietnam, TCDD levels up to 1,000,000 ppt have been found in soil and sediments from Agent Orange contaminated areas 3 to 4 decades after spraying. In addition, elevated levels have been measured in food and wildlife in Vietnam [36].
The most recent study, paid for by the National Academy of Sciences, was released in an April 2003 report. This report is currently (March 2007) being revised for release again later in 2007.
The Centers for Disease Control found that dioxin levels in Vietnam veterans [37] were in no way atypical when compared against the rest of the population. The only exception existed for those who directly handled Agent Orange. These were members of Operation Ranch Hand. Long-term studies of the members of Ranch Hand have thus far uncovered a possibility of elevated risks of diabetes.
# Dioxin exposure incidents
- In 1949, at a herbicide production plant for 2,4,5-T in Nitro, West Virginia, 240 people were affected when a relief valve opened [38].
- In 1963, a dioxin cloud escaped after an explosion in a Philips-Duphar plant (now Solvay Group) near Amsterdam. In the 1960s, Philips-Duphar produced 2,250 tonnes of Agent Orange for the US Army.
- In 1976, large amounts of dioxin were released in an industrial accident at Seveso, although no immediate human fatalities or birth defects occurred [39] [40] [41].
- In 1978, dioxin was one of the contaminants that forced the evacuation of the Love Canal neighborhood of Niagara Falls, New York. Dioxin also caused the 1983 evacuation of Times Beach, Missouri.
- In the 1960s, parts of the Spolana chemical plant in Neratovice, Czechoslovakia, were heavily contaminated by dioxins, when the herbicide 2,4,5-T (also a component of Agent Orange) was produced there. Workers in this factory were exposed to high concentrations of dioxins at that time. Dozens of them fell seriously ill. A possibly large amount of dioxins was flushed from the factory into the Labe river during the 2002 European flood. No direct consequences of this incident have thus far been recorded.
- From 1982 through to 1985, Times Beach, Missouri, was bought out and evacuated under order of the United States Environmental Protection Agency due to high levels of dioxin in the soil [42]. The town eventually disincorporated [43].
- In December 1991, an electrical explosion caused dioxin (created from the oxidation of polychlorinated biphenyl) to spread through four residence halls and two other buildings on the college campus of SUNY New Paltz.
- In May 1999, there was a dioxin crisis in Belgium: quantities of dioxin had entered the food chain through contaminated animal feed. 7,000,000 chickens and 60,000 pigs had to be slaughtered. This scandal was followed by a landslide change in government in the elections one month later.
- On September 11, 2001, explosions released massive amounts of dust into the air. The air was measured for dioxin from September 23, 2001, to November 21, 2001, and the levels were reported to be "likely the highest ambient concentration that have ever been reported" in history. The United States Environmental Protection Agency report dated October 2002 and released in December 2002, titled "Exposure and Human Health Evaluation of Airborne Pollution from the World Trade Center Disaster" and authored by the EPA Office of Research and Development in Washington, states that dioxin levels recorded at a monitoring station on Park Row near City Hall Park in New York between October 12 and 29, 2001, averaged 5.6 parts per trillion, or nearly six times the highest dioxin level ever recorded in the U.S. Dioxin levels in the rubble of the World Trade Center were much higher, with concentrations ranging from 10 to 170 parts per trillion. The report did not measure the toxicity of indoor air.
- In a 2001 case study [15], physicians reported clinical changes in a 30-year-old woman who had been exposed to a massive dose (144,000 pg/g blood fat) of dioxin, equal to 16,000 times the normal body level; this is the highest dose of dioxin ever recorded in a human. She suffered from chloracne, nausea, vomiting, epigastric pain, loss of appetite, leukocytosis, anemia, amenorrhoea and thrombocytopenia. However, other notable laboratory tests, such as immune function tests, were relatively normal. The same study also covered a second subject who had received a dose equivalent to 2,900 times the normal level, who apparently suffered no notable negative effects other than chloracne. These patients were given olestra to accelerate dioxin elimination [44].
- In 2004, in a notable individual case of dioxin poisoning, the Ukrainian politician Viktor Yushchenko was exposed to the second-largest measured dose of dioxins, according to the reports of the physicians responsible for diagnosing him. This is the first known case of a single high dose of TCDD dioxin poisoning, and it was diagnosed only after a toxicologist recognized the symptoms of chloracne while viewing television news coverage of his condition [45].
- In the early 2000s, residents of the city of New Plymouth, New Zealand, reported many illnesses among people living around and working at the Dow Chemical plant. This plant ceased production of 2,4,5-T in 1987.
- 1,995 people are suing DuPont, claiming that dioxin emissions from its plant in DeLisle, Mississippi, caused their cancers, illnesses or loved ones' deaths. In August 2005, Glenn Strong, an oyster fisherman with the rare blood cancer multiple myeloma, was awarded $14 million from DuPont. In another case, parents claim dioxin pollution caused the death of their 8-year-old daughter; the trial is expected to begin in May 2007. DuPont's DeLisle plant is one of three titanium dioxide facilities (along with Edgemoor, DE, and New Johnsonville, TN) that are the largest producers of dioxin in the country, according to the US EPA's Toxic Release Inventory. | https://www.wikidoc.org/index.php/Dioxin | 
8789f6e00bd122708d5e51581b06e1519c65ed3f | wikidoc | Oxygen | Oxygen
# Overview
Oxygen is the element with atomic number 8 and is represented by the symbol O. It is a member of the chalcogen group on the periodic table, and is a highly reactive nonmetallic period 2 element that readily forms compounds (notably oxides) with almost all other elements. At standard temperature and pressure, two atoms of the element bind to form dioxygen, a colorless, odorless, tasteless diatomic gas with the formula O2. Oxygen is the third most abundant element in the universe by mass after hydrogen and helium, and the most abundant element by mass in the Earth's crust. Oxygen constitutes 88.8% of the mass of water and 20.9% of the volume of air.
All major classes of structural molecules in living organisms, such as proteins, carbohydrates, and fats, contain oxygen, as do the major inorganic compounds that comprise animal shells, teeth, and bone. Oxygen in the form of O2 is produced from water by cyanobacteria, algae and plants during photosynthesis and is used in cellular respiration for all complex life. Oxygen is toxic to anaerobic organisms, which were the dominant form of early life on Earth until O2 began to accumulate in the atmosphere 2.5 billion years ago. Another form (allotrope) of oxygen, ozone (O3), helps protect the biosphere from ultraviolet radiation with the high-altitude ozone layer, but is a pollutant near the surface where it is a by-product of smog.
Oxygen was independently discovered by Joseph Priestley and Carl Wilhelm Scheele in the 1770s, but Priestley is usually given priority because he published his findings first. The name oxygen was coined in 1777 by Antoine Lavoisier, whose experiments with oxygen helped to discredit the then-popular phlogiston theory of combustion and corrosion. Oxygen is produced industrially by fractional distillation of liquefied air, use of zeolites to remove carbon dioxide and nitrogen from air, electrolysis of water and other means. Uses of oxygen include the production of steel, plastics and textiles; rocket propellant; oxygen therapy; and life support in aircraft, submarines, spaceflight and diving.
# Characteristics
## Structure
At standard temperature and pressure, oxygen is a colorless, odorless gas with the molecular formula O2, in which the two oxygen atoms are chemically bonded to each other with a spin triplet electron configuration. This bond has a bond order of two, and is often over-simplified in description as a double bond.
Triplet oxygen is the ground state of the O2 molecule. The electron configuration of the molecule has two unpaired electrons occupying two degenerate molecular orbitals. These orbitals are classified as antibonding (weakening the bond order from three to two), so the diatomic oxygen bond is weaker than the triple bond of diatomic nitrogen, in which all of the bonding molecular orbitals are filled while some of the antibonding orbitals are not.
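A compact way to see the bond order of two is to count valence electrons in the molecular-orbital configuration; this is standard MO-theory bookkeeping rather than an additional claim about O2.

```latex
% Valence molecular-orbital configuration of O2 and the resulting bond order
% (requires \usepackage{amsmath})
\[
\mathrm{O_2}:\ (\sigma_{2s})^2(\sigma_{2s}^{*})^2(\sigma_{2p})^2(\pi_{2p})^4(\pi_{2p}^{*})^2,
\qquad
\text{bond order} = \frac{N_{\text{bonding}} - N_{\text{antibonding}}}{2} = \frac{8 - 4}{2} = 2
\]
```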
In the normal triplet form, O2 molecules are paramagnetic (they become weakly magnetized in the presence of a magnetic field) because of the spin magnetic moments of the unpaired electrons in the molecule and the negative exchange energy between neighboring O2 molecules. Liquid oxygen is attracted to a magnet to a sufficient extent that, in laboratory demonstrations, a bridge of liquid oxygen may be supported against its own weight between the poles of a powerful magnet.
Singlet oxygen, a name given to several higher-energy species of molecular O2 in which all the electron spins are paired, is much more reactive towards common organic molecules. In nature, singlet oxygen is commonly formed from water during photosynthesis, using the energy of sunlight. It is also produced in the troposphere by the photolysis of ozone by light of short wavelength, and by the immune system as a source of active oxygen. Carotenoids in photosynthetic organisms (and possibly also in animals) play a major role in absorbing energy from singlet oxygen and converting it to the unexcited ground state before it can cause harm to tissues.
## Allotropes
The common allotrope of elemental oxygen on Earth is called dioxygen, O2. It has a bond length of 121 pm and a bond energy of 498 kJ·mol−1. This is the form that is used by complex forms of life, such as animals, in cellular respiration (see Biological role) and is the form that is a major part of the Earth's atmosphere (see Occurrence). Other aspects of O2 are covered in the remainder of this article.
Trioxygen (O3) is usually known as ozone and is a very reactive allotrope of oxygen that is damaging to lung tissue. Ozone is produced in the upper atmosphere when O2 combines with atomic oxygen made by the splitting of O2 by ultraviolet (UV) radiation. Since ozone absorbs strongly in the UV region of the spectrum, it functions as a protective radiation shield for the planet (see ozone layer). Near the earth's surface, however, it is a pollutant formed as a by-product of automobile exhaust.
The metastable molecule tetraoxygen (O4) was discovered in 2001, and was assumed to exist in one of the six phases of solid oxygen. It was proven in 2006 that this phase, created by pressurizing O2 to 20 GPa, is in fact a rhombohedral O8 cluster. This cluster has the potential to be a much more powerful oxidizer than either O2 or O3 and may therefore be used in rocket fuel. A metallic phase, discovered in 1990, forms when solid oxygen is subjected to a pressure above 96 GPa, and it was shown in 1998 that at very low temperatures this phase becomes superconducting.
## Physical properties
Oxygen is more soluble in water than nitrogen; water contains approximately 1 molecule of O2 for every 2 molecules of N2, compared to an atmospheric ratio of approximately 1:4. The solubility of oxygen in water is temperature-dependent, and about twice as much (14.6 mg·L−1) dissolves at 0 °C as at 20 °C (7.6 mg·L−1). At 25 °C and 1 atm of air, freshwater contains about 6.04 milliliters (mL) of oxygen per liter, whereas seawater contains about 4.95 mL per liter. At 5 °C the solubility increases to 9.0 mL (50% more than at 25 °C) per liter for freshwater and 7.2 mL (45% more) per liter for seawater.
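The temperature dependence quoted above can be checked with a few lines of arithmetic; the sketch below simply recomputes the ratios from the figures given in this paragraph and assumes no additional data.

```python
# Recompute the solubility comparisons quoted above from the stated figures.
pure_water_mg_per_L = {0: 14.6, 20: 7.6}   # dissolved O2 in pure water, mg/L
fresh_mL_per_L = {25: 6.04, 5: 9.0}        # O2 from air at 1 atm, freshwater, mL/L
sea_mL_per_L = {25: 4.95, 5: 7.2}          # O2 from air at 1 atm, seawater, mL/L

print(f"0 C vs 20 C: {pure_water_mg_per_L[0] / pure_water_mg_per_L[20]:.2f}x as much O2")
print(f"freshwater, 5 C vs 25 C: {fresh_mL_per_L[5] / fresh_mL_per_L[25] - 1:.0%} more")
print(f"seawater,   5 C vs 25 C: {sea_mL_per_L[5] / sea_mL_per_L[25] - 1:.0%} more")
```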
Oxygen condenses at 90.20 K (−182.95 °C, −297.31 °F) and freezes at 54.36 K (−218.79 °C, −361.82 °F). Both liquid and solid O2 are clear substances with a light sky-blue color caused by absorption in the red (in contrast with the blue color of the sky, which is due to Rayleigh scattering of blue light). High-purity liquid O2 is usually obtained by the fractional distillation of liquefied air; liquid oxygen may also be produced by condensation out of air, using liquid nitrogen as a coolant. It is a highly reactive substance and must be segregated from combustible materials.
## Isotopes and stellar origin
Naturally occurring oxygen is composed of three stable isotopes, 16O, 17O, and 18O, with 16O being the most abundant (99.762% natural abundance). Oxygen isotopes range in mass number from 12 to 28.
Most 16O is synthesized at the end of the helium fusion process in stars but some is made in the neon burning process. 17O is primarily made by the burning of hydrogen into helium during the CNO cycle, making it a common isotope in the hydrogen burning zones of stars. Most 18O is produced when 14N (made abundant from CNO burning) captures a 4He nucleus, making 18O common in the helium-rich zones of stars.
Fourteen radioisotopes have been characterized, the most stable being 15O with a half-life of 122.24 seconds (s) and 14O with a half-life of 70.606 s. All of the remaining radioactive isotopes have half-lives that are less than 27 s and the majority of these have half-lives that are less than 83 milliseconds. The most common decay mode of the isotopes lighter than 16O is electron capture to yield nitrogen, and the most common mode for the isotopes heavier than 18O is beta decay to yield fluorine.
## Occurrence
Oxygen is the most abundant chemical element, by mass, in our biosphere, air, sea and land.
Oxygen is the third most abundant chemical element in the universe, after hydrogen and helium. About 0.9% of the Sun's mass is oxygen. Oxygen constitutes 49.2% of the Earth's crust by mass and is the major component of the world's oceans (88.8% by mass). It is the second most common component of the Earth's atmosphere, taking up 21.0% of its volume and 23.1% of its mass (some 10^15 tonnes). Earth is unusual among the planets of the Solar System in having such a high concentration of oxygen gas in its atmosphere: Mars (with 0.1% O2 by volume) and Venus have far lower concentrations. However, the O2 surrounding these other planets is produced solely by ultraviolet radiation impacting oxygen-containing molecules such as carbon dioxide.
The unusually high concentration of oxygen on Earth is the result of the oxygen cycle. This biogeochemical cycle describes the movement of oxygen within and between its three main reservoirs on Earth: the atmosphere, the biosphere, and the lithosphere. The main driving factor of the oxygen cycle is photosynthesis, which is responsible for modern Earth's atmosphere. Because of the vast amounts of oxygen gas available in the atmosphere, even if all photosynthesis were to cease completely, it would take all the oxygen-consuming processes at the present rate at least another 5,000 years to strip all the O2 from the atmosphere.
Free oxygen also occurs in solution in the world's water bodies. The increased solubility of O2 at lower temperatures (see Physical properties) has important implications for ocean life, as polar oceans support a much higher density of life due to their higher oxygen content. Polluted water may have reduced amounts of O2 in it, depleted by decaying algae and other biomaterials (see eutrophication). Scientists assess this aspect of water quality by measuring the water's biochemical oxygen demand, or the amount of O2 needed to restore it to a normal concentration.
# Biological role
## Photosynthesis and respiration
In nature, free oxygen is produced by the light-driven splitting of water during oxygenic photosynthesis. Green algae and cyanobacteria in marine environments provide about 70% of the free oxygen produced on earth and the rest is produced by terrestrial plants.
A simplified overall formula for photosynthesis is:

6 CO2 + 6 H2O + photons → C6H12O6 + 6 O2
Photolytic oxygen evolution occurs in the thylakoid membranes of photosynthetic organisms and requires the energy of four photons. Many steps are involved, but the result is the formation of a proton gradient across the thylakoid membrane, which is used to synthesize ATP via photophosphorylation. The O2 remaining after oxidation of the water molecule is released into the atmosphere.
Molecular dioxygen, O2, is essential for cellular respiration in all aerobic organisms. Oxygen is used in mitochondria to help generate adenosine triphosphate (ATP) during oxidative phosphorylation. The reaction for aerobic respiration is essentially the reverse of photosynthesis and is simplified as:

C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + energy
In vertebrates, O2 diffuses through membranes in the lungs and into red blood cells. Hemoglobin binds O2, changing its color from bluish red to bright red. Other animals use hemocyanin (molluscs and some arthropods, including spiders and lobsters) or hemerythrin (some marine invertebrates). A liter of blood can dissolve 200 cc of O2.
Reactive oxygen species, such as superoxide ion (O2−) and hydrogen peroxide (H2O2), are dangerous by-products of oxygen use in organisms. Parts of the immune system of higher organisms, however, create peroxide, superoxide, and singlet oxygen to destroy invading microbes. Reactive oxygen species also play an important role in the hypersensitive response of plants against pathogen attack.
An adult human at rest inhales 1.8 to 2.4 grams of oxygen per minute. This amounts to more than 6 billion tonnes of oxygen inhaled by humanity per year.
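Scaling the per-person figure above gives the worldwide total. The sketch below assumes a world population of roughly 6.5 billion, a figure not stated in this article.

```python
# Order-of-magnitude check: per-person O2 intake scaled to humanity as a whole.
grams_per_minute = 2.0                 # midpoint of the 1.8-2.4 g/min range above
minutes_per_year = 60 * 24 * 365
per_person_tonnes = grams_per_minute * minutes_per_year / 1e6    # grams -> tonnes
world_population = 6.5e9               # assumed value (not from this article)

total_tonnes = per_person_tonnes * world_population
print(f"~{per_person_tonnes:.1f} t per person per year, "
      f"~{total_tonnes / 1e9:.1f} billion t for humanity")       # ~1.1 t and ~6.8 billion t
```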
## Build-up in the atmosphere
Free oxygen gas was almost nonexistent in Earth's atmosphere before photosynthetic archaea and bacteria evolved. Free oxygen first appeared in significant quantities during the Paleoproterozoic era (between 2.5 and 1.6 billion years ago). At first, the oxygen combined with dissolved iron in the oceans to form banded iron formations. Free oxygen began to outgas from the oceans 2.7 billion years ago, reaching 10% of its present level around 1.7 billion years ago.
The presence of large amounts of dissolved and free oxygen in the oceans and atmosphere may have driven most of the anaerobic organisms then living to extinction during the oxygen catastrophe about 2.4 billion years ago. However, cellular respiration using O2 enables aerobic organisms to produce much more ATP than anaerobic organisms, helping the former to dominate Earth's biosphere. Photosynthesis and cellular respiration of O2 allowed for the evolution of eukaryotic cells and ultimately complex multicellular organisms such as plants and animals.
Since the beginning of the Cambrian period 540 million years ago, O2 levels have fluctuated between 15% and 30% by volume. Towards the end of the Carboniferous period (about 300 million years ago) atmospheric O2 levels reached a maximum of 35% by volume, allowing insects and amphibians to grow much larger than today's species. Human activities, including the burning of 7 billion tonnes of fossil fuels each year, have had very little effect on the amount of free oxygen in the atmosphere. At the current rate of photosynthesis it would take about 2,000 years to regenerate the entire O2 in the present atmosphere.
# History
## Early experiments
One of the first known experiments on the relationship between combustion and air was conducted by the second century BCE Greek writer on mechanics, Philo of Byzantium. In his work Pneumatica, Philo observed that inverting a vessel over a burning candle and surrounding the vessel's neck with water resulted in some water rising into the neck.
Philo incorrectly surmised that parts of the air in the vessel were converted into the classical element fire and thus were able to escape through pores in the glass. Many centuries later Leonardo da Vinci built on Philo's work by observing that a portion of air is consumed during combustion and respiration.
In the late 17th century, Robert Boyle proved that air is necessary for combustion. English chemist John Mayow refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus or just nitroaereus.
In one experiment he found that placing either a mouse or a lit candle in a closed container over water caused the water to rise and replace one-fourteenth of the air's volume before extinguishing the subjects.
From this he surmised that nitroaereus is consumed in both respiration and combustion.
Mayow observed that antimony increased in weight when heated, and inferred that the nitroaereus must have combined with it. He also thought that the lungs separate nitroaereus from air and pass it into the blood and that animal heat and muscle movement result from the reaction of nitroaereus with certain substances in the body. Accounts of these and other experiments and ideas were published in 1668 in his work Tractatus duo in the tract "De respiratione".
## Phlogiston theory
Robert Hooke, Ole Borch, Mikhail Lomonosov, and Pierre Bayen all produced oxygen in experiments in the 17th and 18th centuries, but none of them recognized it as an element. This may have been in part due to the prevalence of the philosophy of combustion and corrosion called the phlogiston theory, which was then the favored explanation of those processes.
Established in 1667 by the German alchemist J. J. Becher, and modified by the chemist Georg Ernst Stahl by 1731,
phlogiston theory stated that all combustible materials were made of two parts. One part, called phlogiston, was given off when the substance containing it was burned, while the dephlogisticated part was thought to be its true form, or calx.
Highly combustible materials that leave little residue, such as wood or coal, were thought to be made mostly of phlogiston, whereas non-combustible substances that corrode, such as iron, contained very little. Air did not play a role in phlogiston theory, nor were any initial quantitative experiments conducted to test the idea; instead, it was based on observations of what happens when something burns: most common objects appear to become lighter and seem to lose something in the process. The fact that a substance like wood actually gains overall weight in burning was hidden by the buoyancy of the gaseous combustion products. Indeed, one of the first clues that the phlogiston theory was incorrect was that metals, too, gain weight in rusting (when they were supposedly losing phlogiston).
## Discovery
Oxygen was first discovered by Swedish pharmacist Carl Wilhelm Scheele. He had produced oxygen gas by heating mercuric oxide and various nitrates by about 1772. Scheele called the gas 'fire air' because it was the only known supporter of combustion. He wrote an account of this discovery in a manuscript he titled Treatise on Air and Fire, which he sent to his publisher in 1775. However, that document was not published until 1777.
In the meantime, on August 1 1774, an experiment conducted by the British clergyman Joseph Priestley focused sunlight on mercuric oxide (HgO) inside a glass tube, liberating a gas he named 'dephlogisticated air'. He noted that candles burned brighter in the gas and that a mouse was more active and lived longer while breathing it. After breathing the gas himself, he wrote: "The feeling of it to my lungs was not sensibly different from that of common air, but I fancied that my breast felt peculiarly light and easy for some time afterwards." Priestley published his findings in 1775 in a paper titled "An Account of Further Discoveries in Air", which was included in the second volume of his book Experiments and Observations on Different Kinds of Air. Because he had published his findings first, Priestley is usually given priority in the discovery.
The noted French chemist Antoine Laurent Lavoisier later claimed to have discovered the new substance independently. However, Priestley visited Lavoisier in October 1774 and told him about his experiment and how he liberated the new gas. Scheele also posted a letter to Lavoisier on September 30 1774 that described his own discovery of the previously-unknown substance, but Lavoisier never acknowledged receiving it (a copy of the letter was found in Scheele's belongings after his death).
## Lavoisier's contribution
What Lavoisier did indisputably do (although this was disputed at the time) was to conduct the first adequate quantitative experiments on oxidation and give the first correct explanation of how combustion works. He used these and similar experiments, all started in 1774, to discredit the phlogiston theory and to prove that the substance discovered by Priestley and Scheele was a chemical element.
In one experiment, Lavoisier observed that there was no overall increase in weight when tin and air were heated in a closed container. He noted that air rushed in when he opened the container, which indicated that part of the trapped air had been consumed. He also noted that the tin had increased in weight and that the increase was the same as the weight of the air that rushed back in. This and other experiments on combustion were documented in his book Sur la combustion en général, which was published in 1777. In that work, he proved that air is a mixture of two gases: 'vital air', which is essential to combustion and respiration, and azote (from the Greek for "lifeless"), which did not support either.
Lavoisier renamed 'vital air' to oxygène in 1777 from the Greek roots ὀξύς (oxys) (acid, literally "sharp," from the taste of acids) and -γενής (-genēs) (producer, literally begetter), because he mistook oxygen to be a constituent of all acids. Azote later became nitrogen in English, although it has kept the name in French and several other European languages.
Oxygen entered the English language despite opposition by English scientists and the fact that Priestley had priority. This is partly due to a poem praising the gas titled "Oxygen" in the popular book The Botanic Garden (1791) by Erasmus Darwin, grandfather of Charles Darwin.
## Later history
John Dalton's original atomic hypothesis assumed that all elements were monoatomic and that the atoms in compounds would normally have the simplest atomic ratios with respect to one another. For example, Dalton assumed that water's formula was HO, giving the atomic mass of oxygen as 8 times that of hydrogen, instead of the modern value of about 16. In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen; and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the assumption of diatomic elemental molecules.
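The consequence of Dalton's assumption can be seen with a short calculation: water's oxygen-to-hydrogen mass ratio of roughly 8:1 yields an atomic mass of 8 under the HO formula, but 16 once two hydrogens per oxygen are assumed. The snippet below is only an illustration of that reasoning.

```python
# How the assumed formula of water changes the inferred atomic mass of oxygen.
mass_ratio_O_to_H = 8.0   # approximate oxygen:hydrogen mass ratio measured in water

mass_O_if_HO = mass_ratio_O_to_H * 1    # Dalton's formula HO: one H per O -> atomic mass 8
mass_O_if_H2O = mass_ratio_O_to_H * 2   # correct formula H2O: two H per O -> atomic mass 16

print(mass_O_if_HO, mass_O_if_H2O)      # 8.0 16.0 (relative to hydrogen = 1)
```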
By the late 19th century scientists realized that air could be liquefied, and its components isolated, by compressing and cooling it. Using a cascade method, Swiss chemist and physicist Raoul Pierre Pictet evaporated liquid sulfur dioxide in order to liquefy carbon dioxide, which in turn was evaporated to cool oxygen gas enough to liquefy it. He sent a telegram on December 22 1877 to the French Academy of Sciences in Paris announcing his discovery of liquid oxygen. Just two days later, French physicist Louis Paul Cailletet announced his own method of liquefying molecular oxygen. Only a few drops of the liquid were produced in either case so no meaningful analysis could be conducted.
In 1891 Scottish chemist James Dewar was able to produce enough liquid oxygen to study. The first commercially-viable process for producing liquid oxygen was independently developed in 1895 by German engineer Carl von Linde and British engineer William Hampson. Both men lowered the temperature of air until it liquefied and then distilled the component gases by boiling them off one at a time and capturing them. Later, in 1901, oxyacetylene welding was demonstrated for the first time by burning a mixture of acetylene and compressed O2. This method of welding and cutting metal later became common.
In 1923 the American scientist Robert H. Goddard became the first person to develop a rocket engine that burned liquid propellant; the engine used gasoline for fuel and liquid oxygen as the oxidizer. Goddard successfully flew a small liquid-fueled rocket 56 m at 97 km/h on March 16 1926 in Auburn, Massachusetts, USA.
# Industrial production
Two major methods are employed to produce the 100 million tonnes of O2 extracted from air for industrial uses annually. The most common method is to fractionally-distill liquefied air into its various components, with nitrogen N2 distilling as a vapor while oxygen O2 is left as a liquid.
The other major method of producing O2 gas involves passing a stream of clean, dry air through one bed of a pair of identical zeolite molecular sieves, which adsorbs the nitrogen and delivers a gas stream that is 90% to 93% O2. Simultaneously, nitrogen gas is released from the other, nitrogen-saturated zeolite bed by reducing the chamber operating pressure and diverting part of the oxygen gas from the producer bed through it, in the reverse direction of flow. After a set cycle time the operation of the two beds is interchanged, allowing a continuous supply of gaseous oxygen to be pumped through a pipeline. This is known as pressure swing adsorption. Oxygen gas is increasingly obtained by these non-cryogenic technologies (see also the related vacuum swing adsorption).
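The alternating two-bed cycle described above can be pictured with a minimal scheduling sketch; the 60-second switch-over interval and the purity range are illustrative assumptions rather than plant data.

```python
# Toy sketch of the two-bed pressure swing adsorption (PSA) cycle described above.
from itertools import cycle

BEDS = ("bed_A", "bed_B")
CYCLE_TIME_S = 60          # assumed switch-over interval (illustrative only)

def psa_schedule(n_half_cycles):
    """Yield (time_s, producing_bed, regenerating_bed) for each half-cycle."""
    roles = cycle([(BEDS[0], BEDS[1]), (BEDS[1], BEDS[0])])
    for i in range(n_half_cycles):
        producing, regenerating = next(roles)
        yield i * CYCLE_TIME_S, producing, regenerating

for t, producing, regenerating in psa_schedule(4):
    print(f"t={t:>3} s: {producing} adsorbs N2 and delivers ~90-93% O2; "
          f"{regenerating} is depressurized and purged to release its N2")
```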
Oxygen gas can also be produced through electrolysis of water into molecular oxygen and hydrogen. A similar method is the electrocatalytic O2 evolution from oxides and oxoacids. Chemical catalysts can be used as well, such as in chemical oxygen generators or oxygen candles that are used as part of the life-support equipment on submarines, and are still part of standard equipment on commercial airliners in case of depressurization emergencies. Another air separation technology involves forcing air to dissolve through ceramic membranes based on zirconium dioxide by either high pressure or an electric current, to produce nearly pure O2 gas.
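For the electrolysis route, Faraday's law fixes how much O2 a given current can liberate, since four electrons are transferred per O2 molecule (2 H2O → O2 + 4 H+ + 4 e−). The sketch below is an idealized calculation; the 100 A example current is arbitrary.

```python
# Ideal O2 output of a water electrolysis cell, from Faraday's law.
FARADAY = 96485.0    # coulombs per mole of electrons
M_O2 = 32.0          # g/mol

def o2_grams_per_hour(current_amps, current_efficiency=1.0):
    """Grams of O2 evolved per hour; 4 mol of electrons are needed per mol of O2."""
    coulombs_per_hour = current_amps * 3600
    moles_o2 = current_efficiency * coulombs_per_hour / (4 * FARADAY)
    return moles_o2 * M_O2

print(f"{o2_grams_per_hour(100):.1f} g of O2 per hour at 100 A")   # ~29.9 g/h
```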
In large quantities, the price of liquid oxygen in 2001 was approximately $0.21/kg. Since the primary cost of production is the energy cost of liquefying the air, the production cost will change as energy cost varies.
For reasons of economy oxygen is often transported in bulk as a liquid in specially-insulated tankers, since one litre of liquefied oxygen is equivalent to 840 liters of gaseous oxygen at atmospheric pressure and 20 °C. Such tankers are used to refill bulk liquid oxygen storage containers, which stand outside hospitals and other institutions with a need for large volumes of pure oxygen gas. Liquid oxygen is passed through heat exchangers, which convert the cryogenic liquid into gas before it enters the building. Oxygen is also stored and shipped in smaller cylinders containing the compressed gas; a form that is useful in certain portable medical applications and oxy-fuel welding and cutting.
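The quoted 840:1 expansion ratio can be roughly reproduced from the ideal gas law; the sketch below assumes a liquid oxygen density of about 1.141 kg/L at its boiling point, a value not given in this article.

```python
# Rough check of the liquid-to-gas expansion ratio for oxygen at 1 atm and 20 degC.
R = 0.082057           # L*atm/(mol*K)
M_O2 = 32.0            # g/mol
RHO_LIQUID = 1141.0    # g/L, assumed density of liquid O2 at its boiling point

moles_per_litre_of_liquid = RHO_LIQUID / M_O2
gas_volume_L = moles_per_litre_of_liquid * R * 293.15 / 1.0    # 20 degC, 1 atm
print(f"1 L of liquid O2 -> ~{gas_volume_L:.0f} L of gas")      # ~858 L, the same order as the quoted 840
```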
# Applications
## Medical
Uptake of O2 from the air is the essential purpose of respiration, so oxygen supplementation is used in medicine. Oxygen therapy is used to treat emphysema, pneumonia, some heart disorders, and any disease that impairs the body's ability to take up and use gaseous oxygen. Treatments are flexible enough to be used in hospitals, the patient's home, or increasingly by portable devices. Oxygen tents were once commonly used in oxygen supplementation, but have since been replaced mostly by the use of oxygen masks or nasal cannulas.
Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the 'bends') are sometimes treated using these devices. Increased O2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and argon, forming in their blood. Increasing the pressure of O2 as soon as possible is part of the treatment.
Oxygen is also used medically for patients who require mechanical ventilation, often at concentrations above the 21% found in ambient air.
## Life support and recreational use
A notable application of O2 as a low-pressure breathing gas is in modern space suits, which surround their occupant's body with a pressurized breathing gas. These devices use nearly pure oxygen at about one third normal pressure, resulting in a normal blood partial pressure of O2. This trade-off of higher oxygen concentration for lower pressure is needed to maintain flexible spacesuits.
Scuba divers and submariners also rely on artificially-delivered O2, but most often use normal pressure, and/or mixtures of oxygen and air. Pure or nearly pure O2 use in diving at higher-than-sea-level pressures is usually limited to rebreather, decompression, or emergency treatment use at relatively shallow depths (~ 6 meters depth, or less). Deeper diving requires significant dilution of O2 with other gases, such as nitrogen or helium, to help prevent oxygen toxicity.
People who climb mountains or fly in non-pressurized fixed-wing aircraft sometimes have supplemental O2 supplies. Passengers traveling in (pressurized) commercial airplanes have an emergency supply of O2 automatically supplied to them in case of cabin depressurization. Sudden cabin pressure loss activates chemical oxygen generators above each seat, causing oxygen masks to drop and forcing iron filings into the sodium chlorate inside the canister. A steady stream of oxygen gas is then produced by the exothermic reaction. However, even this may pose a danger if inappropriately triggered: a ValuJet airplane crashed after use-date-expired O2 canisters, which were being shipped in the cargo hold, activated and caused a fire. The canisters were mislabeled as empty and carried in violation of dangerous goods regulations.
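The oxygen yield of such a chlorate candle follows from simple stoichiometry (the overall decomposition is 2 NaClO3 → 2 NaCl + 3 O2, with the burning iron supplying the heat); the per-kilogram figure below is an illustrative calculation, not a specification of any real generator.

```python
# Illustrative O2 yield of a sodium chlorate oxygen candle: 2 NaClO3 -> 2 NaCl + 3 O2.
M_NACLO3 = 106.44          # g/mol
M_O2 = 32.00               # g/mol
MOLAR_VOLUME_25C = 24.47   # L/mol for an ideal gas at 25 degC and 1 atm

def o2_from_chlorate(grams_naclo3):
    """Return (grams, litres at 25 degC) of O2 from decomposing the given mass of NaClO3."""
    moles_salt = grams_naclo3 / M_NACLO3
    moles_o2 = 1.5 * moles_salt            # 3 mol O2 per 2 mol NaClO3
    return moles_o2 * M_O2, moles_o2 * MOLAR_VOLUME_25C

grams, litres = o2_from_chlorate(1000.0)   # per kilogram of chlorate
print(f"~{grams:.0f} g (~{litres:.0f} L) of O2 per kg of NaClO3")   # ~451 g, ~345 L
```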
Oxygen, as a supposed mild euphoric, has a history of recreational use in oxygen bars and in sports. Oxygen bars are establishments, found in Japan, California, and Las Vegas, Nevada since the late 1990s, that offer higher-than-normal O2 exposure for a fee. Professional athletes, especially in American football, also sometimes go off field between plays to wear oxygen masks in order to get a supposed "boost" in performance. However, the reality of a pharmacological effect is doubtful; a placebo or psychological boost is the most plausible explanation. Available studies support a performance boost from enriched O2 mixtures only if they are breathed during actual aerobic exercise. Other recreational uses include pyrotechnic applications, such as George Goble's five-second ignition of barbecue grills.
## Industrial
Smelting of iron ore into steel consumes 55% of commercially-produced oxygen. In this process, O2 is injected through a high-pressure lance into molten iron, which removes sulfur impurities and excess carbon as the respective oxides, SO2 and CO2. The reactions are exothermic, so the temperature increases to 1700 °C.
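The oxygen demand of decarburization can be estimated from the stoichiometry of C + O2 → CO2; the example below is illustrative only (the carbon contents are assumed values, and in practice much of the carbon leaves the melt as CO, so real consumption differs).

```python
# Illustrative O2 demand for burning excess carbon out of molten iron (C + O2 -> CO2).
M_C, M_O2 = 12.011, 32.0    # g/mol

def o2_for_carbon(kg_carbon_removed):
    """Kilograms of O2 needed to oxidize the given mass of carbon fully to CO2."""
    moles_c = kg_carbon_removed * 1000.0 / M_C
    return moles_c * M_O2 / 1000.0          # 1 mol O2 per mol C

# Example: taking 1 tonne of hot metal from 4.0% carbon down to 0.1% (assumed figures)
carbon_removed_kg = 1000.0 * (0.040 - 0.001)
print(f"~{o2_for_carbon(carbon_removed_kg):.0f} kg of O2 per tonne of iron")   # ~104 kg
```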
Another 25% of commercially-produced oxygen is used by the chemical industry. Ethylene is reacted with O2 to create ethylene oxide, which, in turn, is converted into ethylene glycol; the primary feeder material used to manufacture a host of products, including antifreeze and polyester polymers (the precursors of many plastics and fabrics).
Most of the remaining 20% of commercially-produced oxygen is used in medical applications, metal cutting and welding, as an oxidizer in rocket fuel, and in water treatment. Oxygen is used in oxyacetylene welding, burning acetylene with O2 to produce a very hot flame. In this process, metal up to 60 cm thick is first heated with a small oxy-acetylene flame and then quickly cut by a large stream of O2. Rocket propulsion requires a fuel and an oxidizer. Larger rockets use liquid oxygen as their oxidizer, which is mixed and ignited with the fuel for propulsion.
## Scientific
Paleoclimatologists measure the ratio of oxygen-18 and oxygen-16 in the shells and skeletons of marine organisms to determine what the climate was like millions of years ago (see oxygen isotope ratio cycle). Seawater molecules that contain the lighter isotope, oxygen-16, evaporate at a slightly faster rate than water molecules containing the 12% heavier oxygen-18; this disparity increases at lower temperatures. During periods of lower global temperatures, snow and rain from that evaporated water tends to be higher in oxygen-16, and the seawater left behind tends to be higher in oxygen-18. Marine organisms then incorporate more oxygen-18 into their skeletons and shells than they would in a warmer climate. Paleoclimatologists also directly measure this ratio in the water molecules of ice core samples that are up to several hundreds of thousands of years old.
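The isotope ratio is conventionally reported as δ18O, the per mil deviation of a sample's 18O/16O ratio from a reference standard; the VSMOW reference ratio used in the sketch below is an external constant not given in this article.

```python
# delta-18-O: the per mil deviation of a sample's 18O/16O ratio from a reference standard.
R_VSMOW = 2005.2e-6   # assumed 18O/16O ratio of Vienna Standard Mean Ocean Water

def delta_18o(ratio_sample, ratio_standard=R_VSMOW):
    """Positive values mean the sample is enriched in 18O relative to the standard."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# Seawater slightly enriched in 18O, as expected during a cold (glacial) period
print(f"{delta_18o(2007.2e-6):+.2f} per mil")   # about +1.00 per mil
```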
Planetary geologists have measured different abundances of oxygen isotopes in samples from the Earth, the Moon, Mars, and meteorites, but were long unable to obtain reference values for the isotope ratios in the Sun, believed to be the same as those of the primordial solar nebula. However, analysis of a silicon wafer exposed to the solar wind in space and returned by the crashed Genesis spacecraft has shown that the Sun has a higher proportion of oxygen-16 than does the Earth. The measurement implies that an unknown process depleted oxygen-16 from the Sun's disk of protoplanetary material prior to the coalescence of dust grains that formed the Earth.
Oxygen presents two spectrophotometric absorption bands peaking at the wavelengths 687 and 760 nm. Some remote sensing scientists have proposed using the measurement of the radiance coming from vegetation canopies in those bands to characterize plant health status from a satellite platform. This approach exploits the fact that in those bands it is possible to discriminate the vegetation's reflectance from its fluorescence, which is much weaker. The measurement is technically difficult owing to the low signal-to-noise ratio and the physical structure of vegetation; but it has been proposed as a possible method of monitoring the carbon cycle from satellites on a global scale.
# Compounds
The oxidation state of oxygen is −2 in almost all known compounds of oxygen. The oxidation state −1 is found in a few compounds such as peroxides. Compounds containing oxygen in other oxidation states are very uncommon: −1/2 (superoxides), −1/3 (ozonides), 0 (elemental, hypofluorous acid), +1/2 (dioxygenyl), +1 (dioxygen difluoride), and +2 (oxygen difluoride).
## Oxides and other inorganic compounds
Water (H2O) is the oxide of hydrogen and the most familiar oxygen compound. Hydrogen atoms are covalently bonded to oxygen in a water molecule but also have an additional attraction (about 23.3 kJ·mol−1 per hydrogen atom) to an adjacent oxygen atom in a separate molecule. These hydrogen bonds between water molecules hold them approximately 15% closer than what would be expected in a simple liquid with just Van der Waals forces.
Due to its electronegativity, oxygen forms chemical bonds with almost all other elements at elevated temperatures to give the corresponding oxides. However, some elements readily form oxides at standard conditions for temperature and pressure; the rusting of iron is an example. The surfaces of metals like aluminium and titanium are oxidized in the presence of air and become coated with a thin film of oxide that passivates the metal and slows further corrosion. Some of the transition metal oxides are found in nature as non-stoichiometric compounds, with slightly less metal than the chemical formula would suggest. For example, the naturally occurring FeO (wüstite) is actually written as Fe1−xO, where x is usually around 0.05.
Oxygen as a compound is present in the atmosphere in trace quantities in the form of carbon dioxide (CO2). The earth's crustal rock is composed in large part of oxides of silicon (silica SiO2, found in granite and sand), aluminium (aluminium oxide Al2O3, in bauxite and corundum), iron (iron(III) oxide Fe2O3, in hematite and rust) and other metals.
The rest of the Earth's crust is also made of oxygen compounds, in particular calcium carbonate (in limestone) and silicates (in feldspars). Water-soluble silicates in the form of Na4SiO4, Na2SiO3, and Na2Si2O5 are used as detergents and adhesives.
Oxygen also acts as a ligand for transition metals, forming metal–O2 bonds with the iridium atom in Vaska's complex, with the platinum in PtF6, and with the iron center of the heme group of hemoglobin.
## Organic compounds and biomolecules
Among the most important classes of organic compounds that contain oxygen are (where "R" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-C(O)-NR2). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms.
Oxygen reacts spontaneously with many organic compounds at or below room temperature in a process called autoxidation. Most of the organic compounds that contain oxygen are not made by direct action of O2. Organic compounds important in industry and commerce that are made by direct oxidation of a precursor include ethylene oxide and peracetic acid.
The element is found in almost all biomolecules that are important to (or generated by) life. Only a few common complex biomolecules, such as squalene and the carotenes, contain no oxygen. Of the organic compounds with biological relevance, carbohydrates contain the largest proportion by mass of oxygen. All fats, fatty acids, amino acids, and proteins contain oxygen (due to the presence of carbonyl groups in these acids and their ester residues). Oxygen also occurs in phosphate (PO43−) groups in the biologically important energy-carrying molecules ATP and ADP, in the backbone and the purines (except adenine) and pyrimidines of RNA and DNA, and in bones as calcium phosphate and hydroxylapatite.
# Precautions
## Toxicity
Oxygen gas (O2) can be toxic at elevated partial pressures, leading to convulsions and other health problems. Oxygen toxicity usually begins to occur at partial pressures more than 50 kilopascals (kPa), or 2.5 times the normal sea-level O2 partial pressure of about 21 kPa. Therefore, air supplied through oxygen masks in medical applications is typically composed of 30% O2 by volume (about 30 kPa at standard pressure). At one time, premature babies were placed in incubators containing O2-rich air, but this practice was discontinued after some babies were blinded by it.
Breathing pure O2 in space applications, such as in some modern space suits, or in early spacecraft such as Apollo, causes no damage due to the low total pressures used. In the case of spacesuits, the O2 partial pressure in the breathing gas is, in general, about 30 kPa (1.4 times normal), and the resulting O2 partial pressure in the astronaut's arterial blood is only marginally more than normal sea-level O2 partial pressure (see arterial blood gas).
Oxygen toxicity to the lungs and central nervous system can also occur in deep scuba diving and surface-supplied diving. Prolonged breathing of an air mixture with an O2 partial pressure of more than 60 kPa can eventually lead to permanent pulmonary fibrosis. Exposure to O2 partial pressures greater than 160 kPa may lead to convulsions (normally fatal for divers). Acute oxygen toxicity can occur by breathing an air mixture with 21% O2 at 66 m or more of depth; the same thing can occur by breathing 100% O2 at only 6 m.
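These depth figures follow from the partial-pressure relationship ppO2 = FO2 × ambient pressure, with ambient pressure rising by roughly one atmosphere per 10 m of seawater; the sketch below simply evaluates that relationship for the two cases quoted above.

```python
# Partial pressure of O2 versus depth, using ~1 atm (~101.3 kPa) of extra pressure per 10 m of seawater.
SURFACE_KPA = 101.3
KPA_PER_METRE = 10.13

def ppo2_kpa(fraction_o2, depth_m):
    """O2 partial pressure in kPa for a given gas fraction and depth of seawater."""
    ambient_kpa = SURFACE_KPA + KPA_PER_METRE * depth_m
    return fraction_o2 * ambient_kpa

print(f"Air (21% O2) at 66 m: {ppo2_kpa(0.21, 66):.0f} kPa")   # ~162 kPa, above the ~160 kPa convulsion threshold
print(f"Pure O2 at 6 m:       {ppo2_kpa(1.00, 6):.0f} kPa")    # ~162 kPa, the same partial pressure
```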
## Combustion and other hazards
Highly-concentrated sources of oxygen promote rapid combustion. Fire and explosion hazards exist when concentrated oxidants and fuels are brought into close proximity; however, an ignition event, such as heat or a spark, is needed to trigger combustion. Oxygen itself is not the fuel, but the oxidant. Combustion hazards also apply to compounds of oxygen with a high oxidative potential, such as peroxides, chlorates, nitrates, perchlorates, and dichromates because they can donate oxygen to a fire.
Concentrated O2 will allow combustion to proceed rapidly and energetically. Steel pipes and storage vessels used to store and transmit both gaseous and liquid oxygen will act as a fuel; and therefore the design and manufacture of O2 systems requires special training to ensure that ignition sources are minimized. The fire that killed the Apollo 1 crew on a test launch pad spread so rapidly because the capsule was pressurized with pure O2 but at slightly more than atmospheric pressure, instead of the ⅓ normal pressure that would be used in a mission.
Liquid oxygen spills, if allowed to soak into organic matter such as wood, petrochemicals, and asphalt, can cause these materials to detonate unpredictably on subsequent mechanical impact. On contact with the human body, it can also cause cryogenic burns to the skin and the eyes.
Template:Infobox oxygen
Editor-In-Chief: C. Michael Gibson, M.S., M.D. [2]
# Overview
Oxygen is the element with atomic number 8 and represented by the symbol O. It is a member of the chalcogen group on the periodic table, and is a highly reactive nonmetallic period 2 element that readily forms compounds (notably oxides) with almost all other elements. At standard temperature and pressure two atoms of the element bind to form dioxygen, a colorless, odorless, tasteless diatomic gas with the formula O2. Oxygen is the third most abundant element in the universe by mass after hydrogen and helium[1] and the most abundant element by mass in the Earth's crust.[2] Oxygen constitutes 88.8% of the mass of water and 20.9% of the volume of air.[3]
All major classes of structural molecules in living organisms, such as proteins, carbohydrates, and fats, contain oxygen, as do the major inorganic compounds that comprise animal shells, teeth, and bone. Oxygen in the form of O2 is produced from water by cyanobacteria, algae and plants during photosynthesis and is used in cellular respiration for all complex life. Oxygen is toxic to anaerobic organisms, which were the dominant form of early life on Earth until O2 began to accumulate in the atmosphere 2.5 billion years ago.[4] Another form (allotrope) of oxygen, ozone (O3), helps protect the biosphere from ultraviolet radiation with the high-altitude ozone layer, but is a pollutant near the surface where it is a by-product of smog.
Oxygen was independently discovered by Joseph Priestley and Carl Wilhelm Scheele in the 1770s, but Priestley is usually given priority because he published his findings first. The name oxygen was coined in 1777 by Antoine Lavoisier,[5] whose experiments with oxygen helped to discredit the then-popular phlogiston theory of combustion and corrosion. Oxygen is produced industrially by fractional distillation of liquefied air, use of zeolites to remove carbon dioxide and nitrogen from air, electrolysis of water and other means. Uses of oxygen include the production of steel, plastics and textiles; rocket propellant; oxygen therapy; and life support in aircraft, submarines, spaceflight and diving.
# Characteristics
## Structure
At standard temperature and pressure, oxygen is a colorless, odorless gas with the molecular formula O2, in which the two oxygen atoms are chemically bonded to each other with a spin triplet electron configuration. This bond has a bond order of two, and is often over-simplified in description as a double bond.[6]
Triplet oxygen is the ground state of the O2 molecule.[7] The electron configuration of the molecule has two unpaired electrons occupying two degenerate molecular orbitals.[8] These orbitals are classified as antibonding (weakening the bond order from three to two), so the diatomic oxygen bond is weaker than the diatomic nitrogen triple bond in which all bonding molecular orbitals are filled, but some antibonding orbitals are not.[7]
In normal triplet form, O2 molecules are paramagnetic—they form a magnet in the presence of a magnetic field—because of the spin magnetic moments of the unpaired electrons in the molecule, and the negative exchange energy between neighboring O2 molecules.[9] Liquid oxygen is attracted to a magnet to a sufficient extent that, in laboratory demonstrations, a bridge of liquid oxygen may be supported against its own weight between the poles of a powerful magnet.[10][11]
Singlet oxygen, a name given to several higher-energy species of molecular O2 in which all the electron spins are paired, is much more reactive towards common organic molecules. In nature, singlet oxygen is commonly formed from water during photosynthesis, using the energy of sunlight.[12] It is also produced in the troposphere by the photolysis of ozone by light of short wavelength,[13] and by the immune system as a source of active oxygen.[14] Carotenoids in photosynthetic organisms (and possibly also in animals) play a major role in absorbing energy from singlet oxygen and converting it to the unexcited ground state before it can cause harm to tissues.[15]
## Allotropes
The common allotrope of elemental oxygen on Earth is called dioxygen, O2. It has a bond length of 121 pm and a bond energy of 498 kJ·mol-1.[16] This is the form that is used by complex forms of life, such as animals, in cellular respiration (see Biological role) and is the form that is a major part of the Earth's atmosphere (see Occurrence). Other aspects of O2 are covered in the remainder of this article.
Trioxygen (O3) is usually known as ozone and is a very reactive allotrope of oxygen that is damaging to lung tissue.[17] Ozone is produced in the upper atmosphere when O2 combines with atomic oxygen made by the splitting of O2 by ultraviolet (UV) radiation.[5] Since ozone absorbs strongly in the UV region of the spectrum, it functions as a protective radiation shield for the planet (see ozone layer).[5] Near the earth's surface, however, it is a pollutant formed as a by-product of automobile exhaust.[18]
The metastable molecule tetraoxygen (O4) was discovered in 2001,[19][20] and was assumed to exist in one of the six phases of solid oxygen. It was proven in 2006 that that phase, created by pressurizing O2 to 20 GPa, is in fact a rhombohedral O8 cluster.[21] This cluster has the potential to be a much more powerful oxidizer than either O2 or O3 and may therefore be used in rocket fuel.[19][20] A metallic phase was discovered in 1990 when solid oxygen is subjected to a pressure of above 96 GPa[22] and it was shown in 1998 that at very low temperatures, this phase becomes superconducting.[23]
## Physical properties
Oxygen is more soluble in water than nitrogen; water contains approximately 1 molecule of O2 for every 2 molecules of N2, compared to an atmospheric ratio of approximately 1:4. The solubility of oxygen in water is temperature-dependent, and about twice as much (14.6 mg·L−1) dissolves at 0 °C than at 20 °C (7.6 mg·L−1).[24][25] At 25 °C and 1 atm of air, freshwater contains about 6.04 milliliters (mL) of oxygen per liter, whereas seawater contains about 4.95 mL per liter.[26] At 5 °C the solubility increases to 9.0 mL (50% more than at 25 °C) per liter for water and 7.2 mL (45% more) per liter for sea water.
Oxygen condenses at 90.20 K (−182.95 °C, −297.31 °F), and freezes at 54.36 K (−218.79 °C, −361.82 °F).[27] Both liquid and solid O2 are clear substances with a light sky-blue color caused by absorption in the red (in contrast with the blue color of the sky, which is due to Rayleigh scattering of blue light). High-purity liquid O2 is usually obtained by the fractional distillation of liquefied air;[28] Liquid oxygen may also be produced by condensation out of air, using liquid nitrogen as a coolant. It is a highly-reactive substance and must be segregated from combustible materials.[29]
## Isotopes and stellar origin
Naturally occurring oxygen is composed of three stable isotopes, 16O, 17O, and 18O, with 16O being the most abundant (99.762% natural abundance).[30] Oxygen isotopes range in mass number from 12 to 28.[30]
Most 16O is synthesized at the end of the helium fusion process in stars but some is made in the neon burning process.[31] 17O is primarily made by the burning of hydrogen into helium during the CNO cycle, making it a common isotope in the hydrogen burning zones of stars.[31] Most 18O is produced when 14N (made abundant from CNO burning) captures a 4He nucleus, making 18O common in the helium-rich zones of stars.[31]
Fourteen radioisotopes have been characterized, the most stable being 15O with a half-life of 122.24 seconds (s) and 14O with a half-life of 70.606 s.[30] All of the remaining radioactive isotopes have half-lives that are less than 27 s and the majority of these have half-lives that are less than 83 milliseconds.[30] The most common decay mode of the isotopes lighter than 16O is electron capture to yield nitrogen, and the most common mode for the isotopes heavier than 18O is beta decay to yield fluorine.[30]
## Occurrence
Oxygen is the most abundant chemical element, by mass, in our biosphere, air, sea and land.
Oxygen is the third most abundant chemical element in the universe, after hydrogen and helium.[1] About 0.9% of the Sun's mass is oxygen.[3] Oxygen constitutes 49.2% of the Earth's crust by mass[2] and is the major component of the world's oceans (88.8% by mass).[3] It is the second most common component of the Earth's atmosphere, taking up 21.0% of its volume and 23.1% of its mass (some 1015 tonnes).[32][3][33] Earth is unusual among the planets of the Solar System in having such a high concentration of oxygen gas in its atmosphere: Mars (with 0.1% O2 by volume) and Venus have far lower concentrations. However, the O2 surrounding these other planets is produced solely by ultraviolet radiation impacting oxygen-containing molecules such as carbon dioxide.
The unusually high concentration of oxygen on Earth is the result of the oxygen cycle. This biogeochemical cycle describes the movement of oxygen within and between its three main reservoirs on Earth: the atmosphere, the biosphere, and the lithosphere. The main driving factor of the oxygen cycle is photosynthesis, which is responsible for modern Earth's atmosphere. Because of the vast amounts of oxygen gas available in the atmosphere, even if all photosynthesis were to cease completely, it would take all the oxygen-consuming processes at the present rate at least another 5,000 years to strip all the O2 from the atmosphere.[34][35]
Free oxygen also occurs in solution in the world's water bodies. The increased solubility of O2 at lower temperatures (see Physical properties) has important implications for ocean life, as polar oceans support a much higher density of life due to their higher oxygen content.[36] Polluted water may have reduced amounts of O2 in it, depleted by decaying algae and other biomaterials (see eutrophication). Scientists assess this aspect of water quality by measuring the water's biochemical oxygen demand, or the amount of O2 needed to restore it to a normal concentration.[37]
# Biological role
## Photosynthesis and respiration
In nature, free oxygen is produced by the light-driven splitting of water during oxygenic photosynthesis. Green algae and cyanobacteria in marine environments provide about 70% of the free oxygen produced on earth and the rest is produced by terrestrial plants.[38]
A simplified overall formula for photosynthesis is:[39]
Photolytic oxygen evolution occurs in the thylakoid membranes of photosynthetic organisms and requires the energy of four photons.[40] Many steps are involved, but the result is the formation of a proton gradient across the thylakoid membrane, which is used to synthesize ATP via photophosphorylation.[41] The O2 remaining after oxidation of the water molecule is released into the atmosphere.[42]
Molecular dioxygen, O2, is essential for cellular respiration in all aerobic organisms. Oxygen is used in mitochondria to help generate adenosine triphosphate (ATP) during oxidative phosphorylation. The reaction for aerobic respiration is essentially the reverse of photosynthesis and is simplified as:
In vertebrates, O2 is diffused through membranes in the lungs and into red blood cells. Hemoglobin binds O2, changing its color from bluish red to bright red.[43][17] Other animals use hemocyanin (molluscs and some arthropods) or hemerythrin (spiders and lobsters).[32] A liter of blood can dissolve 200 cc of O2.[32]
Reactive oxygen species, such as superoxide ion (O2−) and hydrogen peroxide (H2O2), are dangerous by-products of oxygen use in organisms.[32] Parts of the immune system of higher organisms, however, create peroxide, superoxide, and singlet oxygen to destroy invading microbes. Reactive oxygen species also play an important role in the hypersensitive response of plants against pathogen attack.[41]
An adult human in rest inhales 1.8 to 2.4 grams of oxygen per minute.[44] This amounts to more than 6 billion tonnes of oxygen inhaled by humanity per year. [45]
## Build-up in the atmosphere
Free oxygen gas was almost nonexistent in Earth's atmosphere before photosynthetic archaea and bacteria evolved. Free oxygen first appeared in significant quantities during the Paleoproterozoic era (between 2.5 and 1.6 billion years ago). At first, the oxygen combined with dissolved iron in the oceans to form banded iron formations. Free oxygen started to gas out of the oceans 2.7 billion years ago, reaching 10% of its present level around 1.7 billion years ago.[46]
The presence of large amounts of dissolved and free oxygen in the oceans and atmosphere may have driven most of the anaerobic organisms then living to extinction during the oxygen catastrophe about 2.4 billion years ago. However, cellular respiration using O2 enables aerobic organisms to produce much more ATP than anaerobic organisms, helping the former to dominate Earth's biosphere.[47] Photosynthesis and cellular respiration of O2 allowed for the evolution of eukaryotic cells and ultimately complex multicellular organisms such as plants and animals.
Since the beginning of the Cambrian era 540 million years ago, O2 levels have fluctuated between 15% and 30% per volume.[48] Towards the end of the Carboniferous era (about 300 million years ago) atmospheric O2 levels reached a maximum of 35% by volume,[48] allowing insects and amphibians to grow much larger than today's species. Human activities, including the burning of 7 billion tonnes of fossil fuels each year have had very little effect on the amount of free oxygen in the atmosphere.[9] At the current rate of photosynthesis it would take about 2,000 years to regenerate the entire O2 in the present atmosphere.[49]
# History
## Early experiments
One of the first known experiments on the relationship between combustion and air was conducted by the second century BCE Greek writer on mechanics, Philo of Byzantium. In his work Pneumatica, Philo observed that inverting a vessel over a burning candle and surrounding the vessel's neck with water resulted in some water rising into the neck.[50]
Philo incorrectly surmised that parts of the air in the vessel were converted into the classical element fire and thus were able to escape through pores in the glass. Many centuries later Leonardo da Vinci built on Philo's work by observing that a portion of air is consumed during combustion and respiration.[51]
In the late 17th century, Robert Boyle proved that air is necessary for combustion. English chemist John Mayow refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus or just nitroaereus.[52]
In one experiment he found that placing either a mouse or a lit candle in a closed container over water caused the water to rise and replace one-fourteenth of the air's volume before extinguishing the subjects.[53]
From this he surmised that nitroaereus is consumed in both respiration and combustion.
Mayow observed that antimony increased in weight when heated, and inferred that the nitroaereus must have combined with it.[52] He also thought that the lungs separate nitroaereus from air and pass it into the blood and that animal heat and muscle movement result from the reaction of nitroaereus with certain substances in the body.[52] Accounts of these and other experiments and ideas were published in 1668 in his work Tractatus duo in the tract "De respiratione".[53]
## Phlogiston theory
Robert Hooke, Ole Borch, Mikhail Lomonosov, and Pierre Bayen all produced oxygen in experiments in the 17th century but none of them recognized it as an element.[24] This may have been in part due to the prevalence of the philosophy of combustion and corrosion called the phlogiston theory, which was then the favored explanation of those processes.
Established in 1667 by the German alchemist J. J. Becher, and modified by the chemist Georg Ernst Stahl by 1731,[54]
phlogiston theory stated that all combustible materials were made of two parts. One part, called phlogiston, was given off when the substance containing it was burned, while the dephlogisticated part was thought to be its true form, or calx.[51]
Highly combustible materials that leave little residuum, such as wood or coal, were thought to be made mostly of phlogiston; whereas non-combustible substances that corrode, such as iron, contained very little. Air did not play a role in phlogiston theory, nor were any initial quantitative experiments conducted to test the idea; instead, it was based on observations of what happens when something burns, that most common objects appear to become lighter and seem to lose something in the process.[51] The fact that a substance like wood actually gains overall weight in burning was hidden by the buoyancy of the gaseous combustion products. Indeed one of the first clues that the phlogiston theory was incorrect was that metals, too, gain weight in rusting (when they were supposedly losing phlogiston).
## Discovery
Oxygen was first discovered by Swedish pharmacist Carl Wilhelm Scheele. He had produced oxygen gas by heating mercuric oxide and various nitrates by about 1772.[51][3] Scheele called the gas 'fire air' because it was the only known supporter of combustion. He wrote an account of this discovery in a manuscript he titled Treatise on Air and Fire, which he sent to his publisher in 1775. However, that document was not published until 1777.[55]
In the meantime, an experiment was conducted by the British clergyman Joseph Priestley on August 1 1774 focused sunlight on mercuric oxide (HgO) inside a glass tube, which liberated a gas he named 'dephlogisticated air'.[3] He noted that candles burned brighter in the gas and that a mouse was more active and lived longer while breathing it. After breathing the gas himself, he wrote: "The feeling of it to my lungs was not sensibly different from that of common air, but I fancied that my breast felt peculiarly light and easy for some time afterwards."[24] Priestley published his findings in 1775 in a paper titled "An Account of Further Discoveries in Air" which was included in the second volume of his book titled Experiments and Observations on Different Kinds of Air.[56][51] Because he had published his findings first, Priestley is usually given priority in the discovery.
The noted French chemist Antoine Laurent Lavoisier later claimed to have discovered the new substance independently. However, Priestley visited Lavoisier in October 1774 and told him about his experiment and how he liberated the new gas. Scheele also posted a letter to Lavoisier on September 30 1774 that described his own discovery of the previously-unknown substance, but Lavoisier never acknowledged receiving it (a copy of the letter was found in Scheele's belongings after his death).[55]
## Lavoisier's contribution
What Lavoisier did indisputably do (although this was disputed at the time) was to conduct the first adequate quantitative experiments on oxidation and give the first correct explanation of how combustion works.[3] He used these and similar experiments, all started in 1774, to discredit the phlogiston theory and to prove that the substance discovered by Priestley and Scheele was a chemical element.
In one experiment, Lavoisier observed that there was no overall increase in weight when tin and air were heated in a closed container.[3] He noted that air rushed in when he opened the container, which indicated that part of the trapped air had been consumed. He also noted that the tin had increased in weight and that increase was the same as the weight of the air that rushed back in. This and other experiments on combustion were documented in his book Sur la combustion en général, which was published in 1777.[3] In that work, he proved that air is a mixture of two gases; 'vital air', which is essential to combustion and respiration, and azote (Gk. Template:Polytonic "lifeless"), which did not support either.
Lavoisier renamed 'vital air' to oxygène in 1777 from the Greek roots Template:Polytonic (oxys) (acid, literally "sharp," from the taste of acids) and -γενής (-genēs) (producer, literally begetter), because he mistook oxygen to be a constituent of all acids.[5] Azote later became nitrogen in English, although it has kept the name in French and several other European languages.[3]
Oxygen entered the English language despite opposition by English scientists and the fact that Priestley had priority. This is partly due to a poem praising the gas titled "Oxygen" in the popular book The Botanic Garden (1791) by Erasmus Darwin, grandfather of Charles Darwin.[55]
## Later history
John Dalton's original atomic hypothesis assumed that all elements were monoatomic and that the atoms in compounds would normally have the simplest atomic ratios with respect to one another. For example, Dalton assumed that water's formula was HO, giving the atomic mass of oxygen as 8 times that of hydrogen, instead of the modern value of about 16.[57] In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen; and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the assumption of diatomic elemental molecules.[58][59]
By the late 19th century scientists realized that air could be liquefied, and its components isolated, by compressing and cooling it. Using a cascade method, Swiss chemist and physicist Raoul Pierre Pictet evaporated liquid sulfur dioxide in order to liquefy carbon dioxide, which in turn was evaporated to cool oxygen gas enough to liquefy it. He sent a telegram on December 22 1877 to the French Academy of Sciences in Paris announcing his discovery of liquid oxygen.[60] Just two days later, French physicist Louis Paul Cailletet announced his own method of liquefying molecular oxygen.[60] Only a few drops of the liquid were produced in either case so no meaningful analysis could be conducted.
In 1891 Scottish chemist James Dewar was able to produce enough liquid oxygen to study.[9] The first commercially-viable process for producing liquid oxygen was independently developed in 1895 by German engineer Carl von Linde and British engineer William Hampson. Both men lowered the temperature of air until it liquefied and then distilled the component gases by boiling them off one at a time and capturing them.[61] Later, in 1901, oxyacetylene welding was demonstrated for the first time by burning a mixture of acetylene and compressed O2. This method of welding and cutting metal later became common.[61]
In 1923 the American scientist Robert H. Goddard became the first person to develop a rocket engine; the engine used gasoline for fuel and liquid oxygen as the oxidizer. Goddard successfully flew a small liquid-fueled rocket 56 m at 97 km/h on March 16 1926 in Auburn, Massachusetts, USA.[61][62]
# Industrial production
Two major methods are employed to produce the 100 million tonnes of O2 extracted from air for industrial uses annually.[55] The most common method is to fractionally-distill liquefied air into its various components, with nitrogen N2 distilling as a vapor while oxygen O2 is left as a liquid.[55]
The other major method of producing O2 gas involves passing a stream of clean, dry air through one bed of a pair of identical zeolite molecular sieves, which absorbs the nitrogen and delivers a gas stream that is 90% to 93% O2.[55] Simultaneously, nitrogen gas is released from the other nitrogen-saturated zeolite bed, by reducing the chamber operating pressure and diverting part of the oxygen gas from the producer bed through it, in the reverse direction of flow. After a set cycle time the operation of the two beds is interchanged, thereby allowing for a continuous supply of gaseous oxygen to be pumped through a pipeline. This is known as pressure swing adsorption. Oxygen gas is increasingly obtained by these non-cryogenic technologies (see also the related vacuum swing adsorption).[63]
Oxygen gas can also be produced through electrolysis of water into molecular oxygen and hydrogen. A similar method is the electrocatalytic O2 evolution from oxides and oxoacids. Chemical catalysts can be used as well, such as in chemical oxygen generators or oxygen candles that are used as part of the life-support equipment on submarines, and are still part of standard equipment on commercial airliners in case of depressurization emergencies. Another air separation technology involves forcing air to dissolve through ceramic membranes based on zirconium dioxide by either high pressure or an electric current, to produce nearly pure O2 gas.[37]
In large quantities, the price of liquid oxygen in 2001 was approximately $0.21/kg.[64] Since the primary cost of production is the energy cost of liquefying the air, the production cost will change as energy cost varies.
For reasons of economy oxygen is often transported in bulk as a liquid in specially-insulated tankers, since one litre of liquefied oxygen is equivalent to 840 liters of gaseous oxygen at atmospheric pressure and 20 °C.[55] Such tankers are used to refill bulk liquid oxygen storage containers, which stand outside hospitals and other institutions with a need for large volumes of pure oxygen gas. Liquid oxygen is passed through heat exchangers, which convert the cryogenic liquid into gas before it enters the building. Oxygen is also stored and shipped in smaller cylinders containing the compressed gas; a form that is useful in certain portable medical applications and oxy-fuel welding and cutting.[55]
# Applications
Template:Seealso
## Medical
Uptake of O2 from the air is the essential purpose of respiration, so oxygen supplementation is used in medicine. Oxygen therapy is used to treat emphysema, pneumonia, some heart disorders, and any disease that impairs the body's ability to take up and use gaseous oxygen.[65] Treatments are flexible enough to be used in hospitals, the patient's home, or increasingly by portable devices. Oxygen tents were once commonly used in oxygen supplementation, but have since been replaced mostly by the use of oxygen masks or nasal cannulas.
Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the 'bends') are sometimes treated using these devices. Increased O2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and argon, forming in their blood. Increasing the pressure of O2 as soon as possible is part of the treatment.[65]
Oxygen is also used medically for patients who require mechanical ventilation, often at concentrations above the 21% found in ambient air.
## Life support and recreational use
A notable application of O2 as a low-pressure breathing gas is in modern space suits, which surround their occupant's body with the pressurized gas. These devices use nearly pure oxygen at about one third normal pressure, resulting in a normal blood partial pressure of O2. This trade-off of higher oxygen concentration for lower pressure is needed to keep the suits flexible.
Scuba divers and submariners also rely on artificially-delivered O2, but most often use normal pressure, and/or mixtures of oxygen and air. Pure or nearly pure O2 use in diving at higher-than-sea-level pressures is usually limited to rebreather, decompression, or emergency treatment use at relatively shallow depths (~ 6 meters depth, or less). Deeper diving requires significant dilution of O2 with other gases, such as nitrogen or helium, to help prevent oxygen toxicity.
People who climb mountains or fly in non-pressurized fixed-wing aircraft sometimes have supplemental O2 supplies.[66] Passengers traveling in (pressurized) commercial airplanes have an emergency supply of O2 automatically supplied to them in case of cabin depressurization. Sudden cabin pressure loss activates chemical oxygen generators above each seat, causing oxygen masks to drop and forcing iron filings into the sodium chlorate inside the canister.[37] A steady stream of oxygen gas is produced by the exothermic reaction. However, even this may pose a danger if inappropriately triggered: a ValuJet airplane crashed after use-date-expired O2 canisters, which were being shipped in the cargo hold, activated and caused a fire. The canisters were mislabeled as empty, and carried against dangerous goods regulations.[67]
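In outline, the chemistry of these chlorate "oxygen candles" is the thermal decomposition of sodium chlorate, with the iron burning to supply the heat that keeps the decomposition going:

```latex
2\,\mathrm{NaClO_3} \;\xrightarrow{\ \text{heat}\ }\; 2\,\mathrm{NaCl} + 3\,\mathrm{O_2}

% heat sustained by burning the iron, approximately:
\mathrm{Fe} + \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{FeO}
```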
Oxygen, as a supposed mild euphoric, has a history of recreational use in oxygen bars and in sports. Oxygen bars are establishments found in Japan, California, and Las Vegas, Nevada since the late 1990s that offer higher-than-normal O2 exposure for a fee.[68] Professional athletes, especially in American football, also sometimes go off field between plays to wear oxygen masks in order to get a supposed "boost" in performance. However, the reality of a pharmacological effect is doubtful; a placebo or psychological boost is the most plausible explanation.[68] Available studies support a performance boost from enriched O2 mixtures only if they are breathed during actual aerobic exercise.[69] Other recreational uses include pyrotechnic applications, such as George Goble's five-second ignition of barbecue grills.[70]
## Industrial
Smelting of iron ore into steel consumes 55% of commercially-produced oxygen.[37] In this process, O2 is injected through a high-pressure lance into molten iron, which removes sulfur impurities and excess carbon as the respective oxides, SO2 and CO2. The reactions are exothermic, so the temperature increases to 1700 °C.[37]
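The impurity-removal step described above amounts to two simple, strongly exothermic oxidations:

```latex
\mathrm{C} + \mathrm{O_2} \longrightarrow \mathrm{CO_2}
\qquad\qquad
\mathrm{S} + \mathrm{O_2} \longrightarrow \mathrm{SO_2}
```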
Another 25% of commercially-produced oxygen is used by the chemical industry.[37] Ethylene is reacted with O2 to create ethylene oxide, which, in turn, is converted into ethylene glycol, the primary feeder material used to manufacture a host of products, including antifreeze and polyester polymers (the precursors of many plastics and fabrics).[37]
Most of the remaining 20% of commercially-produced oxygen is used in medical applications, metal cutting and welding, as an oxidizer in rocket fuel, and in water treatment.[37] Oxygen is used in oxyacetylene welding, burning acetylene with O2 to produce a very hot flame. In this process, metal up to 60 cm thick is first heated with a small oxy-acetylene flame and then quickly cut by a large stream of O2.[71] Rocket propulsion requires a fuel and an oxidizer. Larger rockets use liquid oxygen as their oxidizer, which is mixed and ignited with the fuel for propulsion.
## Scientific
Paleoclimatologists measure the ratio of oxygen-18 and oxygen-16 in the shells and skeletons of marine organisms to determine what the climate was like millions of years ago (see oxygen isotope ratio cycle). Seawater molecules that contain the lighter isotope, oxygen-16, evaporate at a slightly faster rate than water molecules containing the 12% heavier oxygen-18; this disparity increases at lower temperatures.[72] During periods of lower global temperatures, snow and rain from that evaporated water tends to be higher in oxygen-16, and the seawater left behind tends to be higher in oxygen-18. Marine organisms then incorporate more oxygen-18 into their skeletons and shells than they would in a warmer climate.[72] Paleoclimatologists also directly measure this ratio in the water molecules of ice core samples that are up to several hundreds of thousands of years old.
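These measurements are conventionally reported as a δ18O value, the deviation (in parts per thousand) of the sample's 18O/16O ratio from that of an agreed reference standard, such as VSMOW for water samples:

```latex
% delta-18-O, expressed in parts per thousand (per mil)
\delta^{18}\mathrm{O} \;=\;
\left(
  \frac{\bigl(^{18}\mathrm{O}/^{16}\mathrm{O}\bigr)_{\text{sample}}}
       {\bigl(^{18}\mathrm{O}/^{16}\mathrm{O}\bigr)_{\text{standard}}}
  - 1
\right) \times 1000
```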
Planetary geologists have measured different abundances of oxygen isotopes in samples from the Earth, the Moon, Mars, and meteorites, but were long unable to obtain reference values for the isotope ratios in the Sun, believed to be the same as those of the primordial solar nebula. However, analysis of a silicon wafer exposed to the solar wind in space and returned by the crashed Genesis spacecraft has shown that the Sun has a higher proportion of oxygen-16 than does the Earth. The measurement implies that an unknown process depleted oxygen-16 from the Sun's disk of protoplanetary material prior to the coalescence of dust grains that formed the Earth.[73]
Oxygen presents two spectrophotometric absorption bands peaking at the wavelengths 687 and 760 nm. Some remote sensing scientists have proposed using the measurement of the radiance coming from vegetation canopies in those bands to characterize plant health status from a satellite platform.[74] This approach exploits the fact that in those bands it is possible to discriminate the vegetation's reflectance from its fluorescence, which is much weaker. The measurement is technically difficult owing to the low signal-to-noise ratio and the physical structure of vegetation; but it has been proposed as a possible method of monitoring the carbon cycle from satellites on a global scale.
# Compounds
The oxidation state of oxygen is −2 in almost all known compounds of oxygen. The oxidation state −1 is found in a few compounds such as peroxides.[75] Compounds containing oxygen in other oxidation states are very uncommon: −1/2 (superoxides), −1/3 (ozonides), 0 (elemental, hypofluorous acid), +1/2 (dioxygenyl), +1 (dioxygen difluoride), and +2 (oxygen difluoride).
## Oxides and other inorganic compounds
Water (H2O) is the oxide of hydrogen and the most familiar oxygen compound. Hydrogen atoms are covalently bonded to oxygen in a water molecule but also have an additional attraction (about 23.3 kJ·mol−1 per hydrogen atom) to an adjacent oxygen atom in a separate molecule.[76] These hydrogen bonds between water molecules hold them approximately 15% closer than would be expected in a simple liquid with only van der Waals forces.[77][78]
Due to its electronegativity, oxygen forms chemical bonds with almost all other elements at elevated temperatures to give corresponding oxides. However, some elements readily form oxides at standard conditions for temperature and pressure; the rusting of iron is an example. The surfaces of metals like aluminium and titanium are oxidized in the presence of air and become coated with a thin film of oxide that passivates the metal and slows further corrosion. Some of the transition metal oxides are found in nature as non-stoichiometric compounds, with slightly less metal than the chemical formula would suggest. For example, the naturally occurring FeO (wüstite) is actually written as Fe1−xO, where x is usually around 0.05.[79]
Oxygen as a compound is present in the atmosphere in trace quantities in the form of carbon dioxide (CO2). The Earth's crustal rock is composed in large part of oxides of silicon (silica SiO2, found in granite and sand), aluminium (aluminium oxide Al2O3, in bauxite and corundum), iron (iron(III) oxide Fe2O3, in hematite and rust) and other metals.
The rest of the Earth's crust is also made of oxygen compounds, in particular calcium carbonate (in limestone) and silicates (in feldspars). Water-soluble silicates in the form of Na4SiO4, Na2SiO3, and Na2Si2O5 are used as detergents and adhesives.[80]
Oxygen also acts as a ligand for transition metals, forming metal–O2 bonds with the iridium atom in Vaska's complex,[81] with the platinum in PtF6,[82] and with the iron center of the heme group of hemoglobin.
## Organic compounds and biomolecules
Among the most important classes of organic compounds that contain oxygen are (where "R" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-C(O)-NR2). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms.
Oxygen reacts spontaneously with many organic compounds at or below room temperature in a process called autoxidation.[83] Most of the organic compounds that contain oxygen are not made by direct action of O2. Organic compounds important in industry and commerce that are made by direct oxidation of a precursor include ethylene oxide and peracetic acid.[80]
The element is found in almost all biomolecules that are important to (or generated by) life. Only a few common complex biomolecules, such as squalene and the carotenes, contain no oxygen. Of the organic compounds with biological relevance, carbohydrates contain the largest proportion by mass of oxygen. All fats, fatty acids, amino acids, and proteins contain oxygen (due to the presence of carbonyl groups in these acids and their ester residues). Oxygen also occurs in phosphate (PO43−) groups in the biologically important energy-carrying molecules ATP and ADP, in the backbone and the purines (except adenine) and pyrimidines of RNA and DNA, and in bones as calcium phosphate and hydroxylapatite.
# Precautions
## Toxicity
Oxygen gas (O2) can be toxic at elevated partial pressures, leading to convulsions and other health problems.[84][85] Oxygen toxicity usually begins to occur at partial pressures of more than 50 kilopascals (kPa), or 2.5 times the normal sea-level O2 partial pressure of about 21 kPa. Therefore, air supplied through oxygen masks in medical applications is typically composed of 30% O2 by volume (about 30 kPa at standard pressure).[24] At one time, premature babies were placed in incubators containing O2-rich air, but this practice was discontinued after some babies were blinded by it.[24]
Breathing pure O2 in space applications, such as in some modern space suits, or in early spacecraft such as Apollo, causes no damage due to the low total pressures used.[86] In the case of spacesuits, the O2 partial pressure in the breathing gas is, in general, about 30 kPa (1.4 times normal), and the resulting O2 partial pressure in the astronaut's arterial blood is only marginally more than normal sea-level O2 partial pressure (see arterial blood gas).
Oxygen toxicity to the lungs and central nervous system can also occur in deep scuba diving and surface supplied diving.[24] Prolonged breathing of an air mixture with an O2 partial pressure of more than 60 kPa can eventually lead to permanent pulmonary fibrosis.[87] Exposure to O2 partial pressures greater than 160 kPa may lead to convulsions, which are normally fatal for divers. Acute oxygen toxicity can occur when breathing an air mixture with 21% O2 at 66 m or more of depth; the same effect can occur when breathing 100% O2 at only 6 m.[87][88]
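The two depth figures quoted above follow directly from Dalton's law of partial pressures, taking ambient pressure to rise by roughly one atmosphere (about 101 kPa) for every 10 m of seawater:

```latex
\text{air at } 66\ \mathrm{m}:\quad
p_{\mathrm{O_2}} \approx 0.21 \times \bigl(1 + \tfrac{66}{10}\bigr) \times 101\ \mathrm{kPa} \approx 160\ \mathrm{kPa}

100\%\ \mathrm{O_2}\ \text{at } 6\ \mathrm{m}:\quad
p_{\mathrm{O_2}} \approx 1.00 \times \bigl(1 + \tfrac{6}{10}\bigr) \times 101\ \mathrm{kPa} \approx 160\ \mathrm{kPa}
```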
## Combustion and other hazards
Highly-concentrated sources of oxygen promote rapid combustion. Fire and explosion hazards exist when concentrated oxidants and fuels are brought into close proximity; however, an ignition event, such as heat or a spark, is needed to trigger combustion.[89] Oxygen itself is not the fuel, but the oxidant. Combustion hazards also apply to compounds of oxygen with a high oxidative potential, such as peroxides, chlorates, nitrates, perchlorates, and dichromates because they can donate oxygen to a fire.
Concentrated O2 will allow combustion to proceed rapidly and energetically.[89] Steel pipes and storage vessels used to store and transmit both gaseous and liquid oxygen will act as a fuel; and therefore the design and manufacture of O2 systems requires special training to ensure that ignition sources are minimized.[89] The fire that killed the Apollo 1 crew on a test launch pad spread so rapidly because the capsule was pressurized with pure O2 but at slightly more than atmospheric pressure, instead of the ⅓ normal pressure that would be used in a mission.[90][91]
Liquid oxygen spills, if allowed to soak into organic matter such as wood, petrochemicals, and asphalt, can cause these materials to detonate unpredictably on subsequent mechanical impact.[89] On contact with the human body, it can also cause cryogenic burns to the skin and the eyes.
Ploidy
# Overview
Ploidy is the number of homologous sets of chromosomes in a biological cell. The ploidy of cells can vary within an organism. In humans, most cells are diploid (containing one set of chromosomes from each parent), but sex cells (sperm and egg) are haploid. In contrast, tetraploidy (four sets of chromosomes) is a type of polyploidy and is common in plants, and not uncommon in amphibians, reptiles, and various species of insects.
The number of chromosomes in one of the mutually-homologous sets is called the monoploid number (x). This is the same number for every set in every cell of a given organism.
Euploidy is the state of a cell or organism having an integral multiple of the monoploid number, possibly excluding the sex-determining chromosomes. For example, a human cell has 46 chromosomes, which is an integer multiple of the monoploid number, 23. A human with abnormal, but integral, multiples of this full set (e.g. 69 chromosomes) would also be considered euploid. Aneuploidy is the state of not having euploidy. In humans, examples include having a single extra chromosome (such as Down syndrome), or missing a chromosome (such as Turner syndrome). Aneuploidy is not normally considered -ploidy but -somy, such as trisomy or monosomy.
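The arithmetic behind these definitions can be expressed as a simple check: a chromosome count is euploid if it is an integral multiple of the monoploid number (ignoring, as noted above, the sex-chromosome caveat). The helper below is a hypothetical illustration only.

```python
def classify_ploidy(chromosome_count: int, monoploid_number: int) -> str:
    """Return 'euploid' if the count is an integral multiple of the
    monoploid number x, otherwise 'aneuploid' (sex-chromosome caveat ignored)."""
    if chromosome_count % monoploid_number == 0:
        sets = chromosome_count // monoploid_number
        return f"euploid ({sets} complete sets of {monoploid_number})"
    return "aneuploid (not an integral multiple of the monoploid number)"

# Examples using the human monoploid number x = 23, as in the text:
print(classify_ploidy(46, 23))  # normal diploid cell  -> euploid (2 sets)
print(classify_ploidy(69, 23))  # triploid cell        -> euploid (3 sets)
print(classify_ploidy(47, 23))  # e.g. trisomy 21      -> aneuploid
```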
# Haploid and monoploidy
The haploid number is the number of chromosomes in a gamete of an individual. This is distinct from the monoploid number which is the number of unique chromosomes in a single complete set.
In humans, the monoploid number (x) equals the haploid number (the number in a gamete, n), that is, x = n = 23. In some species (especially plants), these numbers differ. Commercial common wheat is an allopolyploid with six sets of chromosomes in each cell, two sets coming originally from each of three different species. The gametes of common wheat are considered haploid since they contain half the genetic information of somatic cells, but are not monoploid as they still contain three complete sets of chromosomes from the original three different species (n = 3x).
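Common (bread) wheat makes the distinction concrete. With a monoploid number of x = 7, its somatic cells carry six sets while its gametes, although haploid in the n sense, still carry three complete sets:

```latex
x = 7, \qquad 2n = 6x = 42 \ \ (\text{somatic cells}), \qquad n = 3x = 21 \ \ (\text{gametes})
```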
Most fungi and a few algae are normally monoploid organisms. Male bees, wasps and ants are also monoploid. For organisms that only ever have one set of chromosomes, the term monoploid is sometimes used interchangeably with haploid, but this is no longer the preferred terminology.
Plants and some algae switch between a haploid and a diploid or polyploid state, with one of the stages emphasized over the other. This is called alternation of generations. Most diploid organisms produce monoploid sex cells that can combine to form a diploid zygote, for example animals are primarily diploid but produce monoploid gametes. During meiosis, germ cell precursors have their number of chromosomes halved by randomly "choosing" one homologue, resulting in haploid germ cells (sperm and ovum).
# Diploid
Diploid (2n) cells have two homologous copies of each chromosome, usually one from the mother and one from the father. The exact number of chromosomes may differ from the 2n number by one or two, yet the cell may still be classified as diploid (albeit with aneuploidy). Nearly all mammals are diploid organisms, although all individuals have some small fraction of cells that display polyploidy.
# Haplodiploidy
A haplodiploid species is one in which one of the sexes has haploid cells and the other has diploid cells. Most commonly, the male is haploid and the female is diploid. In such species, the male develops from unfertilized eggs, a process called arrhenotokous parthenogenesis or simply arrhenotoky, while the female develops from fertilized eggs: the sperm provides a second set of chromosomes when it fertilizes the egg.
Haplodiploidy is found in many species of insects from the order Hymenoptera, particularly ants, bees, and wasps. One consequence of haplodiploidy is that the relatedness of sisters to each other is higher than in diploids; this has been advanced as an explanation for the eusociality common in this order of insects as it increases the power of kin selection. This argument has been disputed on the grounds that haplodiploidy also reduces the relatedness of brothers to sisters, theoretically balancing the above effect.
In some Hymenopteran species, worker insects are also able to produce diploid (and therefore female) fertile offspring, which develop as normal queens. The second set of chromosomes comes not from sperm, but from one of the three polar bodies during anaphase II of meiosis. This process is called thelytokous parthenogenesis or simply thelytoky.
# Haploidisation
Haploidisation (haploidization) is the process of creating a haploid cell (usually from a diploid cell).
A laboratory procedure called haploidisation forces a normal cell to expel half of its chromosomal complement. In mammals this renders this cell chromosomally equal to sperm or egg. This was one of the procedures used by Japanese researchers to produce Kaguya, a fatherless mouse.
Haploidisation sometimes occurs in plants when meiotically reduced cells (usually egg cells) develop by parthenogenesis.
# Polyploidy
Polyploidy is the state where all cells have multiple sets of chromosomes beyond the basic set. These may be from the same species or from closely related species. In the latter case these are known as allopolyploids, amphidiploids or allotetraploids. Allopolyploids can be formed from the hybridisation of two separate species followed by their subsequent chromosome doubling. A good example is the so-called Brassica triangle where three different parent species have hybridized in each pair combination to form three different allopolyploid species. Polyploid plants are probably most often formed from the pairing of meiotically unreduced gametes (Ramsey and Schemske, 2002).
Polyploidy occurs commonly in plants, but rarely in animals. Even in diploid organisms many somatic cells are polyploid due to a process called endoreduplication where duplication of the genome occurs without mitosis (cell division).
# Variable or indefinite ploidy
Depending on growth conditions, prokaryotes such as bacteria may have a chromosome copy number of 1 to 4, and that number is commonly fractional, counting portions of the chromosome partly replicated at a given time. This is because under logarithmic growth conditions the cells are able to replicate their DNA faster than they can divide.
# Mixoploidy
Mixoploidy refers to the presence of two cell lines, one diploid and one polyploid. Though polyploidy in humans is not viable, mixoploidy has been found in live adults and children. There are two types: diploid-triploid mixoploidy, in which some cells have 46 chromosomes and some have 69, and diploid-tetraploid mixoploidy, in which some cells have 46 and some have 92 chromosomes.
# Dihaploidy and Polyhaploidy
Dihaploid and polyhaploid cells are formed by haploidisation of polyploids, i.e., by halving the chromosome constitution.
Dihaploids (which are diploid) are important for selective breeding of tetraploid crop plants (notably potatoes), because selection is faster with diploids than with tetraploids. Tetraploids can be reconstituted from the diploids, for example by somatic fusion.
The term “dihaploid” was coined by Bender (1963) to combine in one word the number of genome copies (diploid) and their origin (haploid). The term is well established in this original sense (e.g., Nogler 1984; Pehu 1996), but it has also been used for doubled monoploids or doubled haploids, which are homozygous and used for genetic research (Sprague et al, 1960).
Dock10
Dock10 (Dedicator of cytokinesis), also known as Zizimin3, is a large (~240 kDa) protein involved in intracellular signalling networks. It is a member of the DOCK-D subfamily of the DOCK family of guanine nucleotide exchange factors, which function as activators of small G proteins.
# Discovery
Dock10 was identified via bioinformatic approaches as one of a family of evolutionarily conserved proteins (the DOCK family) that share significant sequence homology. Dock10 is expressed in peripheral blood leukocytes as well as in the brain, spleen, lung and thymus.
# Structure and Function
Dock10 shares the same domain arrangement as other members of the DOCK-D/Zizimin subfamily as well as a high level of sequence similarity. It contains a DHR2 domain that is involved in G protein binding and a DHR1 domain, which, in some DOCK family proteins, interacts with membrane phospholipids. Like other DOCK-D subfamily proteins, Dock10 contains an N-terminal PH domain, which, in Dock9/Zizimin1, mediates recruitment to the plasma membrane. The DHR2 domain of Dock10 appears to bind to the small G proteins Cdc42, TC10 and TCL, although these interactions are of low affinity. The physiological role of Dock10 is poorly characterised; however, a study in lymphocytes has shown that Dock10 expression is upregulated in B-lymphocytes and Chronic Lymphocytic Leukemia (CLL) cells in response to the cytokine IL-4. This suggests that Dock10 may have a role in B-cell activation and proliferation. Another study identified Dock10 as a protein that was overexpressed in some aggressive papillary thyroid carcinomas.
Dock11
Dock11 (Dedicator of cytokinesis), also known as Zizimin2, is a large (~240 kDa) protein involved in intracellular signalling networks. It is a member of the DOCK-D subfamily of the DOCK family of guanine nucleotide exchange factors (GEFs) which function as activators of small G proteins. Dock11 activates the small G protein Cdc42.
# Discovery
Dock11 was identified as a protein that is highly expressed in germinal center B lymphocytes. Subsequent RT-PCR analysis revealed expression of this protein in the spleen, thymus, bone marrow and in peripheral blood lymphocytes. Dock11 is expressed at lower levels in NIH-3T3 fibroblasts, C2C12 myoblasts and Neuro-2A neuroblastoma cells. Dock11 mRNA has also been detected in the pars intermedia.
# Structure and function
Dock11 exhibits the same domain arrangement as other members of the DOCK-D/Zizimin subfamily and shares the highest level of sequence identity with Dock9. It contains a DHR2 domain which mediates GEF activity and a DHR1 domain which may interact with membrane phospholipids. It also contains an N-terminal PH domain which may be involved in its recruitment to the plasma membrane. Dock11 binds and activates nucleotide-free Cdc42 via its DHR2 domain and has also been reported to mediate positive feedback on active, GTP-bound Cdc42, although this interaction required a small N-terminal region of Dock11 in addition to the DHR2 domain. Cdc42 in turn regulates signaling pathways that control diverse cellular functions including morphology, migration, endocytosis and cell cycle progression. Gene expression studies have suggested that Dock11 may have a role in the development of pituitary and testicular tumours.
Dosing
For Dosing (medicine), see dose.
# Overview
Dosing is the process of administering a measured amount of a medicine or chemical. In industry, dosing typically refers to the feeding of chemicals in small quantities into a process fluid or atmosphere at specific intervals so as to give said substance sufficient time to react and produce results. The dosing process is used in many fields and industries and typically requires the use of low-capacity pumps or gradual sprays.
# Use
The dosing technique is commonly used by engineers in thermal power stations and in other industries that generate steam. In power stations, treatment chemicals are injected or fed into a boiler in small dosages or at low injection rates. Dosing procedures are also used in textile and similar industries where chemical treatments are involved.
Chemical dosing in agriculture is common, and typically consists of using hand-held pressure spray pumps or similar devices to gradually disseminate specific chemicals to maximize their effectiveness.
Aerial dosing of chemicals via spray is also used in agriculture in order to eliminate harmful insects.
Chemical dosing is also used in commercial swimming pools to control pH balance, chlorine level, and other water quality criteria.
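As a minimal sketch of the batch arithmetic behind such dosing, the Python example below estimates how much product raises a pool's free-chlorine concentration by a desired amount. The pool volume, the target increase and the product strength are all assumptions chosen only for illustration; real pool treatment should follow the chemical manufacturer's instructions.

```python
# Minimal batch-dosing arithmetic: how much product raises a pool's
# free-chlorine concentration by a desired amount.
# All figures below are illustrative assumptions, not recommendations.

POOL_VOLUME_M3 = 250.0              # assumed pool volume (250 m^3 = 250,000 L)
TARGET_INCREASE_MG_PER_L = 1.0      # desired rise in free chlorine (mg/L = ppm)
AVAILABLE_CHLORINE_FRACTION = 0.65  # assumed strength of the product used

# Raising 1 m^3 of water by 1 mg/L requires 1 g of active chemical.
active_grams = POOL_VOLUME_M3 * TARGET_INCREASE_MG_PER_L
product_grams = active_grams / AVAILABLE_CHLORINE_FRACTION

print(f"Active chlorine needed: {active_grams:.0f} g")
print(f"Product to dose (at {AVAILABLE_CHLORINE_FRACTION:.0%} strength): "
      f"{product_grams:.0f} g")
```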
Dottle
Dottle is the wet and sour-smelling mass of unburned tobacco found at the bottom of a tobacco pipe. Dottle is produced by a combination of two factors: first, the smoker is a wet smoker, that is, he pushes a considerable amount of saliva down the stem and into the bowl; second, the tobacco being smoked is excessively moist. Puffing too fast can also be a factor, depending on the humidity of the tobacco. The foul liquid that collects at the bottom of a pipe results in gurgling and can be accidentally sucked up. Pushing a pipe cleaner down the stem can remedy this problem to a point.
Dottles are generally considered troublesome because they lessen the time one may spend smoking a bowl. Dottles can also give a sour taste to the smoke as the hot ember approaches them. If dottle is not promptly removed after smoking, the pipe may eventually give a foul taste to any tobacco smoked in it. When this happens, pipe sweetening is required.
Some pipes are designed to specifically lessen or prevent the formation of dottle and excessive moisture. The most common are the calabash pipe and the Dry System pipes made by Peterson.
In the Sherlock Holmes stories, Sherlock had a habit of drying out all the dottles from the day's pipes on a corner of his mantlepiece to be smoked the following morning.
Drosha
Drosha is a Class 2 ribonuclease III enzyme that in humans is encoded by the DROSHA (formerly RNASEN) gene.
# Function
Members of the ribonuclease III superfamily of double-stranded (ds) RNA-specific endoribonucleases participate in diverse RNA maturation and decay pathways in eukaryotic and prokaryotic cells. The RNase III Drosha is the core nuclease that executes the initiation step of microRNA (miRNA) processing in the nucleus.
The microRNAs thus generated are short RNA molecules that regulate a wide variety of other genes by interacting with the RNA-induced silencing complex (RISC) to induce cleavage of complementary messenger RNA (mRNA) as part of the RNA interference pathway. A microRNA molecule is synthesized as a long RNA primary transcript known as a pri-miRNA, which is cleaved by Drosha to produce a characteristic stem-loop structure of about 70 base pairs long, known as a pre-miRNA. Drosha exists as part of a protein complex called the Microprocessor complex, which also contains the double-stranded RNA binding protein DGCR8 (called Pasha in D. melanogaster and C. elegans). DGCR8 is essential for Drosha activity and is capable of binding single-stranded fragments of the pri-miRNA that are required for proper processing.
Human Drosha was cloned in 2000, when it was identified as a nuclear dsRNA ribonuclease involved in the processing of ribosomal RNA precursors. The other two human enzymes that participate in the processing and activity of miRNA are the Dicer and Argonaute proteins.
Both Drosha and DGCR8 are localized to the cell nucleus, where processing of pri-miRNA to pre-miRNA occurs. This latter molecule is then further processed by the RNase Dicer into mature miRNAs in the cell cytoplasm. There is also an isoform of Drosha that lacks the nuclear localization signal, resulting in the generation of c-Drosha. This variant has been shown to localize to the cell cytoplasm rather than the nucleus, but its effects on pri-miRNA processing are as yet unclear.
Both Drosha and Dicer also participate in the DNA damage response.
# Clinical significance
Drosha and other miRNA processing enzymes may be important in cancer prognosis. Both Drosha and Dicer can function as master regulators of miRNA processing and have been observed to be down-regulated in some types of breast cancer. The alternative splicing patterns of Drosha in The Cancer Genome Atlas have also indicated that c-Drosha appears to be enriched in various types of breast cancer, colon cancer, and esophageal cancer. The exact nature of the association between microRNA processing and tumorigenesis remains unclear, although Drosha's function can be examined effectively by siRNA knockdown with independent validation.
Durian
The durian is the fruit of trees of the genus Durio belonging to the Malvaceae, a large family which includes hibiscus, okra, cotton, mallows and linden trees. Widely known and revered in Southeast Asia as the "King of Fruits," the fruit is distinctive for its large size, unique odour, and a formidable thorn-covered husk. The unusual smell of the ripe fruit is very strong and penetrating, even when the husk of the fruit is still intact.
The name durian comes from the Malay word duri (thorn) together with the noun-forming Malay suffix -an, meaning "thorny fruit."
There are 30 recognised Durio species, all native to Southeast Asia and at least nine of which produce edible fruit. Durio zibethinus is the only species available in the international market; other species are sold in their local region.
The fruit can grow up to 30 centimetres (12 in) long and 15 centimetres (6 in) in diameter, and typically weighs one to three kilograms (2 to 7 lb). Its shape ranges from oblong to round, the colour of its husk green to brown and its flesh pale-yellow to red, depending on species. The hard outer husk is covered with sharp, prickly thorns, while the edible custard-like flesh within emits the strong, distinctive odour, which is regarded as either fragrant or overpowering and offensive. The taste of the flesh has been described as nutty and sweet.
# Species
Durian trees are relatively large, growing up to 25–50 metres (80–165 ft) in height, depending on species. The leaves are evergreen, opposite, elliptic to oblong and 10–18 centimetres (4–7 in) long. The flowers are produced in three to thirty clusters together on large branches and the trunk, each flower having a calyx (sepals) and 5 (rarely 4 or 6) petals. Durian trees have one or two flowering and fruiting periods each year, although the timing of these varies depending on species, cultivars and localities. A typical durian tree can bear fruit after four or five years. The durian fruit, which can hang from any branch, matures in about three months after pollination. Among the thirty known species of Durio, so far nine species have been identified to produce edible fruits: D. zibethinus, D. dulcis, D. grandiflorus, D. graveolens, D. kutejensis, D. lowianus, D. macrantha, D. oxleyanus and D. testudinarum. However, there are many species for which the fruit has never been collected or properly examined, and other species with edible fruit may exist.
D. zibethinus is the only species commercially cultivated on a large scale and available outside of its native region. Since this species is open-pollinated, it shows considerable diversity in fruit colour and odour, size of flesh and seed, and tree phenology. In the species name, zibethinus refers to the Indian civet, Viverra zibetha. There is disagreement regarding whether this name, bestowed by Linnaeus, refers to civets being so fond of the durian that the fruit was used as bait to entrap them, or to the durian smelling like the civet.
Durian flowers are large and feathery with copious nectar, and give off a heavy, sour and buttery odour. These features are typical of flowers which are pollinated by certain species of bats while they eat nectar and pollen. According to research conducted in Malaysia during the 1970s, durians were pollinated almost exclusively by cave fruit bats (Eonycteris spelaea). However, more recent research done in 1996 indicated that two species, D. grandiflorus and D. oblongus, were pollinated by spiderhunters (Nectariniidae) and that the other species, D. kutejensis, was pollinated by giant honey bees and birds as well as bats.
## Cultivars
Numerous cultivars (also called "clones") of durian have arisen in southeastern Asia over the centuries. They were formerly grown from seeds selected for superior quality, but are now propagated by layering, marcotting, or, more commonly, by grafting, including bud, veneer, wedge, whip or U-grafting onto seedlings of random rootstocks. Different cultivars can be distinguished to some extent by variations in the fruit shape, such as the shape of the spines. Durian consumers do express preferences for specific cultivars, which fetch higher prices in the market.
Most cultivars have both a common name and a code number starting with "D". For example, some popular clones are Kop (D99), Chanee (D123), Tuan Mek Hijau (D145), Kan Yao (D158), Mon Thong (D159), Kradum Thong, and D24, which has no common name. Each cultivar has a distinct taste and odour. More than 200 cultivars of D. zibethinus exist in Thailand, Chanee being the most preferred rootstock due to its resistance to infection by Phytophthora palmivora. Among all the cultivars in Thailand, though, only four see large-scale commercial cultivation: Chanee, Kradum Thong, Mon Thong, and Kan Yao. There are more than 100 registered cultivars in Malaysia and many superior cultivars have been identified through competitions held at the annual Malaysian Agriculture, Horticulture and Agrotourism Show. In Vietnam, the same process has been done through competitions held by the Southern Fruit Research Institute.
In recent times, Songpol Somsri, a Thai government scientist, crossbred more than ninety varieties of durian to create Chantaburi No. 1, a cultivar without the characteristic odour, which is awaiting final approval from the local Ministry of Agriculture. Another hybrid he created, named Chantaburi No. 3, develops the odour about three days after the fruit is picked, which enables an odourless transport and satisfies consumers who prefer the pungent odour.
# Cultivation and availability
The durian is native to Indonesia, Malaysia and Brunei. There is some debate as to whether the durian is native to the Philippines, or has been introduced. The durian is grown in areas with a similar climate; it is strictly tropical and stops growing when mean daily temperatures drop below 22 °C (71 °F).
The centre of ecological diversity for durians is the island of Borneo, where the fruit of the edible species of Durio including D. zibethinus, D. dulcis, D. graveolens, D. kutejensis, D. oxleyanus and D. testudinarium are sold in local markets. In Brunei, D. zibethinus is not grown because consumers prefer other species such as D. graveolens, D. kutejensis and D. oxyleyanus. These species are commonly distributed in Brunei and together with other species like D. testudinarium and D. dulcis, represent rich genetic diversity.
Although the durian is not native to Thailand, the country is currently one of the major exporters of durians, growing 781,000 tonnes (860,000 S/T) of the world's total harvest of 1,400,000 tonnes (1,540,000 S/T) in 1999, exporting 111,000 tonnes (122,000 S/T). Malaysia and Indonesia followed, both producing about 265,000 tonnes (292,000 S/T) each. Malaysia exported 35,000 tonnes (38,600 S/T) in 1999. In the Philippines, the centre of durian production is the Davao Region. The Kadayawan festival is an annual celebration featuring the durian in Davao City. Other places where durians are grown include Cambodia, Laos, Vietnam, Myanmar, India, Sri Lanka, West Indies, Florida, Hawaii, Papua New Guinea, Polynesian Islands, Madagascar, southern China (Hainan Island), northern Australia, and Pulau Ubin island in Singapore.
Durian was introduced into Australia in the early 1960s and clonal material was first introduced in 1975. Over thirty clones of D. zibethinus and six Durio species have been subsequently introduced into Australia. China is the major importer, purchasing 65,000 tonnes (72,000 S/T) in 1999, followed by Singapore with 40,000 tonnes (44,000 S/T) and Taiwan with 5,000 tonnes (5,500 S/T). In the same year, the United States imported 2,000 tonnes (2,200 S/T), mostly frozen, and the European Community imported 500 tonnes (550 S/T).
The durian is a seasonal fruit, unlike some other non-seasonal tropical fruits such as the papaya which are available throughout the year. In Peninsular Malaysia and Singapore, the season for durians is typically from June to August, which coincides with that of the mangosteen. Prices of durians are relatively high as compared with other fruits. For example, in Singapore, the strong demand for high-quality cultivars such as the D24, Sultan, and Mao Shan Wang has resulted in typical retail prices of between S$8 and S$15 (US$5 to US$10) per kilogram of whole fruit. With an average weight of about 1.5 kilograms, a durian fruit would therefore set the consumer back by about S$12 to S$22 (US$8 to US$15). The edible portion of the fruit, known as the aril (usually referred to as the "flesh" or "pulp"), only accounts for about 15-30% of the mass of the entire fruit. Many consumers in Singapore are nevertheless quite willing to spend up to around S$75 (US$50) in a single purchase of about half a dozen of the favoured fruit to be shared by family members.
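Taken together, these figures imply a much steeper effective price for the portion that is actually eaten; for example, assuming a mid-range whole-fruit price of S$10 per kilogram and an edible fraction of 20% (both within the ranges quoted above):

```latex
\text{price per kg of edible aril} \;\approx\; \frac{\text{S\$10 per kg (whole fruit)}}{0.20} \;=\; \text{S\$50 per kg}
```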
In season, durians can be found in mainstream Japanese supermarkets, while in the West they are sold mainly by Asian markets.
# Flavour and odour
The unusual flavour and odour of the fruit have prompted many people to search for an accurate description, with widely divergent and passionate views expressed, ranging from highly appreciative to deep disgust.
Writing in 1856, the British naturalist Alfred Russel Wallace provides a much-quoted description of the flavour of the durian, famously likening its flesh to a rich custard highly flavoured with almonds.
Wallace cautions, however, that "the smell of the ripe fruit is certainly at first disagreeable"; more recent descriptions by Westerners, such as that of travel and food writer Richard Sterling, can be considerably more graphic.
Other comparisons have been made with the civet, sewage, stale vomit, skunk spray, and used surgical swabs.
The wide range of descriptions for the odour of durian may have a great deal to do with the wide variability of durian odour itself. Durians from different species or clones can have significantly different aromas; for example, red durian (D. dulcis) has a deep caramel flavour with a turpentine odour, while red-fleshed durian (D. graveolens) emits a fragrance of roasted almonds. The degree of ripeness has a great effect on the flavour as well. Three scientific analyses of the composition of durian aroma — from 1972, 1980, and 1995 — each found a different mix of volatile compounds, including esters, ketones and many different organosulfur compounds, with no agreement on which may be primarily responsible for the distinctive odour.
This strong odour can be detected half a mile away by animals, thus luring them. The fruit is extremely appetising to a variety of animals, from squirrels to mouse deer, pigs, orangutans, elephants, and even carnivorous tigers. While some of these animals eat the fruit and discard the seed under the parent plant, others swallow the seed with the fruit and transport it some distance before excreting it, dispersing the seed as a result. The thorny, armoured covering of the fruit may have evolved because it discourages smaller animals, since larger animals are more likely to transport the seeds far from the parent tree.
## Ripeness and selection
According to Larousse Gastronomique, the durian fruit is ready to eat when its husk begins to crack. However, the ideal stage of ripeness varies from region to region in Southeast Asia and also by species. Some species grow so tall that the fruit can only be collected once it has fallen to the ground, whereas most cultivars of D. zibethinus (such as Mon Thong) are nearly always cut from the tree and allowed to ripen while waiting to be sold. Some people in southern Thailand prefer their durians relatively young, when the clusters of fruit within the shell are still crisp in texture and mild in flavour. In northern Thailand, the preference is for the fruit to be as soft and pungent in aroma as possible. In Malaysia and Singapore, most consumers also prefer the fruit to be quite ripe and may even risk allowing the fruit to continue ripening after its husk has already cracked open on its own. In this state, the flesh becomes richly creamy and slightly alcoholic, the aroma pronounced and the flavour highly complex.
Differing preferences regarding ripeness among consumers make it hard to issue general statements about choosing a "good" durian. A durian that falls off the tree continues to ripen for two to four days, but after five or six days most would consider it overripe and unpalatable. The usual advice for a durian consumer choosing a whole fruit in the market is to examine the quality of the stem or stalk, which loses moisture as it ages: a big, solid stem is a sign of freshness. Reportedly, unscrupulous merchants wrap, paint, or remove the stalks altogether. Another frequent piece of advice is to shake the fruit and listen for the sound of the seeds moving within, indicating that the durian is very ripe and the pulp has dried out somewhat.
# History
The durian has been known and consumed in southeastern Asia since prehistoric times, but has been known to the Western world for only about 600 years. The earliest known European reference to the durian is the record of Nicolo Conti, who travelled to southeastern Asia in the 15th century. Garcia de Orta described durians in Colóquios dos Simples e Drogas da India, published in 1563. In 1741, Herbarium Amboinense by the German botanist Georg Eberhard Rumphius was published, providing the most detailed and accurate account of durians for over a century. The genus Durio has a complex taxonomy that has seen the subtraction and addition of many species since it was created by Rumphius. During the early stages of its taxonomical study, there was some confusion between the durian and the soursop (Annona muricata), as both species have thorny green fruit. It is also interesting to note that the Malay name for the soursop is durian Belanda, meaning Dutch durian. In the 18th century, Weinmann considered the durian to belong to the Castaneae, as its fruit was similar to the horse chestnut.
D. zibethinus was introduced into Ceylon by the Portuguese in the 16th century and was reintroduced many times later. It has been planted in the Americas but remains confined to botanical gardens; the first seedlings were sent from Kew Botanic Gardens, England, to St. Aromen of Dominica in 1884. The durian has been cultivated for centuries at the village level, probably since the late 18th century, and commercially in south-eastern Asia since the mid 20th century. In his book My Tropic Isle, E. J. Banfield tells how, in the early 20th century, a Singapore friend sent him a durian seed which he planted and cared for on his tropical island off the north coast of Queensland.
In 1949, the British botanist E. J. H. Corner published The Durian Theory or the Origin of the Modern Tree. His idea was that endozoochory (the enticement of animals to transport seeds in their stomachs) arose before any other method of seed dispersal, and that primitive ancestors of Durio species were the earliest practitioners of that strategy, with the red durian fruit in particular exemplifying the primitive fruit of flowering plants.
Since the early 1990s, the domestic and international demand for durian in the Association of South-East Asian Nations (ASEAN) region has increased dramatically, partly due to the increasing affluence in Asia.
# Uses
## Culinary
Durian fruit is used to flavour a wide variety of sweet edibles such as traditional Malay candy, ice kachang, dodol, rose biscuits, and, with a touch of modern innovation, ice cream, milkshakes, mooncakes, Yule logs and cappuccino. Pulut Durian is glutinous rice steamed with coconut milk and served with ripened durian. In Sabah, red durian is fried with onions and chilli and served as a side dish. Red-fleshed durian is traditionally added to sajur, an Indonesian soup made from freshwater fish. Tempoyak refers to fermented durian, usually made from lower-quality durian that is unsuitable for direct consumption. Tempoyak can be eaten either cooked or uncooked, is normally eaten with rice, and can also be used for making curry. Sambal Tempoyak is a Sumatran dish made from the fermented durian fruit, coconut milk, and a collection of spicy ingredients known as sambal.
In Thailand, blocks of durian paste are sold in the markets, though much of the paste is adulterated with pumpkin. Unripe durians may be cooked as a vegetable, except in the Philippines, where all uses are sweet rather than savoury. Malaysians make both sugared and salted preserves from durian. When durian is minced with salt, onions and vinegar, it is called boder. The durian seeds, which are the size of chestnuts, can be eaten after being boiled, roasted or fried in coconut oil, and have a texture similar to taro or yam, but stickier. In Java, the seeds are sliced thin and cooked with sugar as a confectionery. Uncooked durian seeds are toxic due to cyclopropene fatty acids and should not be ingested. Young leaves and shoots of the durian are occasionally cooked as greens. Sometimes the ash of the burned rind is added to special cakes. The petals of durian flowers are eaten in the Batak provinces of Indonesia, while in the Moluccas islands the husk of the durian fruit is used as fuel to smoke fish. The nectar and pollen of the durian flower that honeybees collect is an important honey source, but the characteristics of the honey are unknown.
## Nutritional and medicinal
Durian fruit contains a high amount of sugar, vitamin C, potassium, and the serotoninergic amino acid tryptophan, and is a good source of carbohydrates, proteins, and fats. It is recommended as a good source of raw fats by several raw-food advocates, while others classify it as a high-glycemic or high-fat food and recommend minimising its consumption.
In Malaysia, a decoction of the leaves and roots used to be prescribed as an antipyretic, and the leaf juice is applied to the head of a fever patient. The most complete description of the medicinal use of the durian as a remedy for fevers is a Malay prescription collected by Burkill and Haniff in 1930. It instructs the reader to boil the roots of Hibiscus rosa-sinensis with the roots of Durio zibethinus, Nephelium longan, Nephelium mutabile and Artocarpus integrifolia, and drink the decoction or use it as a poultice.
In the 1920s, Durian Fruit Products, Inc., of New York City launched a product called "Dur-India" as a health food supplement, selling at US$9 for a dozen bottles, each containing 63 tablets. The tablets allegedly contained durian, a species of the genus Allium from India, and vitamin E. The company promoted the supplement by claiming that it provided "more concentrated healthful energy in food form than any other product the world affords".
Discover Magazine reported an incident in which a woman ate a durian and became critically ill from a potassium overdose.
# Customs and beliefs
Southeast Asian folk beliefs, as well as traditional Chinese medicine, consider the durian fruit to have warming properties liable to cause excessive sweating. The traditional method to counteract this is to pour water into the empty shell of the fruit after the pulp has been consumed and drink it. An alternative is to eat the durian together with mangosteen, which is considered to have cooling properties. People with high blood pressure and pregnant women are traditionally advised not to consume durian.
Another common local belief is that the durian is harmful when eaten along with coffee or alcoholic beverages. The latter belief can be traced back at least to the 18th century, when Rumphius declared that one should not drink alcohol after eating durians as it would cause indigestion and bad breath. J. D. Gimlette stated in his Malay Poisons and Charm Cures in 1929 that it was said that the durian fruit must not be eaten with brandy. In 1981, J. R. Croft wrote in his Bombacaceae: In Handbooks of the Flora of Papua New Guinea that a feeling of morbidity often follows the consumption of alcohol too soon after eating durian. Several medical investigations into the validity of this belief have been conducted, with varying conclusions.
The Javanese believe durian to have aphrodisiac qualities, and impose a strict set of rules on what may or may not be consumed with the durian or shortly after. The warnings against the supposed lecherous quality of this fruit soon spread to the West, as the Swedenborgian philosopher Herman Vetterling commented on so-called "erotic properties" of the durian in the early 20th century.
A durian falling on a person's head can cause serious injuries because it is heavy and armed with sharp thorns, and may fall from a significant height, so wearing a hardhat is recommended when collecting the fruit. Alfred Russel Wallace writes that death rarely ensues from it, because the copious effusion of blood prevents the inflammation which might otherwise take place. A common saying is that a durian has eyes and can see where it is falling, because the fruit allegedly never falls during daylight hours when people may be hurt. A saying in Indonesian, ketiban durian runtuh, which translates to "getting a fallen durian", means receiving unexpected luck or good fortune.
A naturally spineless variety of durian growing wild in Davao, Philippines was discovered in the 1960s, and fruits borne on trees grown from seeds of this fruit were also spineless. Sometimes spineless durians are produced artificially by scraping scales off the immature fruits, since the bases of the scales develop into the spines as the fruits mature.
# Cultural influence
The durian is commonly known as the "king of the fruits", a label that can be attributed to its formidable look and overpowering odour. Due to its unusual characteristics, the durian has been referenced or parodied in various cultural media. To foreigners the durian is often perceived as a symbol of revulsion, as can be seen in Dodoria, one of the villains in the Japanese anime Dragon Ball Z. Dodoria, whose name is derived from the durian, was given an unattractive appearance and a sinister role which required slaughtering numerous characters. In the Castlevania videogame series, "Rotten Durian" is an item that removes 500 HP from the character if consumed; its in-game description reads "Has introduced you to a whole new world of unpleasant odors." The role-playing game Tales of Destiny includes the durian (spelt Dorian by translators) as part of its edible food list. While fairly expensive and filling, the fruit, when consumed, also reduces random encounters by repelling monsters, presumably with its smell.
In its native southeastern Asia, however, the durian is an everyday food and portrayed in the local media in accordance with the different cultural perception it has in the region. The durian symbolised the subjective nature of ugliness and beauty in Hong Kong director Fruit Chan's 2000 film Durian Durian (榴槤飄飄, Liulian piao piao), and was a nickname for the reckless but lovable protagonist of the eponymous Singaporean TV comedy Durian King played by Adrian Pang. Likewise, the oddly shaped Esplanade building in Singapore is often called "The Durian" by locals, although its design was not based on the fruit.
One of the names Thailand contributed to the list of storm names for Western North Pacific tropical cyclones was 'Durian', which was retired after the second storm of this name in 2006. Being a fruit much loved by a variety of wild beasts, the durian sometimes signifies the long-forgotten animalistic aspect of humans, as in the legend of Orang Mawas, the Malaysian version of Bigfoot, and Orang Pendek, its Sumatran version, both of which have been claimed to feast on durians.
# Notes
- ↑ Heaton, Donald D. (2006). A Consumers Guide on World Fruit. BookSurge Publishing. pp. 54–56. ISBN 1419639552.
- ↑ Oxford English Dictionary. Oxford University Press. 1897. Via durion, the Malay name for the plant.
- ↑ Huxley, A. (Ed.) (1992). New RHS Dictionary of Gardening. Macmillan. ISBN 1-56159-001-0.
- ↑ O'Gara, E., Guest, D. I. and Hassan, N. M. (2004). "Botany and Production of Durian (Durio zibethinus) in Southeast Asia" (PDF). Australian Centre for International Agricultural Research (ACIAR). Retrieved 2006-03-05.
- ↑ Brown, Michael J. (1997). Durio — A Bibliographic Review (PDF). International Plant Genetic Resources Institute (IPGRI). ISBN 92-9043-318-3. Retrieved 2007-03-14.
- ↑ Morton, J. F. (1987). Fruits of Warm Climates. Florida Flair Books. ISBN 0-9610184-1-0.
- ↑ Brown, Michael J. (1997). Durio — A Bibliographic Review (PDF). International Plant Genetic Resources Institute (IPGRI). p. 2; see also pp. 5–6 regarding whether Linnaeus or Murray is the correct authority for the binomial name. ISBN 92-9043-318-3. Retrieved 2007-03-14.
- ↑ Whitten, Tony (2001). The Ecology of Sumatra. Periplus. p. 329. ISBN 962-593-074-4.
- ↑ Yumoto, Takakazu (2000). "Bird-pollination of Three Durio Species (Bombacaceae) in a Tropical Rainforest in Sarawak, Malaysia". American Journal of Botany. 87 (8): 1181–1188.
- ↑ "Comprehensive List of Durian Clones Registered by the Agriculture Department (of Malaysia)". Durian OnLine. Retrieved 2006-03-05.
- ↑ Fuller, Thomas (2007-04-08). "Fans Sour on Sweeter Version of Asia's Smelliest Fruit". New York Times. Retrieved 2007-04-08.
- ↑ Osman, M. B., Mohamed, Z. A., Idris, S. and Aman, R. (1995). "Tropical fruit production and genetic resources in Southeast Asia: Identifying the priority fruit species". International Plant Genetic Resources Institute (IPGRI). ISBN 92-9043-249-7. Retrieved 2007-03-14.
- ↑ "Committee on Commodity Problems — VI. Overview of Minor Tropical Fruits". FAO. December 2001. Retrieved 2006-03-04.
- ↑ Watson, B. J. (1983). "Durian". Fact Sheet No. 6. Rare Fruits Council of Australia.
- ↑ "ST Foodies Club - Durian King". The Straits Times. 2006. Retrieved 2007-07-25.
- ↑ Wallace, Alfred Russel (1856). "On the Bamboo and Durian of Borneo". Retrieved 2007-03-12.
- ↑ Winokur, Jon (Ed.) (2003). The Traveling Curmudgeon: Irreverent Notes, Quotes, and Anecdotes on Dismal Destinations, Excess Baggage, the Full Upright Position, and Other Reasons Not to Go There. Sasquatch Books. p. 102. ISBN 1-57061-389-3.
- ↑ Davidson, Alan (1999). The Oxford Companion to Food. Oxford University Press. p. 263. ISBN 0-19-211579-0.
- ↑ O'Gara, E., Guest, D. I. and Hassan, N. M. (2004). "Occurrence, Distribution and Utilisation of Durian Germplasm" (PDF). Australian Centre for International Agricultural Research (ACIAR). Retrieved 2007-03-13.
- ↑ Marinelli, Janet (Ed.) (1998). Brooklyn Botanic Garden Gardener's Desk Reference. Henry Holt and Co. p. 691. ISBN 0-8050-5095-7.
- ↑ McGee, Harold (2004). On Food and Cooking (Revised Edition). Scribner. p. 379. ISBN 0-684-80001-2.
- ↑ Montagne, Prosper (Ed.) (2001). Larousse Gastronomique. Clarkson Potter. p. 439. ISBN 0609609718.
- ↑ "Durian & Mangosteens". Prositech.com. Retrieved 2006-07-01.
- ↑ Davidson, Alan (1999). The Oxford Companion to Food. Oxford University Press. p. 737. ISBN 0-19-211579-0.
- ↑ "Agroforestry Tree Database - Durio zibethinus". International Center for Research in Agroforestry. Retrieved 2007-03-12.
- ↑ Banfield, E. J. (1911). My Tropic Isle. T. Fisher Unwin. Retrieved 2007-03-14.
- ↑ "Traditional Cuisine". Sabah Tourism Promotion Corporation. Retrieved 2007-03-10.
- ↑ "Durian Recipe Gallery". Durian Online. Retrieved 2006-03-03.
- ↑ "Question No. 18085: Is it true that durian seeds are poisonous?". Singapore Science Centre. 2006. Retrieved 2006-03-20.
- ↑ Crane, E. (Ed.) (1976). Honey: A Comprehensive Survey. Bee Research Association.
- ↑ Wolfe, David (2002). Eating For Beauty. Maul Brothers Publishing. ISBN 0965353370.
- ↑ Boutenko, Victoria (2001). 12 Steps to Raw Foods: How to End Your Addiction to Cooked Food. Raw Family. p. 6. ISBN 0970481934.
- ↑ Mars, Brigitte (2004). Rawsome!: Maximizing Health, Energy, and Culinary Delight With the Raw Foods Diet. Basic Health Publications. p. 103. ISBN 1591200601.
- ↑ Cousens, Gabriel (2003). Rainbow Green Live-Food Cuisine. North Atlantic Books. p. 34. ISBN 1556434650.
- ↑ Klein, David (2005). "Vegan Healing Diet Guidelines". Self Healing Colitis & Crohn's. Living Nutrition Publications. ISBN 0971752613.
- ↑ Burkill, I. H. and Haniff, M. (1930). "Malay village medicine, prescriptions collected". Gardens Bulletin Straits Settlements (6): 176–177.
- ↑ Dajer, Tony (2007-03-13). "Vital Signs: Potassium Overload". Discover Magazine. Retrieved 2007-06-19.
- ↑ Huang, Kee C. (1998). The Pharmacology of Chinese Herbs (Second Edition). CRC Press. p. 2. ISBN 0849316650.
- ↑ McElroy, Anne and Townsend, Patricia K. (2003). Medical Anthropology in Ecological Perspective. Westview Press. p. 253. ISBN 0813338212.
- ↑ Vetterling, Herman (2003; first printed in 1923). Illuminate of Gorlitz or Jakob Bohme's Life and Philosophy, Part 3. Kessinger Publishing. p. 1380. ISBN 0-7661-4788-6.
- ↑ Solomon, Charmaine (1998). "Encyclopedia of Asian Food". Periplus. Retrieved 2007-07-26.
- ↑ Echols, John M. (1989). An Indonesian-English Dictionary. Cornell University Press. p. 292. ISBN 0801421276.
- ↑ The mangosteen, called the "queen of fruits", is petite and mild in comparison. The mangosteen season coincides with that of the durian and is seen as a complement, which is probably how the mangosteen received the complementary title.
- ↑ "ドラゴンボール登場人物名前由来" (in Japanese). ドラゴンボールマニア (Dragon Ball Mania). Retrieved 2007-02-11.
- ↑ "Uniquely Singapore - July 2006 Issue". Singapore Tourism Board. 2006. Retrieved 2007-07-31.
- ↑ "Tropical Cyclone Names". Japan Meteorological Agency. Retrieved 2007-03-10.
- ↑ Lian, Hah Foong (2000-01-02). "Village abuzz over sighting of 'mawas'". Star Publications, Malaysia. Retrieved 2007-03-09.
- ↑ "Do 'orang pendek' really exist?". Jambiexplorer.com. Retrieved 2006-03-19.
# External links
- Germplasm Resources Information Network: Durio
- Brooklyn Botanic Garden: Durian—The real Forbidden Fruit
- Durio zibethinus (Bombacaceae)
- Durian Palace
- Philippines Department of Agriculture - Durian Farming tips
- NYT article on Odorless Durian
- Video: How to Open a Durian
| https://www.wikidoc.org/index.php/Durian | |
492c63750ed5d4fb070b778ef307ac7038e0fb10 | wikidoc | Dynepo | Dynepo
Dynepo is a form of pharmaceutical erythropoietin (EPO) under development as a pharmaceutical product by Shire Pharmaceuticals. The first development steps were performed by HMR and Aventis. Aventis obtained the license in Europe in 2002. The company expects to launch the product in Europe in 2006, although patents held by the American biotechnology company Amgen, Inc. may preclude its sale in the United States.
EPO is a natural human hormone that stimulates formation of red blood cells. Pharmaceutical EPO, made via recombinant DNA technology, is used to treat anemia, but it has also been used by doping athletes to improve their aerobic performance and stamina.
Unlike existing forms of pharmaceutical EPO manufactured in cultured animal cells, Dynepo is to be made in cultured human cells. It is therefore expected to have an authentic human form of sialic acid and other oligosaccharide residues. This characteristic may make it a longer-acting product than existing brands, but clinical data have not yet been made public. It should also make Dynepo undetectable in the existing urine test for EPO used to detect doping by athletes. However, on September 28, 2007, the French sports newspaper L'Equipe reported that the French anti-doping laboratory LNDD had detected Dynepo in the urine of Michael Rasmussen during the 2007 Tour de France. Rasmussen was sacked by his own team while leading the Tour, because of irregularities in his reporting of his whereabouts during the lead-up to the tour. The story also remarked that Dynepo was found in the urine of several other riders, yet criteria for a positive test for Dynepo are not defined. | Dynepo
Dynepo is a form of pharmaceutical erythropoietin (EPO) under development as a pharmaceutical product by Shire Pharmaceuticals. The first development steps were performed by HMR and Aventis. Aventis obtained the license in Europe in 2002. The company expects to launch the product in Europe in 2006, although patents held by the American biotechnology company Amgen, Inc. may preclude its sale in the United States.
EPO is a natural human hormone that stimulates formation of red blood cells. Pharmaceutical EPO, made via recombinant DNA technology, is used to treat anemia, but it has also been used by doping athletes to improve their aerobic performance and stamina.
Unlike existing forms of pharmaceutical EPO manufactured in cultured animal cells, Dynepo is to be made in cultured human cells. It is therefore expected to have an authentic human form of sialic acid and other oligosaccharide residues. This characteristic may make it a longer-acting product than existing brands, but clinical data have not yet been made public. It should also make Dynepo undetectable in the existing urine test for EPO used to detect doping by athletes. However, on September 28, 2007, the French sports newspaper L'Equipe reported that the French anti-doping laboratory LNDD had detected Dynepo in the urine of Michael Rasmussen during the 2007 Tour de France. Rasmussen was sacked by his own team while leading the Tour, because of irregularities in his reporting of his whereabouts during the lead-up to the tour. The story also remarked that Dynepo was found in the urine of several other riders, yet criteria for a positive test for Dynepo are not defined.
| https://www.wikidoc.org/index.php/Dynepo |
f82279557ac25a939191a9af45f53c2a6d0b601a | wikidoc | EEF1A2 | EEF1A2
Elongation factor 1-alpha 2 is a protein that in humans is encoded by the EEF1A2 gene.
# Function
This gene encodes an isoform of the alpha subunit of the elongation factor-1 complex, which is responsible for the enzymatic delivery of aminoacyl tRNAs to the ribosome. This isoform (alpha 2) is expressed in brain, heart and skeletal muscle, and the other isoform (alpha 1) is expressed in brain, placenta, lung, liver, kidney, and pancreas.
# Clinical significance
This gene may be critical in the development of ovarian cancer.
# Regulation
EEF1A2 is a direct target of miRNA-663 and miRNA-744. | EEF1A2
Elongation factor 1-alpha 2 is a protein that in humans is encoded by the EEF1A2 gene.[1][2][3]
# Function
This gene encodes an isoform of the alpha subunit of the elongation factor-1 complex, which is responsible for the enzymatic delivery of aminoacyl tRNAs to the ribosome. This isoform (alpha 2) is expressed in brain, heart and skeletal muscle, and the other isoform (alpha 1) is expressed in brain, placenta, lung, liver, kidney, and pancreas.
# Clinical significance
This gene may be critical in the development of ovarian cancer.[3]
# Regulation
EEF1A2 is a direct target of miRNA-663 and miRNA-744.[4] | https://www.wikidoc.org/index.php/EEF1A2 | |
e286c58c7f83c7b74f8aff54c3459d0867d7fb59 | wikidoc | EFCBP2 | EFCBP2
N-terminal EF-hand calcium-binding protein 2 is a protein that in humans is encoded by the NECAB2 gene.
# Model organisms
Model organisms have been used in the study of NECAB2 function. A conditional knockout mouse line, called Necab2tm1a(KOMP)Wtsi was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists — at the Wellcome Trust Sanger Institute.
Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty five tests were carried out on mutant mice but no significant abnormalities were observed. | EFCBP2
N-terminal EF-hand calcium-binding protein 2 is a protein that in humans is encoded by the NECAB2 gene.[1][2]
# Model organisms
Model organisms have been used in the study of NECAB2 function. A conditional knockout mouse line, called Necab2tm1a(KOMP)Wtsi[7][8] was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists — at the Wellcome Trust Sanger Institute.[9][10][11]
Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion.[5][12] Twenty five tests were carried out on mutant mice but no significant abnormalities were observed.[5] | https://www.wikidoc.org/index.php/EFCBP2 | |
afbab0b26fbfe1ea2f4c9e6673778ef2c2bd563b | wikidoc | EFEMP1 | EFEMP1
EGF-containing fibulin-like extracellular matrix protein 1 is a protein that in humans is encoded by the EFEMP1 gene.
# Gene
This gene encodes a member of the fibulin family of extracellular matrix glycoproteins. Like all members of this family, the encoded protein contains tandemly repeated epidermal growth factor-like repeats followed by a C-terminus fibulin-type domain. This gene is upregulated in malignant gliomas and may play a role in the aggressive nature of these tumors. Mutations in this gene are associated with Doyne honeycomb retinal dystrophy. Alternatively spliced transcript variants that encode the same protein have been described. This gene spans approximately 18 kb of genomic DNA and consists of 12 exons. Alternative splice patterns in the 5' UTR result in three transcript variants encoding the same extracellular matrix protein.
# Clinical significance
Mutations in this gene are associated with Doyne honeycomb retinal dystrophy.
EFEMP1/Fibulin-3 has recently been reported as a potential biomarker to facilitate the identification of patients with pleural mesothelioma.
# Interactions
EFEMP1 has been shown to interact with ARAF. | EFEMP1
EGF-containing fibulin-like extracellular matrix protein 1 is a protein that in humans is encoded by the EFEMP1 gene.[1][2][3]
# Gene
This gene encodes a member of the fibulin family of extracellular matrix glycoproteins. Like all members of this family, the encoded protein contains tandemly repeated epidermal growth factor-like repeats followed by a C-terminus fibulin-type domain. This gene is upregulated in malignant gliomas and may play a role in the aggressive nature of these tumors. Mutations in this gene are associated with Doyne honeycomb retinal dystrophy. Alternatively spliced transcript variants that encode the same protein have been described.[provided by RefSeq, Nov 2009]. This gene spans approximately 18 kb of genomic DNA and consists of 12 exons. Alternative splice patterns in the 5' UTR result in three transcript variants encoding the same extracellular matrix protein.[3]
# Clinical significance
Mutations in this gene are associated with Doyne honeycomb retinal dystrophy.[3]
EFEMP1/Fibulin-3 has recently been reported as a potential biomarker to facilitate the identification of patients with pleural mesothelioma.[4]
# Interactions
EFEMP1 has been shown to interact with ARAF.[5] | https://www.wikidoc.org/index.php/EFEMP1 | |
92cc8686fb991521593ef9aaed6569c0e4e3a189 | wikidoc | EIF1AX | EIF1AX
Eukaryotic translation initiation factor 1A, X-chromosomal (eIF1A) is a protein that in humans is encoded by the EIF1AX gene. This gene encodes an essential eukaryotic translation initiation factor. The protein is a component of the 43S pre-initiation complex (PIC), which mediates the recruitment of the small 40S ribosomal subunit to the 5' cap of messenger RNAs.
# Function
eIF1A is a small protein (17 kDa in budding yeast) and a component of the 43S preinitiation complexes (PIC). eIF1A binds near the ribosomal A-site, in a manner similar to the functionally related bacterial counterpart IF1.
# Clinical significance
Mutations in this gene have been recurrently associated with cases of uveal melanoma with disomy 3.
# Interactions
EIF1AX has been shown to interact with IPO13. | EIF1AX
Eukaryotic translation initiation factor 1A, X-chromosomal (eIF1A) is a protein that in humans is encoded by the EIF1AX gene.[1][2][3] This gene encodes an essential eukaryotic translation initiation factor. The protein is a component of the 43S pre-initiation complex (PIC), which mediates the recruitment of the small 40S ribosomal subunit to the 5' cap of messenger RNAs.[3]
# Function
eIF1A is a small protein (17 kDa in budding yeast) and a component of the 43S preinitiation complexes (PIC). eIF1A binds near the ribosomal A-site, in a manner similar to the functionally related bacterial counterpart IF1.[4]
# Clinical significance
Mutations in this gene have been recurrently associated with cases of uveal melanoma with disomy 3.[5]
# Interactions
EIF1AX has been shown to interact with IPO13.[6] | https://www.wikidoc.org/index.php/EIF1AX | |
7c4b196cd9a7abd47f54222a060518bb7af34f73 | wikidoc | EIF2C1 | EIF2C1
Protein argonaute-1 is a protein that in humans is encoded by the EIF2C1 gene.
# Function
This gene encodes a member of the Argonaute family of proteins which play a role in RNA interference. The encoded protein is highly basic, and contains a PAZ domain and a PIWI domain. It may interact with dicer1 and play a role in short-interfering-RNA-mediated gene silencing. This gene is located on chromosome 1 in a cluster of closely related family members including argonaute 3, and argonaute 4.
# Model organisms
Model organisms have been used in the study of EIF2C1 function. A conditional knockout mouse line, called Eif2c1tm1a(KOMP)Wtsi was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists.
Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty two tests were carried out on mutant mice and two significant abnormalities were observed: homozygous mutants were subviable and females also had decreased circulating aspartate transaminase levels. | EIF2C1
Protein argonaute-1 is a protein that in humans is encoded by the EIF2C1 gene.[1][2][3]
# Function
This gene encodes a member of the Argonaute family of proteins which play a role in RNA interference. The encoded protein is highly basic, and contains a PAZ domain and a PIWI domain. It may interact with dicer1 and play a role in short-interfering-RNA-mediated gene silencing. This gene is located on chromosome 1 in a cluster of closely related family members including argonaute 3, and argonaute 4.[3]
# Model organisms
Model organisms have been used in the study of EIF2C1 function. A conditional knockout mouse line, called Eif2c1tm1a(KOMP)Wtsi[8][9] was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists.[10][11][12]
Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion.[6][13] Twenty two tests were carried out on mutant mice and two significant abnormalities were observed: homozygous mutants were subviable and females also had decreased circulating aspartate transaminase levels. [6] | https://www.wikidoc.org/index.php/EIF2C1 | |
cf7df991fcc52ffda957695a616f7e7bfa12bdea | wikidoc | EIF2S1 | EIF2S1
Eukaryotic translation initiation factor 2 subunit 1 (eIF2α) is a protein that in humans is encoded by the EIF2S1 gene.
# Function
The protein encoded by this gene is the alpha (α) subunit of the translation initiation factor eIF2 complex which catalyzes an early regulated step of protein synthesis initiation, promoting the binding of the initiator tRNA (Met-tRNAiMet) to 40S ribosomal subunits. Binding occurs as a ternary complex of methionyl-tRNA, eIF2, and GTP. eIF2 is composed of 3 nonidentical subunits, alpha (α, 36 kD, this article), beta (β, 38 kD), and gamma (γ, 52 kD). The rate of formation of the ternary complex is modulated by the phosphorylation state of eIF2α.
# Clinical significance
After reperfusion following brain ischemia, there is inhibition of neuron protein synthesis due to phosphorylation of eIF2α. There is colocalization between phosphorylated eIF2α and cytosolic cytochrome c, which is released from mitochondria in apoptosis. Phosphorylated eIF2α appeared before cytochrome c release, suggesting that phosphorylation of eIF2α triggers cytochrome c release during apoptotic cell death.
Mice heterozygous for the S51A mutation become obese and diabetic on a high-fat diet. Glucose intolerance resulted from reduced insulin secretion, defective transport of proinsulin, and a reduced number of insulin granules in beta cells. Hence proper functioning of eIF2α appears essential for preventing diet-induced type II diabetes.
# Dephosphorylation inhibitors
Salubrinal is a selective inhibitor of enzymes that dephosphorylate eIF2α. Salubrinal also blocks eIF2α dephosphorylation by a herpes simplex virus protein and inhibits viral replication. eIF2α phosphorylation is cytoprotective during endoplasmic reticulum stress. | EIF2S1
Eukaryotic translation initiation factor 2 subunit 1 (eIF2α) is a protein that in humans is encoded by the EIF2S1 gene.[1][2]
# Function
The protein encoded by this gene is the alpha (α) subunit of the translation initiation factor eIF2 complex which catalyzes an early regulated step of protein synthesis initiation, promoting the binding of the initiator tRNA (Met-tRNAiMet) to 40S ribosomal subunits. Binding occurs as a ternary complex of methionyl-tRNA, eIF2, and GTP. eIF2 is composed of 3 nonidentical subunits, alpha (α, 36 kD, this article), beta (β, 38 kD), and gamma (γ, 52 kD). The rate of formation of the ternary complex is modulated by the phosphorylation state of eIF2α.[2]
# Clinical significance
After reperfusion following brain ischemia, there is inhibition of neuron protein synthesis due to phosphorylation of eIF2α. There is colocalization between phosphorylated eIF2α and cytosolic cytochrome c, which is released from mitochondria in apoptosis. Phosphorylated eIF2α appeared before cytochrome c release, suggesting that phosphorylation of eIF2α triggers cytochrome c release during apoptotic cell death.[3]
Mice heterozygous for the S51A mutation become obese and diabetic on a high-fat diet. Glucose intolerance resulted from reduced insulin secretion, defective transport of proinsulin, and a reduced number of insulin granules in beta cells. Hence proper functioning of eIF2α appears essential for preventing diet-induced type II diabetes.[4]
# Dephosphorylation inhibitors
Salubrinal is a selective inhibitor of enzymes that dephosphorylate eIF2α.[5] Salubrinal also blocks eIF2α dephosphorylation by a herpes simplex virus protein and inhibits viral replication. eIF2α phosphorylation is cytoprotective during endoplasmic reticulum stress.[6][7] | https://www.wikidoc.org/index.php/EIF2S1 | |
fa4198a51210a2212c493285dcf43d74b37d1c0d | wikidoc | EIF2S2 | EIF2S2
Eukaryotic translation initiation factor 2 subunit 2 (eIF2β) is a protein that in humans is encoded by the EIF2S2 gene.
# Function
Eukaryotic translation initiation factor 2 (eIF2) functions in the early steps of protein synthesis by forming a ternary complex with GTP and initiator tRNA and binding to a 40S ribosomal subunit. eIF2 is composed of three subunits, alpha (α), beta (β, this article), and gamma (γ), with the protein encoded by this gene representing the beta subunit. The beta subunit catalyzes the exchange of GDP for GTP, which recycles the eIF2 complex for another round of initiation.
# Regulation
Both eIF2α and eIF2β expression is regulated by the NRF1 transcription factor. | EIF2S2
Eukaryotic translation initiation factor 2 subunit 2 (eIF2β) is a protein that in humans is encoded by the EIF2S2 gene.[1][2]
# Function
Eukaryotic translation initiation factor 2 (eIF2) functions in the early steps of protein synthesis by forming a ternary complex with GTP and initiator tRNA and binding to a 40S ribosomal subunit. eIF2 is composed of three subunits, alpha (α), beta (β, this article), and gamma (γ), with the protein encoded by this gene representing the beta subunit. The beta subunit catalyzes the exchange of GDP for GTP, which recycles the eIF2 complex for another round of initiation.[2]
# Regulation
Both eIF2α and eIF2β expression is regulated by the NRF1 transcription factor.[3] | https://www.wikidoc.org/index.php/EIF2S2 | |
6164bfb8912ae6e8e887a241dfb032eca91d4d73 | wikidoc | EIF4E3 | EIF4E3
Eukaryotic translation initiation factor 4E family member 3 is a protein that in humans is encoded by the EIF4E3 gene.
EIF4E3 belongs to the EIF4E family of translational initiation factors that interact with the 5-prime cap structure of mRNA and recruit mRNA to the ribosome.
# Model organisms
Model organisms have been used in the study of EIF4E3 function. A conditional knockout mouse line, called Eif4e3tm1a(KOMP)Wtsi was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists — at the Wellcome Trust Sanger Institute.
Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty four tests were carried out on mutant mice but no significant abnormalities were observed. | EIF4E3
Eukaryotic translation initiation factor 4E family member 3 is a protein that in humans is encoded by the EIF4E3 gene.[1]
EIF4E3 belongs to the EIF4E family of translational initiation factors that interact with the 5-prime cap structure of mRNA and recruit mRNA to the ribosome.[1][2]
# Model organisms
Model organisms have been used in the study of EIF4E3 function. A conditional knockout mouse line, called Eif4e3tm1a(KOMP)Wtsi[7][8] was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists — at the Wellcome Trust Sanger Institute.[9][10][11]
Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion.[5][12] Twenty four tests were carried out on mutant mice but no significant abnormalities were observed.[5] | https://www.wikidoc.org/index.php/EIF4E3 | |
c9ba0ec2cfcde95820855425e803c0a46baba909 | wikidoc | ENTPD2 | ENTPD2
Ectonucleoside triphosphate diphosphohydrolase 2 is an enzyme that in humans is encoded by the ENTPD2 gene.
The protein encoded by this gene is the type 2 enzyme of the ecto-nucleoside triphosphate diphosphohydrolase family (E-NTPDase). E-NTPDases are a family of ecto-nucleosidases that hydrolyze 5'-triphosphates. This ecto-ATPase is an integral membrane protein. Alternative splicing of this gene results in multiple transcript variants.
Scientists from the University of Warwick have shown that E-NTPDase2 stimulates the growth of the eye: when the enzyme was tested on tadpoles, they developed extra eyes on their bodies. | ENTPD2
Ectonucleoside triphosphate diphosphohydrolase 2 is an enzyme that in humans is encoded by the ENTPD2 gene.[1][2]
The protein encoded by this gene is the type 2 enzyme of the ecto-nucleoside triphosphate diphosphohydrolase family (E-NTPDase). E-NTPDases are a family of ecto-nucleosidases that hydrolyze 5'-triphosphates. This ecto-ATPase is an integral membrane protein. Alternative splicing of this gene results in multiple transcript variants.[2]
Scientists from the University of Warwick have shown that E-NTPDase2 stimulates the growth of the eye: when the enzyme was tested on tadpoles, they developed extra eyes on their bodies.[citation needed] | https://www.wikidoc.org/index.php/ENTPD2 |
ce05eeae23124a1cfa8a3bd95bb78c07ee0e126f | wikidoc | ENTPD6 | ENTPD6
Ectonucleoside triphosphate diphosphohydrolase 6 is an enzyme that in humans is encoded by the ENTPD6 gene.
# Function
ENTPD6 is similar to E-type nucleotidases (NTPases). NTPases, such as CD39, mediate catabolism of extracellular nucleotides. ENTPD6 contains 4 apyrase-conserved regions which is characteristic of NTPases.
# Model organisms
Model organisms have been used in the study of ENTPD6 function. A conditional knockout mouse line called Entpd6tm1a(KOMP)Wtsi was generated at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Additional screens were performed, including in-depth immunological phenotyping. | ENTPD6
Ectonucleoside triphosphate diphosphohydrolase 6 is an enzyme that in humans is encoded by the ENTPD6 gene.[1][2]
# Function
ENTPD6 is similar to E-type nucleotidases (NTPases). NTPases, such as CD39, mediate catabolism of extracellular nucleotides. ENTPD6 contains 4 apyrase-conserved regions which is characteristic of NTPases.[2]
# Model organisms
Model organisms have been used in the study of ENTPD6 function. A conditional knockout mouse line called Entpd6tm1a(KOMP)Wtsi was generated at the Wellcome Trust Sanger Institute.[3] Male and female animals underwent a standardized phenotypic screen[4] to determine the effects of deletion.[5][6][7][8] Additional screens were performed, including in-depth immunological phenotyping.[9] | https://www.wikidoc.org/index.php/ENTPD6 |
0230104cdba3172cadcc0f845c8242ab62d0bb9f | wikidoc | EN 207 | EN 207
EN 207 is the European norm for laser safety eyewear. Any laser eye protection sold within the European Community must be certified and labeled with the CE mark. According to this standard, laser safety glasses should not only absorb laser light of a given wavelength, but they should also be able to withstand a direct hit from the laser without breaking or melting. In this respect, the European norm is more strict than the American norm (ANSI Z 136) that only regulates the required optical density. More precisely, the safety glasses should be able to withstand a continuous wave laser for 10 seconds, or 100 pulses for a pulsed laser.
An EN 207 specification might read IR 315–532 L6. Here, the letters IR indicate the laser working mode, in this case a pulsed mode. The range 315–532 indicates the wavelength range in nanometers. Finally, the scale number L6 indicates a lower limit for the optical density, i.e. the transmittance within this wavelength range is less than 10⁻⁶.
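The marking format just described (mode letters, a wavelength range in nanometers, and an L-number giving a lower bound on the optical density) can be decoded mechanically. The following Python sketch is illustrative only: the function and field names are ours, it assumes a marking consists of exactly these three parts, and real EN 207 markings may carry additional information.

```python
import re

# Illustrative decoder for the marking format described above; not a
# substitute for reading the standard itself.
LABEL = re.compile(r"^(?P<modes>[DIRM]+)\s+(?P<lo>\d+)\s*[-–]+\s*(?P<hi>\d+)\s+L(?P<n>\d+)$")

def parse_en207_marking(marking: str) -> dict:
    """Split a marking such as 'IR 315–532 L6' into its three parts."""
    m = LABEL.match(marking.strip())
    if m is None:
        raise ValueError(f"unrecognised marking: {marking!r}")
    n = int(m.group("n"))
    return {
        "working_modes": list(m.group("modes")),              # e.g. ['I', 'R'], both pulsed modes
        "wavelength_range_nm": (int(m.group("lo")), int(m.group("hi"))),
        "scale_number": n,
        "max_transmittance": 10.0 ** -n,                      # L6 -> transmittance below 1e-6
    }

print(parse_en207_marking("IR 315–532 L6"))
```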
# Laser working modes
EN 207 specifies four laser working modes, designated by the letters D, I, R and M: D covers continuous-wave lasers, I and R cover pulsed and giant-pulsed (Q-switched) lasers respectively, and M covers mode-locked lasers emitting ultrashort pulses.
# Scale numbers
The scale numbers range from L1 to L10, where the number is a lower limit for the optical density, i.e. Ln means that OD > n, or T < 10⁻ⁿ, where T is the transmittance. The minimum scale number for a given laser depends on the working mode and the wavelength as follows:
- the laser operates at 1064 nm and has a pulse duration of 10 ns, 100 mJ/cm² (or 10³ J/m²). You have goggles that are specified as DIR 1064 L5. The pulse duration indicates that we should look at the R specification, with scale number n=5, which gives an upper limit of 5×10² J/m², which means that these goggles do not offer suitable protection for this particular laser.
- the laser operates at 780 nm, is continuous wave with a power of 50 mW/cm² (P = 500 W/m²). This means you need a D protection level of log(500) − 1 = 1.69, which is rounded up to 2. In other words, the safety goggles should be at least D 780 L2 (see the sketch below).
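The arithmetic in these two examples can be reproduced in a few lines of code. A minimal sketch follows; the two scaling rules in it are inferred from the examples above (a D-rated filter of scale number n withstanding 10^(n+1) W/m², and an R-rated filter withstanding 5×10^(n−3) J/m² in this wavelength region), so they are illustrative and do not replace the tables of the standard.

```python
import math

# Scaling rules inferred from the two worked examples above (illustrative only).

def required_d_scale(power_density_w_m2: float) -> int:
    """Minimum L-number for a continuous-wave (D) beam of the given power density."""
    return math.ceil(math.log10(power_density_w_m2) - 1)

def r_mode_limit_j_m2(scale_number: int) -> float:
    """Energy density (J/m^2) that an R-rated filter of this L-number withstands."""
    return 5.0 * 10.0 ** (scale_number - 3)

# Second example: 780 nm continuous wave at 500 W/m^2 -> at least D 780 L2.
print(required_d_scale(500))                              # 2

# First example: a 10 ns pulse delivering 1e3 J/m^2 against DIR 1064 L5 goggles.
print(r_mode_limit_j_m2(5), r_mode_limit_j_m2(5) >= 1e3)  # 500.0 False -> not sufficient
```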
From the scale it can be inferred that the power densities that correspond to n=0 are considered safe without protective eyewear. | EN 207
EN 207 is the European norm for laser safety eyewear. Any laser eye protection sold within the European Community must be certified and labeled with the CE mark. According to this standard, laser safety glasses should not only absorb laser light of a given wavelength, but they should also be able to withstand a direct hit from the laser without breaking or melting. In this respect, the European norm is more strict than the American norm (ANSI Z 136) that only regulates the required optical density. More precisely, the safety glasses should be able to withstand a continuous wave laser for 10 seconds, or 100 pulses for a pulsed laser.
An EN 207 specification might read IR 315–532 L6. Here, the letters IR indicate the laser working mode, in this case a pulsed mode. The range 315–532 indicates the wavelength range in nanometers. Finally, the scale number L6 indicates a lower limit for the optical density, i.e. the transmittance within this wavelength range is less than 10⁻⁶.[1]
# Laser working modes
EN 207 specifies four laser working modes, designated by the letters D, I, R and M: D covers continuous-wave lasers, I and R cover pulsed and giant-pulsed (Q-switched) lasers respectively, and M covers mode-locked lasers emitting ultrashort pulses.
# Scale numbers
The scale numbers range from L1 to L10, where the number is a lower limit for the optical density, i.e. Ln means that OD > n, or <math>T < 10^{-n}</math>, where T is the transmittance. The minimum scale number for a given laser depends on the working mode and the wavelength as follows:
- the laser operates at 1064 nm and has a pulse duration of 10 ns, 100 mJ/cm² (or 10³ J/m²). You have goggles that are specified as DIR 1064 L5. The pulse duration indicates that we should look at the R specification, with scale number n=5, which gives an upper limit of 5×10² J/m², which means that these goggles do not offer suitable protection for this particular laser.
- the laser operates at 780 nm, is continuous wave with a power of 50 mW/cm² (P = 500 W/m²). This means you need a D protection level of <math>log(500)-1=1.69</math>, which is rounded up to 2. In other words, the safety goggles should be at least D 780 L2.
From the scale it can be inferred that the power densities that correspond to <math>n=0</math> are considered safe without protective eyewear. | https://www.wikidoc.org/index.php/EN_207 | |
2e2fa7c598d7826d0ca3f568f68f2da4425ce557 | wikidoc | ERGIC2 | ERGIC2
Endoplasmic reticulum-Golgi intermediate compartment protein 2 (ERGIC2) is a gene located on human chromosome 12p11. It encodes a protein of 377 amino acid residues. ERGIC2 protein is also known as PTX1, CDA14 or Erv41.
The biological function of ERGIC2 protein is unknown, although it was initially identified as a candidate tumor suppressor of prostate cancer, and has been shown to induce cell growth arrest and senescence, to suppress colony formation in soft agar, and to decrease the invasive potential of a human prostate cancer cell line (PC-3 cells). It is now believed to be a chaperone molecule involved in protein trafficking between the endoplasmic reticulum-Golgi intermediate compartment (ERGIC) and Golgi. The protein contains two hydrophobic transmembrane domains that help anchor the molecule on the ER membrane, such that its large luminal domain orients inside the ER lumen and both the N- and C-termini face the cytosol. ERGIC2 forms a complex with two other proteins, ERGIC3 and ERGIC32, resulting in a shuttle for protein trafficking between the ER and Golgi. It has been shown to interact with a number of proteins, such as beta-amyloid, protein elongation factor 1alpha, and otoferlin. Therefore, it may play an important role in cellular functions besides being a component of a protein trafficking shuttle.
More recently, a variant transcript of ERGIC2 has been reported. It has a deletion of four bases at the junction of exons 8 and 9, resulting in a frame-shift mutation after codon #189. The variant transcript encodes a truncated protein of 215 residues, which loses 45% of the luminal domain and the transmembrane domain near the C-terminus. This effectively abrogates its function as a protein transporter. A similar variant has also been reported in the armadillo, suggesting that this is not a random mutation. The function of this truncated protein is unknown. | ERGIC2
Endoplasmic reticulum-Golgi intermediate compartment protein 2 (ERGIC2) [1] is a gene located on human chromosome 12p11. It encodes a protein of 377 amino acid residues. ERGIC2 protein is also known as PTX1, CDA14 or Erv41.
The biological function of ERGIC2 protein is unknown, although it was initially identified as a candidate tumor suppressor of prostate cancer,[2] and has been shown to induce cell growth arrest and senescence, to suppress colony formation in soft agar, and to decrease the invasive potential of a human prostate cancer cell line (PC-3 cells).[3] It is now believed to be a chaperone molecule involved in protein trafficking between the endoplasmic reticulum-Golgi intermediate compartment (ERGIC) and Golgi. The protein contains two hydrophobic transmembrane domains that help anchor the molecule on the ER membrane, such that its large luminal domain orients inside the ER lumen and both the N- and C-termini face the cytosol. ERGIC2 forms a complex with two other proteins, ERGIC3 and ERGIC32, resulting in a shuttle for protein trafficking between the ER and Golgi.[4] It has been shown to interact with a number of proteins, such as beta-amyloid,[5] protein elongation factor 1alpha,[6] and otoferlin.[7] Therefore, it may play an important role in cellular functions besides being a component of a protein trafficking shuttle.
More recently, a variant transcript of ERGIC2 has been reported.[8] It has a deletion of four bases at the junction of exons 8 and 9, resulting in a frame-shift mutation after codon #189. The variant transcript encodes a truncated protein of 215 residues, which loses 45% of the luminal domain and the transmembrane domain near the C-terminus. This effectively abrogates its function as a protein transporter. A similar variant has also been reported in the armadillo, suggesting that this is not a random mutation. The function of this truncated protein is unknown. | https://www.wikidoc.org/index.php/ERGIC2 |
0e0df1388b1b9114105fa47a4a029a63586685cd | wikidoc | EXOSAT | EXOSAT
The Exosat satellite was operational from May 1983 until April 1986 and in that time made 1780 observations in the X-ray band of most classes of astronomical object including active galactic nuclei, stellar coronae, cataclysmic variables, white dwarfs, X-ray binaries, clusters of galaxies, and supernova remnants. The payload consisted of three instruments that produced spectra, images and light curves in various energy bands.
This European Space Agency (ESA) satellite for direct-pointing and lunar-occultation observation of X-ray sources beyond the solar system was launched into a highly eccentric orbit (apogee 200,000 km, perigee 500 km) almost perpendicular to that of the moon on May 26, 1983. The instrumentation includes two low-energy imaging telescopes (LEIT) with Wolter I X-ray optics (for the 0.04-2 keV energy range), a medium-energy experiment using Ar/CO2 and Xe/CO2 detectors (for 1.5-50 keV), a Xe/He gas scintillation spectrometer (GSPC) (covering 2-80 keV), and a reprogrammable onboard data-processing computer. Exosat is capable of observing an object (in the direct-pointing mode) for up to 80 hours and of locating sources to within at least 10 arcsec with the LEIT and about 2 arcsec with GSPC.
# History of Exosat
During the period from 1967 to 1969, the European Space Research Organisation (ESRO) studied two separate missions: a European X-ray observatory satellite, as a combined X- and gamma-ray observatory (Cos-A), and a gamma-ray observatory (Cos-B). Cos-A was dropped after the initial study, and Cos-B was proceeded with.
Later in 1969 a separate satellite (the Highly Eccentric Lunar Occultation Satellite - Helos) was proposed. The Helos mission was to determine accurately the location of bright X-ray sources using the lunar occultation technique. In 1973 the observatory part of the mission was added, and mission approval from the European Space Agency Council was given for Helos, now renamed Exosat.
It was decided that the observatory should be made available to a wide community, rather than be restricted to instrument developers, as had been the case for all previous ESA (ESRO) scientific programmes. For the first time in an ESA project, this led to the approach of payload funding and management by the Agency. Instrument design and development became a shared responsibility between ESA and hardware groups.
In July 1981 ESA released the first Announcement of Opportunity (AO) for participation in the Exosat observation programme to the scientific community of its Member States. By November 1, 1981, the closing of the AO window, some 500 observing proposals had been received. Of these, 200 were selected for the first nine months of operation.
Exosat was the first ESA spacecraft to carry on board a digital computer (OBC), with its main purpose being scientific data processing. Spacecraft monitoring and control were secondary. To provide the data handling subsystem with an exceptional flexibility of operation, the OBC and Central Terminal Unit were in-flight reprogrammable. This flexibility far exceeded that of any other ESA spacecraft built up to then.
# Satellite operations
Each of the three axes was stabilized and the optical axes of the three scientific instruments were coaligned. The entrance apertures of the scientific instruments were all located on one face of the central body. Once in orbit, the flaps covering the entrances to the ME and LEIT were swung open to act as thermal and stray-light shields for the telescopes and star trackers, respectively.
The orbit of Exosat was different from any previous X-ray astronomy satellite. To maximize the number of sources occulted by the Moon, a highly eccentric orbit (e ~ 0.93) with a 90.6 hr period and an inclination of 73° was chosen. The initial apogee was 191,000 km and perigee 350 km. To be outside the Earth's radiation belts, the scientific instruments were operated above ~50,000 km, giving up to ~76 hr per 90 hr orbit. There was no need for any onboard data storage as Exosat was visible from the ground station at Villafranca, Spain for practically the entire time the scientific instruments were operated. | EXOSAT
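The quoted orbital figures are mutually consistent, which can be checked with the standard two-body formulas: the eccentricity follows from the apogee and perigee distances, and the period from Kepler's third law. In the sketch below, the Earth radius and gravitational parameter are assumed standard values rather than numbers taken from the text.

```python
import math

# Consistency check of the quoted EXOSAT orbit using two-body formulas.
R_EARTH_KM = 6371.0       # mean Earth radius (assumed)
MU_KM3_S2 = 398600.4      # Earth's gravitational parameter (assumed)

r_apogee = 191_000 + R_EARTH_KM       # geocentric apogee distance, km
r_perigee = 350 + R_EARTH_KM          # geocentric perigee distance, km

a = (r_apogee + r_perigee) / 2                        # semi-major axis, km
e = (r_apogee - r_perigee) / (r_apogee + r_perigee)   # eccentricity
period_hr = 2 * math.pi * math.sqrt(a ** 3 / MU_KM3_S2) / 3600

print(f"a = {a:.0f} km, e = {e:.3f}, period = {period_hr:.1f} hr")
# Prints an eccentricity of about 0.93 and a period of roughly 90 hours,
# in line with the e ~ 0.93 and 90.6 hr period quoted above.
```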
The Exosat satellite was operational from May 1983 until April 1986 and in that time made 1780 observations in the X-ray band of most classes of astronomical object including active galactic nuclei, stellar coronae, cataclysmic variables, white dwarfs, X-ray binaries, clusters of galaxies, and supernova remnants. The payload consisted of three instruments that produced spectra, images and light curves in various energy bands.
This European Space Agency (ESA) satellite for direct-pointing and lunar-occultation observation of X-ray sources beyond the solar system was launched into a highly eccentric orbit (apogee 200,000 km, perigee 500 km) almost perpendicular to that of the moon on May 26, 1983. The instrumentation includes two low-energy imaging telescopes (LEIT) with Wolter I X-ray optics (for the 0.04-2 keV energy range), a medium-energy experiment using Ar/CO2 and Xe/CO2 detectors (for 1.5-50 keV), a Xe/He gas scintillation spectrometer (GSPC) (covering 2-80 keV), and a reprogrammable onboard data-processing computer. Exosat is capable of observing an object (in the direct-pointing mode) for up to 80 hours and of locating sources to within at least 10 arcsec with the LEIT and about 2 arcsec with GSPC.[1]
# History of Exosat
During the period from 1967 to 1969, the European Space Research Organisation (ESRO) studied two separate missions: a European X-ray observatory satellite, as a combined X- and gamma-ray observatory (Cos-A), and a gamma-ray observatory (Cos-B). Cos-A was dropped after the initial study, and Cos-B was proceeded with.
Later in 1969 a separate satellite (the Highly Eccentric Lunar Occultation Satellite - Helos) was proposed. The Helos mission was to determine accurately the location of bright X-ray sources using the lunar occultation technique. In 1973 the observatory part of the mission was added, and mission approval from the European Space Agency Council was given[2] for Helos, now renamed Exosat.
It was decided that the observatory should be made available to a wide community, rather than be restricted to instrument developers, as had been the case for all previous ESA (ESRO) scientific programmes. For the first time in an ESA project, this led to the approach of payload funding and management by the Agency. Instrument design and development became a shared responsibility between ESA and hardware groups.
In July 1981 ESA released the first Announcement of Opportunity (AO) for participation in the Exosat observation programme to the scientific community of its Member States. By November 1, 1981, the closing of the AO window, some 500 observing proposals had been received. Of these, 200 were selected for the first nine months of operation.[1]
Exosat was the first ESA spacecraft to carry on board a digital computer (OBC), with its main purpose being scientific data processing. Spacecraft monitoring and control were secondary. To provide the data handling subsystem with an exceptional flexibility of operation, the OBC and Central Terminal Unit were in-flight reprogrammable. This flexibility far exceeded that of any other ESA spacecraft built up to then.
# Satellite operations
Each of the three axes was stabilized and the optical axes of the three scientific instruments were coaligned. The entrance apertures of the scientific instruments were all located on one face of the central body. Once in orbit, the flaps covering the entrances to the ME and LEIT were swung open to act as thermal and stray-light shields for the telescopes and star trackers, respectively.[1]
The orbit of Exosat was different from any previous X-ray astronomy satellite. To maximize the number of sources occulted by the Moon, a highly eccentric orbit (e ~ 0.93) with a 90.6 hr period and an inclination of 73° was chosen.[3] The initial apogee was 191,000 km and perigee 350 km. To be outside the Earth's radiation belts, the scientific instruments were operated above ~50,000 km, giving up to ~76 hr per 90 hr orbit.[3] There was no need for any onboard data storage as Exosat was visible from the ground station at Villafranca, Spain for practically the entire time the scientific instruments were operated. | https://www.wikidoc.org/index.php/EXOSAT | |
80fdf4dc5d4ecd74eb35ed155ff16803be897617 | wikidoc | Eating | Eating
In general terms, eating (formally, ingestion) is the process of consuming food to provide for the nutritional needs of an animal, particularly its energy requirements and growth. All animals must eat other organisms in order to survive: carnivores eat other animals, herbivores eat plants, and omnivores consume a mixture of both.
While the process of eating varies from species to species, in humans eating is performed by placing food in the mouth, chewing and then swallowing it. Eaten food is then digested.
Manners are an important aspect of social eating in almost all human societies.
# Eating practices
Many homes have a separate kitchen room or outside (in the tropics) kitchen area devoted to preparation of meals and food, and many also have a dining room or another designated area for eating. Dishware, silverware, drinkware for eating and cookware and other implements for cooking come in an almost infinite array of forms and sizes. Most societies also have restaurants and food vendors, so that people may eat when away from home, lack the time to prepare food, or wish to use eating as a social occasion. Occasionally, such as at potlucks and food festivals, eating is in fact the primary purpose of the social gathering.
Most individuals have fairly regular meals, formally known as daily patterns of eating, and commonly most eating occurs during two to three meals per day, with snacks consisting of smaller amounts of food being consumed in between. The issue of healthy eating has long been an important concern to individuals and cultures. Among other practices, fasting, dieting, and vegetarianism are all techniques employed by individuals and encouraged by societies to increase longevity and health. Some religions promote vegetarianism, considering it wrong to consume animals. Some nutritionists believe that, instead of three large meals each day, it is healthier and easier on the metabolism to eat five smaller meals each day, arguing that smaller meals are easier to digest whereas larger meals are tougher on the digestive tract. However, psychiatrists with Yale Medical School have found that people who suffer from Binge Eating Disorder (BED) and consume three meals per day weigh less than those who have more frequent meals. Eating can also be a way of making money (see competitive eating). Pie-eating and sometimes cheese-eating contests are examples of these competitions. Sometimes people eat on picnics with family or friends.
It is an urban legend that eating fast will make you fat. Studies have disproved the theory that the body cannot keep up with the pace of the food going into the digestive tract and thus will store the food that it cannot process as fat or energy stores. This is unscientific, as all food that enters via the mouth must pass through the entire digestive system and be broken down into simpler, usable forms that the body can make use of.
# Disorders
Physiologically, eating is generally triggered by hunger, but there are numerous physical and psychological conditions that can affect appetite and disrupt normal eating patterns. These include depression, food allergies, ingestion of certain chemicals, bulimia, anorexia nervosa, pituitary gland malfunction and other endocrine problems, and numerous other illnesses and eating disorders.
A chronic lack of nutritious food can cause various illnesses, and will eventually lead to starvation. When this happens in a locality on a massive scale it is considered a famine.
If eating and drinking is not possible, as is often the case when recovering from surgery, alternatives are enteral nutrition and parenteral nutrition. | Eating
In general terms, eating (formally, ingestion) is the process of consuming food to provide for the nutritional needs of an animal, particularly its energy requirements and growth. All animals must eat other organisms in order to survive: carnivores eat other animals, herbivores eat plants, and omnivores consume a mixture of both.
While the process of eating varies from species to species, in humans eating is performed by placing food in the mouth, chewing and then swallowing it. Eaten food is then digested.
Manners are an important aspect of social eating in almost all human societies.
# Eating practices
Many homes have a separate kitchen room or outside (in the tropics) kitchen area devoted to preparation of meals and food, and many also have a dining room or another designated area for eating. Dishware, silverware, drinkware for eating and cookware and other implements for cooking come in an almost infinite array of forms and sizes. Most societies also have restaurants and food vendors, so that people may eat when away from home, lack the time to prepare food, or wish to use eating as a social occasion. Occasionally, such as at potlucks and food festivals, eating is in fact the primary purpose of the social gathering.
Most individuals have fairly regular meals, formally known as daily patterns of eating, and commonly most eating occurs during two to three meals per day, with snacks consisting of smaller amounts of food being consumed in between. The issue of healthy eating has long been an important concern to individuals and cultures. Among other practices, fasting, dieting, and vegetarianism are all techniques employed by individuals and encouraged by societies to increase longevity and health. Some religions promote vegetarianism, considering it wrong to consume animals. Some nutritionists believe that, instead of three large meals each day, it is healthier and easier on the metabolism to eat five smaller meals each day, arguing that smaller meals are easier to digest whereas larger meals are tougher on the digestive tract. However, psychiatrists with Yale Medical School have found that people who suffer from Binge Eating Disorder (BED) and consume three meals per day weigh less than those who have more frequent meals. Eating can also be a way of making money (see competitive eating). Pie-eating and sometimes cheese-eating contests are examples of these competitions. Sometimes people eat on picnics with family or friends.
It is an urban legend that eating fast will make you fat. Studies have disproved the theory that the body cannot keep up with the pace of the food going into the digestive tract and thus will store the food that it cannot process as fat or energy stores. This is unscientific, as all food that enters via the mouth must pass through the entire digestive system and be broken down into simpler, usable forms that the body can make use of.
# Disorders
Physiologically, eating is generally triggered by hunger, but there are numerous physical and psychological conditions that can affect appetite and disrupt normal eating patterns. These include depression, food allergies, ingestion of certain chemicals, bulimia, anorexia nervosa, pituitary gland malfunction and other endocrine problems, and numerous other illnesses and eating disorders.
A chronic lack of nutritious food can cause various illnesses, and will eventually lead to starvation. When this happens in a locality on a massive scale it is considered a famine.
If eating and drinking is not possible, as is often the case when recovering from surgery, alternatives are enteral nutrition and parenteral nutrition. | https://www.wikidoc.org/index.php/Eating | |
05c49d908b1a3c8d3b689b1124fe122191a1a59a | wikidoc | Ebriid | Ebriid
The Ebridea is a group of phagotrophic flagellate protists present in marine coastal plankton communities worldwide. Ebria tripartita is one of two (possibly four) described extant species in the Ebridea.
Members of this group are named for their idiosyncratic method of movement (ebrius, "drunk").
Ebriids are usually encountered in low abundance and have a peculiar combination of ultrastructural characters including a large nucleus with permanently condensed chromosomes and an internal skeleton composed of siliceous rods.
The taxonomic history of the group has been tumultuous and has included a variety of affiliations, such as silicoflagellates, dinoflagellates, 'radiolarians' and 'neomonads'.
The Ebridea has recently been treated as a eukaryotic taxon with an unclear phylogenetic position, but the latest molecular studies (Canadian Institute for Advanced Research) place the ebriids within the Cercozoa. | Ebriid
The Ebridea is a group of phagotrophic flagellate protists present in marine coastal plankton communities worldwide. Ebria tripartita is one of two (possibly four) described extant species in the Ebridea.
Members of this group are named for their idiosyncratic method of movement (ebrius, "drunk").
Ebriids are usually encountered in low abundance and have a peculiar combination of ultrastructural characters including a large nucleus with permanently condensed chromosomes and an internal skeleton composed of siliceous rods.
The taxonomic history of the group has been tumultuous and has included a variety of affiliations, such as silicoflagellates, dinoflagellates, 'radiolarians' and 'neomonads'.
The Ebridea has recently been treated as a eukaryotic taxon with an unclear phylogenetic position, but the latest molecular studies (Canadian Institute for Advanced Research) place the ebriids within the Cercozoa.[1] | https://www.wikidoc.org/index.php/Ebridea |
60a8bee508370424b363539bfa13b4397ea96535 | wikidoc | El Tor | El Tor
El Tor is the name given to a particular strain of the bacterium Vibrio cholerae, the causative agent of cholera. Also known as O1, it has been the dominant strain in the seventh global pandemic. It is distinguished from the classic strain at a genetic level, although both are in the O1 serogroup and both contain Inaba, Ogawa and Hikojima serotypes. It was first identified in 1905 at a camp in El-Tor, Egypt.
El Tor was identified again in an outbreak in 1937 but the pandemic did not arise until 1961 in Sulawesi. El Tor spread through Asia (Bangladesh in 1963, India in 1964) and then into the Middle East, Africa and Europe. From North Africa it spread into Italy by 1973. In the late 1970s there were small outbreaks in Japan and in the South Pacific.
The extent of the pandemic has been due to the relative mildness (lower expression level) of El Tor; the disease has many more asymptomatic carriers than is usual, outnumbering active cases by up to 50:1. El Tor also remains in the body for longer and survives better than other known types. The actual infection is also relatively mild, or at least rarely fatal. Additionally El Tor is capable of host-to-host transmission, unlike the classic strain.
| El Tor
El Tor is the name given to a particular strain of the bacterium Vibrio cholerae, the causative agent of cholera. Also known as O1, it has been the dominant strain in the seventh global pandemic. It is distinguished from the classic strain at a genetic level, although both are in the O1 serogroup and both contain Inaba, Ogawa and Hikojima serotypes. It was first identified in 1905 at a camp in El-Tor, Egypt.
El Tor was identified again in an outbreak in 1937 but the pandemic did not arise until 1961 in Sulawesi. El Tor spread through Asia (Bangladesh in 1963, India in 1964) and then into the Middle East, Africa and Europe. From North Africa it spread into Italy by 1973. In the late 1970s there were small outbreaks in Japan and in the South Pacific.
The extent of the pandemic has been due to the relative mildness (lower expression level) of El Tor; the disease has many more asymptomatic carriers than is usual, outnumbering active cases by up to 50:1. El Tor also remains in the body for longer and survives better than other known types. The actual infection is also relatively mild, or at least rarely fatal. Additionally El Tor is capable of host-to-host transmission, unlike the classic strain.
| https://www.wikidoc.org/index.php/El_Tor |
9258c0737660d6083f7436a18712c40099bfa7b9 | wikidoc | Elafin | Elafin
Elafin, also known as peptidase inhibitor 3 or skin-derived antileukoprotease (SKALP), is a protein that in humans is encoded by the PI3 gene.
# Function
This gene encodes an elastase-specific protease inhibitor, which contains a WAP-type four-disulfide core (WFDC) domain, and is thus a member of the WFDC domain family. Most WFDC gene members are localized to chromosome 20q12-q13 in two clusters: centromeric and telomeric. This gene belongs to the centromeric cluster.
# Clinical significance
Elafin has been found to be useful as a biomarker for graft-versus-host disease of the skin.
Elafin plays some role in gut inflammation. | Elafin
Elafin, also known as peptidase inhibitor 3 or skin-derived antileukoprotease (SKALP), is a protein that in humans is encoded by the PI3 gene.[1][2][3]
# Function
This gene encodes an elastase-specific protease inhibitor, which contains a WAP-type four-disulfide core (WFDC) domain, and is thus a member of the WFDC domain family. Most WFDC gene members are localized to chromosome 20q12-q13 in two clusters: centromeric and telomeric. This gene belongs to the centromeric cluster.[3]
# Clinical significance
Elafin has been found to be useful as a biomarker for graft-versus-host disease of the skin.[4]
Elafin plays some role in gut inflammation.[5] | https://www.wikidoc.org/index.php/Elafin |
2edf6db6c7ae22736f03cdef412e5d0930dae14b | wikidoc | Elixir | Elixir
An elixir (from Arabic: الإكسير, Al-Ikseer) is a pharmaceutical preparation containing an active ingredient (such as morphine) that is dissolved in a solution that contains some percentage of ethyl alcohol and is designed to be taken orally.
Elixir may also refer to:
- Elixir bhu, a cultural festival organized each year in the second half of February at the Institute of Medical Sciences
- Elixir (metal band), a British heavy metal band.
- Elixir of life, also known as the elixir of immortality
- Elixir (band), Goa trance music
- Elixir Studios, a British video game developer
- Elixir (comics), a fictional mutant in the Marvel Universe
- Elixir Strings, for electric, acoustic, and bass guitars as well as banjo and mandolin
- L'elisir d'amore, The Elixir of Love, an opera | Elixir
An elixir (from Arabic: الإكسير, Al-Ikseer) is a pharmaceutical preparation containing an active ingredient (such as morphine) that is dissolved in a solution that contains some percentage of ethyl alcohol and is designed to be taken orally.
Elixir may also refer to:
- Elixir bhu, a cultural festival organized each year in the second half of February at the Institute of Medical Sciences
- Elixir (metal band), a British heavy metal band.
- Elixir of life, also known as the elixir of immortality
- Elixir (band), Goa trance music
- Elixir Studios, a British video game developer
- Elixir (comics), a fictional mutant in the Marvel Universe
- Elixir Strings, for electric, acoustic, and bass guitars as well as banjo and mandolin
- L'elisir d'amore, The Elixir of Love, an opera | https://www.wikidoc.org/index.php/Elixir | |
225dbe3278aa58667c01b39508caeffd46fc79f6 | wikidoc | Embryo | Embryo
An embryo (from Greek: ἔμβρυον, plural ἔμβρυα, lit. "that which grows," from en- "in" + bryein "to swell, be full") is a multicellular diploid eukaryote in its earliest stage of development, from the time of first cell division until birth, hatching, or germination. In humans, it is called an embryo from the moment of fertilisation until the end of the 8th week, whereafter it is instead called a fetus.
# Development
The development of the embryo is called embryogenesis. In organisms that reproduce sexually, once a sperm fertilizes an egg cell, the result is a cell called the zygote that has all the DNA of two parents. The resulting embryo derives 50 percent of its genetic makeup from each parent. In plants, animals, and some protists, the zygote will begin to divide by mitosis to produce a multicellular organism. The result of this process is an embryo.
In animals, the development of the zygote into an embryo proceeds through specific recognizable stages of blastula, gastrula, and organogenesis. The blastula stage typically features a fluid-filled cavity, the blastocoel, surrounded by a sphere or sheet of cells, also called blastomeres.
During gastrulation the cells of the blastula undergo coordinated processes of cell division, invasion, and/or migration to form two (diploblastic) or three (triploblastic) tissue layers. In triploblastic organisms, the three germ layers are called endoderm, ectoderm and mesoderm. However, the position and arrangement of the germ layers are highly species-specific, depending on the type of embryo produced. In vertebrates, a special population of embryonic cells called the neural crest has been proposed as a "fourth germ layer", and is thought to have been an important novelty in the evolution of head structures.
During organogenesis, molecular and cellular interactions between germ layers, combined with the cells' developmental potential or competence to respond, prompt the further differentiation of organ-specific cell types. For example, in neurogenesis, a subpopulation of ectoderm cells is set aside to become the brain, spinal cord and peripheral nerves. Modern developmental biology is extensively probing the molecular basis for every type of organogenesis, including angiogenesis (formation of new blood vessels from pre-existing ones), chondrogenesis (cartilage), myogenesis (muscle), osteogenesis (bone), and many others.
Generally, if a structure pre-dates another structure in evolutionary terms, then it often appears earlier than the other in an embryo; this general observation is sometimes summarized by the phrase "ontogeny recapitulates phylogeny." For example, the backbone is a common structure among all vertebrates such as fish, reptiles and mammals, and the backbone also appears as one of the earliest structures laid out in all vertebrate embryos. The cerebrum in humans, which is the most sophisticated part of the brain, develops last. This rule is not absolute, but it is recognized as being partly applicable to development of the human embryo.
# Embryos of plants and animals
- Plants: In botany, a seed plant embryo is part of a seed, consisting of precursor tissues for the leaves, stem (see hypocotyl), and root (see radicle), as well as one or more cotyledons. Once the embryo begins to germinate — grow out from the seed — it is called a seedling. Plants that do not produce seeds, but do produce an embryo, include the bryophytes and ferns. In these plants, the embryo is a young plant that grows attached to a parental gametophyte.
- Animals: The embryo of a placental mammal is defined as the organism between the first division of the zygote (a fertilized ovum) until it becomes a fetus. In humans, the embryo is defined as the product of conception from implantation in the uterus through the eighth week of development. An embryo is called a fetus at a more advanced stage of development and up until birth or hatching. In humans, this is from the eighth week of gestation.
# The human embryo
## Growth
Weeks 1-3: 5-7 days after fertilization, the blastula attaches to the wall of the uterus (endometrium). When it comes into contact with the endometrium, it implants. Connections between the mother and the embryo then begin to form, including the umbilical cord. The embryo's growth centers on an axis, which will become the spine and spinal cord. The brain, spinal cord, heart, and gastrointestinal tract begin to form.
Weeks 4-5: Chemicals produced by the embryo stop the woman's menstrual cycle. Neurogenesis is underway, with brain activity detectable at about the 6th week. Limb buds appear where the arms and legs will grow later. Organogenesis begins. The head represents about one half of the embryo's axial length, and more than half of the embryo's mass. The brain develops into five areas, and the vertebrae and other bones begin to form. The heart starts to beat and blood starts to flow.
Weeks 6-8: Myogenesis and neurogenesis have progressed to the point where the embryo is capable of motion, and the eyes begin to form. Organogenesis and growth continue. Hair has started to form, along with all essential organs. Facial features are beginning to develop. At the end of the 8th week, the embryonic stage is over, and the fetal stage begins.
## Status
The status of the human embryo is debated by some bioethicists. Some Christian ethicists believe that an embryo does, in fact, possess personhood. Gilbert Meilaender, a Christian ethics professor at Valparaiso University, a private Lutheran university, for example, identifies conception as the point at which a new individual human being comes into existence, since "when sperm and ovum join to form the zygote, the individual's genotype is established." The NIH defines the embryonic stage as the beginning of developed human form.
# Footnotes
- ↑ 3D Pregnancy (Image from gestational age of 6 weeks). Retrieved 2007-08-28. A rotatable 3D version of this photo is available here, and a drawing is available here.
- ↑ Gould, Stephen. Ontogeny and Philogeny, page 206 (1977): "recapitulation was not 'disproved'; it could not be, for too many well-established cases fit its expectations."
- ↑ NIH Medical Encyclopedia http://www.nlm.nih.gov/medlineplus/ency/article/002398.htm
- ↑ NIH Medical Encyclopedia http://www.nlm.nih.gov/medlineplus/ency/article/002398.htm
- ↑ NIH Medical Encyclopedia http://www.nlm.nih.gov/medlineplus/ency/article/002398.htm
- ↑ Gilbert Meilaender, Bioethics: A Primer for Christians (2nd ed.; Grand Rapids: Eerdmans, 2005), p. 29.
- ↑ NIH Medical Encyclopedia http://www.nlm.nih.gov/medlineplus/ency/article/002398.htm | https://www.wikidoc.org/index.php/Embryo
5b39cbeb2da4339409b02cc95d5758ffc4da291f | wikidoc | Emerin | Emerin
Emerin is a protein that in humans is encoded by the EMD gene, also known as the STA gene. Emerin, together with LEMD3, is a LEM domain-containing integral protein of the inner nuclear membrane in vertebrates. Emerin is highly expressed in cardiac and skeletal muscle. In cardiac muscle, emerin localizes to adherens junctions within intercalated discs where it appears to function in mechanotransduction of cellular strain and in beta-catenin signaling. Mutations in emerin cause X-linked recessive Emery–Dreifuss muscular dystrophy, cardiac conduction abnormalities and dilated cardiomyopathy.
It is named after Alan Emery.
# Structure
Emerin is a 29.0 kDa (34 kDa observed MW) protein composed of 254 amino acids. Emerin is a serine-rich protein with an N-terminal 20-amino acid hydrophobic region that is flanked by charged residues; the hydrophobic region may be important for anchoring the protein to the membrane, with the charged terminal tails being cytosolic. In cardiac, skeletal, and smooth muscle, emerin localizes to the inner nuclear membrane; expression of emerin is highest in skeletal and cardiac muscle. In cardiac muscle specifically, emerin also resides at adherens junctions within intercalated discs.
# Function
Emerin is a serine-rich nuclear membrane protein and a member of the nuclear lamina-associated protein family. It mediates membrane anchorage to the cytoskeleton. Emery–Dreifuss muscular dystrophy is an X-linked inherited degenerative myopathy resulting from mutation in the EMD (also known clinically as STA) gene. Emerin appears to be involved in mechanotransduction, as emerin-deficient mouse fibroblasts failed to transduce normal mechanosensitive gene expression responses to strain stimuli. In cardiac muscle, emerin is also found complexed to beta-catenin at adherens junctions of intercalated discs, and cardiomyocytes from hearts lacking emerin showed beta-catenin redistribution as well as perturbed intercalated disc architecture and myocyte shape. This interaction appears to be regulated by glycogen synthase kinase 3 beta.
# Clinical significance
Mutations in emerin cause X-linked recessive Emery–Dreifuss muscular dystrophy, which is characterized by early contractures of the Achilles tendons, elbows and post-cervical muscles; muscle weakness that is proximal in the upper limbs and distal in the lower limbs; and cardiac conduction defects ranging from sinus bradycardia and PR prolongation to complete heart block. In these patients, immunostaining of emerin is lost in various tissues, including muscle, skin fibroblasts, and leukocytes; however, diagnostic protocols rely on mutational analysis rather than protein staining. In nearly all cases, mutations result in a complete deletion, or undetectable levels, of emerin protein. Approximately 20% of cases have X chromosomes with an inversion within the Xq28 region.
Moreover, recent research has found that the absence of functional emerin may decrease the infectivity of HIV-1. It has therefore been speculated that patients with Emery–Dreifuss muscular dystrophy may be resistant to HIV-1 or may show an atypical pattern of infection.
# Interactions
Emerin has been shown to interact with:
- ACTA1,
- ACTG2,
- BANF1,
- BCLAF1,
- CTNNB1,
- GMCL1,
- LMNA,
- PSME1,
- SYNE1,
- SYNE2,
- TMEM43, and
- YTHDC1. | Emerin
Emerin is a protein that in humans is encoded by the EMD gene, also known as the STA gene. Emerin, together with LEMD3, is a LEM domain-containing integral protein of the inner nuclear membrane in vertebrates. Emerin is highly expressed in cardiac and skeletal muscle. In cardiac muscle, emerin localizes to adherens junctions within intercalated discs where it appears to function in mechanotransduction of cellular strain and in beta-catenin signaling. Mutations in emerin cause X-linked recessive Emery–Dreifuss muscular dystrophy, cardiac conduction abnormalities and dilated cardiomyopathy.
It is named after Alan Emery.[1]
# Structure
Emerin is a 29.0 kDa (34 kDa observed MW) protein composed of 254 amino acids.[2] Emerin is a serine-rich protein with an N-terminal 20-amino acid hydrophobic region that is flanked by charged residues; the hydrophobic region may be important for anchoring the protein to the membrane, with the charged terminal tails being cytosolic.[3] In cardiac, skeletal, and smooth muscle, emerin localizes to the inner nuclear membrane;[4][5] expression of emerin is highest in skeletal and cardiac muscle.[3] In cardiac muscle specifically, emerin also resides at adherens junctions within intercalated discs.[6][7][8]
# Function
Emerin is a serine-rich nuclear membrane protein and a member of the nuclear lamina-associated protein family. It mediates membrane anchorage to the cytoskeleton. Emery–Dreifuss muscular dystrophy is an X-linked inherited degenerative myopathy resulting from mutation in the EMD (also known clinically as STA) gene.[9] Emerin appears to be involved in mechanotransduction, as emerin-deficient mouse fibroblasts failed to transduce normal mechanosensitive gene expression responses to strain stimuli.[10] In cardiac muscle, emerin is also found complexed to beta-catenin at adherens junctions of intercalated discs, and cardiomyocytes from hearts lacking emerin showed beta-catenin redistribution as well as perturbed intercalated disc architecture and myocyte shape. This interaction appears to be regulated by glycogen synthase kinase 3 beta.[11]
# Clinical significance
Mutations in emerin cause X-linked recessive Emery–Dreifuss muscular dystrophy, which is characterized by early contractures in the Achilles tendons, elbows and post-cervical muscles; muscle weakness proximal in the upper limbs and distal in lower limbs; along with cardiac conduction defects that range from sinus bradycardia, PR prolongation to complete heart block.[12] In these patients, immunostaining of emerin is lost in various tissues, including muscle, skin fibroblasts, and leukocytes, however diagnostic protocols involve mutational analysis rather than protein staining.[12] In nearly all cases, mutations result in a complete deletion, or undetectable levels, of emerin protein. Approximately 20% of cases have X chromosomes with an inversion within the Xq28 region.[13]
Moreover, recent research have found that the absence of functional emerin may decrease the infectivity of HIV-1. Thus, it is speculated that patients suffering from Emery–Dreifuss muscular dystrophy may have immunity to or show an irregular infection pattern to HIV-1.[14]
# Interactions
Emerin has been shown to interact with:
- ACTA1,[15]
- ACTG2,[15]
- BANF1,[16][17]
- BCLAF1,[18]
- CTNNB1,[7][19]
- GMCL1,[17]
- LMNA,[15][20][21][22]
- PSME1,[20]
- SYNE1,[23][24][25]
- SYNE2,[23][25][26]
- TMEM43,[27] and
- YTHDC1.[20] | https://www.wikidoc.org/index.php/Emerin | |
c26240a375d87227afe0597febc77b502f20cda1 | wikidoc | Entrez | Entrez
# Overview
The Entrez Global Query Cross-Database Search System is a federated search engine, or web portal, that allows users to search many discrete health sciences databases at the National Center for Biotechnology Information (NCBI) website. NCBI is part of the National Library of Medicine (NLM), itself a part of the National Institutes of Health (NIH) of the United States government. Entrez is also the French second-person plural form of the verb "to enter", literally meaning "come in".
Entrez Global Query is an integrated search and retrieval system that provides access to all databases simultaneously with a single query string and user interface. Entrez can efficiently retrieve related sequences, structures, and references. The Entrez system can provide views of gene and protein sequences and chromosome maps. Some textbooks are also available online through the Entrez system.
# Features
The Entrez front page provides, by default, access to the global query. All databases indexed by Entrez can be searched via a single query string, supporting boolean operators and search term tags to limit parts of the search statement to particular fields. This returns a unified results page that shows the number of hits in each database; each count links to the actual search results in that particular database.
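For example, a single global-query string can mix free text, field tags, and boolean operators. The illustrative query below uses common Entrez/PubMed tag conventions ([Title], [MeSH Terms], [Publication Type]); the exact set of recognized tags varies from database to database, so it is a sketch of the syntax rather than a canonical query.

```
emerin[Title] AND "muscular dystrophy"[MeSH Terms] NOT review[Publication Type]
```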
Entrez also provides a similar interface for searching each particular database and for refining search results. The Limits feature allows the user to narrow a search via a web-forms interface. The History feature gives a numbered list of recently performed queries. Results of previous queries can be referred to by number and combined via boolean operators. Search results can be saved temporarily in a Clipboard. Users with a MyNCBI account can save queries indefinitely and can also choose to have new results for saved queries e-mailed to them for most databases. Entrez is widely used in biotechnology research and education worldwide.
# Databases
Entrez searches the following databases:
- PubMed: biomedical literature citations and abstracts, including MEDLINE records of articles from (mainly medical) journals, often with abstracts. Links to PubMed Central and other full-text resources are provided for articles from the 1990s.
- PubMed Central: free, full text journal articles
- Site Search: NCBI web and FTP web sites
- Books: online books
- OMIM: online Mendelian Inheritance in Man
- OMIA: online Mendelian Inheritance in Animals
- Nucleotide: sequence database (GenBank)
- Protein: sequence database
- Genome: whole genome sequences and mapping
- Structure: three-dimensional macromolecular structures
- Taxonomy: organisms in GenBank Taxonomy
- SNP: single nucleotide polymorphism
- Gene: gene-centered information
- HomoloGene: eukaryotic homology groups
- PubChem Compound: unique small molecule chemical structures
- PubChem Substance: deposited chemical substance records
- Genome Project: genome project information
- UniGene: gene-oriented clusters of transcript sequences
- CDD: conserved protein domain database
- 3D Domains: domains from Entrez Structure
- UniSTS: markers and mapping data
- PopSet: population study data sets (epidemiology)
- GEO Profiles: expression and molecular abundance profiles
- GEO DataSets: experimental sets of GEO data
- Cancer Chromosomes: cytogenetic databases
- PubChem BioAssay: bioactivity screens of chemical substances
- GENSAT: gene expression atlas of mouse central nervous system
- Probe: sequence-specific reagents
- NLM Catalog: NLM bibliographic data for over 1.2 million journals, books, audiovisuals, computer software, electronic resources, and other materials resident in LocatorPlus (updated every weekday).
# Accessing Entrez
In addition to using the search engine forms to query the data in Entrez, NCBI provides the Entrez Programming Utilities (eUtils) for more direct access to query results. The eUtils are accessed by posting specially formed URLs to the NCBI server, and parsing the XML response. There is also an eUtils SOAP interface.
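As a rough illustration, the sketch below issues an ESearch request using only the Python standard library. It assumes the conventional esearch.fcgi endpoint with db, term and retmax parameters and the usual Count/IdList layout of the XML reply; consult the current eUtils documentation for authoritative details, and note that NCBI asks users to throttle automated requests.

```python
# Minimal eUtils ESearch sketch (assumes the standard esearch.fcgi endpoint
# and its db/term/retmax parameters; verify against current NCBI documentation).
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch(db, term, retmax=20):
    """Send a specially formed URL to ESearch and parse the XML response."""
    url = ESEARCH + "?" + urlencode({"db": db, "term": term, "retmax": retmax})
    with urlopen(url) as response:
        root = ET.fromstring(response.read())
    count = root.findtext("Count")                     # total number of hits
    ids = [e.text for e in root.findall("IdList/Id")]  # first `retmax` record UIDs
    return count, ids

if __name__ == "__main__":
    count, ids = esearch("pubmed", 'emerin[Title] AND "muscular dystrophy"[MeSH Terms]')
    print(count, ids)
```

The same URL-plus-XML pattern applies to the other utilities (ESummary, EFetch, ELink), which differ mainly in the endpoint name and the format of the records they return.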
| https://www.wikidoc.org/index.php/Entrez
39bb6932f521a081daa220011f2e770ee1eed0a7 | wikidoc | Enzyte | Enzyte
# Overview
Enzyte is an herbal nutritional supplement originally manufactured by Berkeley Premium Nutraceuticals of Cincinnati, Ohio. The manufacturer has claimed that Enzyte promotes "natural male enhancement", a phrase widely understood as a euphemism for penile enlargement. However, its effectiveness has been called into doubt and the claims of the manufacturer have been under scrutiny from various state and federal organizations.
By 2009, marketing was oriented both to erectile dysfunction and to attracting more naive purchasers seeking permanent enlargement of the penis.
Because of these claims and business practices, the company's founder and CEO, Steve Warshak, and his mother, Harriet Warshak, were found guilty of conspiracy to commit mail fraud, bank fraud, and money laundering, and in September 2008 were sentenced to prison and ordered to forfeit $500 million in assets.
The conviction threw the company into bankruptcy. In December 2008 the assets were acquired from bankruptcy court for $2.75 million by Pristine Bay, a firm affiliated with Cincinnati developer Chuck Kubicki, who said he wanted to keep the company's 200 employees in one of his buildings in Forest Park, Ohio, a Cincinnati suburb. Pristine Bay LLC has the same mailing address as Kubicki's Cincinnati United Contractors. Pristine Bay's statutory agent, Chance Truemper, is a property-development coordinator for CUC.
Kubicki said he would change the company name but would keep the brand.
Enzyte is widely advertised on US television as "the once daily tablet for natural male enhancement". The commercials feature a character known as "Smilin' Bob" (played by actor John Larson), who always wears a smile that is implied to be caused by the enhancing effects of Enzyte; these advertisements feature double entendres. Some commercials feature an equally smiling "Mrs. Bob".
The purported benefits of this compound are dubious, unproven and untested.
# Ingredients
Although it is merely a compound of herbs, minerals, and vitamins, Enzyte was formerly promoted under the seemingly scientific name Suffragium asotas. While Enzyte's manufacturer claims this phrase translates as "better sex," this is incorrect; suffragium in Latin means vote, and asotas is not a Latin word at all. Harvard teaching fellow Rhett Martin says that the phrase might be an error for suffragor asotis, meaning "refuge for the dissipated."
Enzyte is said to contain:
- Tribulus terrestris (puncture vine)
- Niacin
- Panax ginseng
- Epimedium (horny goatweed)
- Avena sativa (oat)
- Zinc oxide
- Lepidium meyenii (maca)
- Muira puama
- Ginkgo biloba
- L-Arginine
- Saw palmetto
- Other ingredients: gelatin, cellulose, rice bran, tabasco sauce, oat fiber, magnesium stearate, silicon dioxide, dicalcium phosphate, titanium dioxide and propylene glycol.
Most of the above ingredients are commonly available as over-the-counter herbal or dietary supplements, and most have anecdotal reports of efficacy on various systems in the human body, but only marginal or unproven scientific evidence. Several of the herbal ingredients are included in only very low quantities.
One notable ingredient, Yohimbe, was included in the original formulation of Enzyte, produced until at least 2004; however, as Yohimbe's legal status in Canada is unclear, Enzyte produced after 2004 no longer contains Yohimbe extract.
Additionally, zinc is an ingredient in Enzyte. Some men who have low zinc levels in their body have had success using zinc supplements to treat erection problems. Overdosage of zinc is a hazard to health. Zinc supplements are available without prescription at significantly lower prices than Enzyte.
# Effectiveness
Currently, the effectiveness of Enzyte is unproven. The Center for Science in the Public Interest has urged the Federal Trade Commission (which has power under federal law to regulate advertising) to disallow further television advertising for Enzyte, because of a lack of proper clinical trials. The company has settled with the Attorneys General of various states and has altered its advertising in a more truthful fashion. Substantiation for the brand is on file for each claim. The company now offers a 60-day return policy on unopened products.
Enzyte originally advertised that use of the Enzyte product would promote permanent physical penile growth, or the company would return purchasers double their cost. Those who attempted to collect this refund claim they either received a partial refund or were duped into signing away the right to a refund. Enzyte advertising was changed to state that the product is intended to create a firmer erection by temporarily increasing blood flow to the penis. The advertising change was made after lawsuits against the company and its rebate policies began to surface. No evidence exists that proves Enzyte to be effective in any of its claims. The product advertising states in small print that it "is not intended or promoted to diagnose, or treat any disease" and since ED (Erectile dysfunction) is a recognized disease, the advertising is considered legal.
A civil lawsuit alleged Enzyte does not work as advertised. Despite manufacturer claims that Enzyte will increase penis size, girth, and firmness and improve sexual performance, there is no scientific evidence that Enzyte produces any of these effects. In fact, Enzyte has never been scientifically tested by the FDA or any other independent third party. Accordingly, Enzyte is required by current US law to be marketed as an herbal supplement, and may not legally be called a drug. In keeping with FTC rulings, Enzyte is not allowed to claim these benefits in its advertising. However, as of May 2009, TV commercials for the product still use the phrase "natural male enhancement."
# Federal indictment and conviction
On September 21, 2006, Berkeley Premium Nutraceuticals, its owner and president, Steven Warshak, and five other individuals were indicted by the United States Attorney for the Southern District of Ohio, Greg Lockhart, on charges of conspiracy, money laundering, and mail, wire and bank fraud. The indictment alleged that the company defrauded consumers and banks of US$100 million. The United States Food and Drug Administration, Internal Revenue Service, United States Postal Inspection Service and other agencies participated in the investigation. The federal fraud trial began on January 8, 2008.
In testimony during the trial, a former Berkeley executive testified that the enhancements the company attributed to Enzyte were fabricated, and that the company defrauded customers by continuing to charge them for additional shipments of the supplement. He further testified that company employees were instructed to make it as difficult as possible for unhappy customers to receive refunds.
On February 22, 2008, Steven Warshak was found guilty of 93 counts of conspiracy, fraud and money laundering. On August 27, 2008 he was sentenced by U.S. District Judge Arthur Spiegel to 25 years in prison and ordered to pay $93,000 in fines. His company, Berkeley Premium Nutraceuticals, along with other defendants, was ordered to forfeit $500 million. His 75-year-old mother, Harriet Warshak, was sentenced to two years in prison. | https://www.wikidoc.org/index.php/Enzyte
af1836ae0f1c95fb148b7b5b861d25eb9c147dc9 | wikidoc | Virola | Virola
Virola, also known as Epená, is a genus of medium-sized trees native to the South American rainforest and closely related to other Myristicaceae, such as nutmeg. It has glossy, dark leaves with clusters of tiny yellow flowers and emits a pungent odor.
The dark-red resin of the tree bark contains several hallucinogenic alkaloids, most notably 5-MeO-DMT (in Virola calophylla), 5-OH-DMT (bufotenine), and N,N-DMT, perhaps the most "powerful" member of the dimethyltryptamine family; it also contains beta-carboline harmala alkaloids, MAOIs that greatly potentiate the effects of DMT. The bark resin is prepared and dried by a variety of methods, often including the addition of ash or lime, presumably as basifying agents, and a powder made from the leaves of the small Justicia bush. Ingestion is similar to that of Yopo, consisting of assisted insufflation, with the snuff being blown through a long tube into the nostrils by an assistant. According to Schultes, the use of Virola in magico-religious rituals is restricted to tribes in the Western Amazon Basin and parts of the Orinoco Basin.
# Traditional medicine
The tops of Virola oleifera have been shown to produce lignan-7-ols and verrucosin that have antifungal action against Cladosporium sphaerospermum in doses as low as 25 micrograms. The lignan-7-ols oleiferin-B and oleiferin-G worked against C. cladosporoides starting as low as 10 micrograms.
| https://www.wikidoc.org/index.php/Epen%C3%A1