In 2012, 35 of the 100 human poisoning cases we have reports for involved amatoxins. The vast majority of the amatoxin cases involved ingestion of the death cap, Amanita phalloides. Other cases involved destroying angels: Amanita ocreata, Amanita bisporigera, and similar all-white Amanita species. There were three deaths in the United States from a single incident in which elderly residents of a residential care facility were served soup made with deadly Amanita species. There were two deaths in two separate cases in Canada.
While we often learn of the majority of amatoxin cases, we get only a small sampling of other mushroom poisonings, since they rarely involve the death of the victim. Of the cases with symptoms severe enough for the individual to go to the hospital, over 14% were adverse reactions to hallucinogenic mushrooms, predominantly Psilocybe species. Chlorophyllum molybdites accounted for 12% of this year’s cases. Adverse reactions to various Morchella species accounted for 10%.
We learned of 26 dogs and one horse poisoned by mushrooms, with 11 dog deaths, mostly from amatoxins. In two deaths, Inocybe species were implicated. In one death, two doses of atropine were administered to a dog who had consumed Amanita muscaria. Atropine markedly intensifies the effects of the toxins ibotenic acid and muscimol and so is contraindicated in such cases.
In 2012, there were once again many reports of amatoxin poisonings, both in humans (five deaths) and in dogs (six deaths from apparent amatoxins). For humans, two cases in Canada each involved one death and one case in the United States involved three deaths.
In the first Canadian case, the victim was an alcoholic. The victim was treated initially as a cardiac patient, but there was a rapid progression to multi-organ failure and death. A relative later found and discarded the remains of a cooked mushroom dish that was in the victim’s refrigerator. While no attempt was ever made to identify the mushrooms, the symptoms were consistent with amatoxin poisonings. In the second Canadian case, the mushrooms were identified post mortem as Amanita virosa. The man had a history of colitis and thus mushroom poisoning was discounted by his doctor. He was treated for his diarrhea and cramps and sent home. Two days later, he reported to the hospital with fulminant hepatic failure. He died 8 days after his mushroom meal. The poison center was never notified and best treatment practices for amatoxin poisoning were not employed.
All three deaths in the United States resulted from a single case in California where a caregiver at a residential care facility made a soup from mushrooms collected on the grounds. One elderly tenant had refused the dinner and was not ill, alerting investigators to the soup as the cause of the illnesses. The mushrooms were never positively identified, but descriptions by the caregiver implicate either Amanita phalloides or Amanita ocreata. The caregiver survived with aggressive rehydration therapy and use of injectable silymarin (Legalon®SIL). Three of the four elderly residents who consumed the soup succumbed. The first death occurred three days after the meal. A woman in her 90s recovered from the poisoning symptoms with use of aggressive rehydration alone but then died 20 days later due to other causes (Todd Mitchell, personal communication). Press accounts attributed her death to mushroom poisoning.
In five other cases in the United States (four in 2012 and one previously unreported case from 2011), at least 24 people consumed deadly Amanita species. After hospital admission, all were enrolled in the “Legalon®SIL; Mushroom Poisoning Clinical Study.” Following protocol, aggressive rehydration therapy was used in every case. The sickest individuals all received injectable silymarin and one individual, who had consumed a staggering quantity of Amanita phalloides, was treated using percutaneous cholecystostomy in addition to other therapies. All survived.
In a Connecticut case involving destroying angel mushrooms (Amanita cf. bisporigera), all four family members survived. There were news reports of three Amanita phalloides cases in Ohio. One, involving at least a dozen people, happened in 2011, while two cases, each involving four people, happened in 2012. In all three Ohio incidents and in the Connecticut incident, some individuals were sick enough to meet the criteria for treatment with injectable silymarin and so received injections of Legalon®SIL in addition to aggressive rehydration therapy.
On December 28, 2012, a California woman consumed approximately six Amanita phalloides mushrooms. By coincidence, when she reported to the hospital, Dr. Todd Mitchell was in the emergency room seeking treatment for his son who had dislocated his pinkie at volleyball practice (Todd Mitchell, personal communication). Dr. Mitchell is principal investigator for the drug interventional trial of injectable silymarin (Legalon®SIL) to treat amatoxin poisoning. In addition to aggressive rehydration therapy, the woman was treated with injectable silymarin. An interventional radiologist performed a percutaneous cholecystostomy. The woman was released from the hospital five days later after making a complete recovery even though she had consumed a staggering quantity of mushrooms.
In probably the weirdest case (and one that may well be a fabrication), one of the NAMA toxicology identifiers spotted a long rambling post on the website www.shroomery.org by a heavy user of numerous different hallucinogens. While under the influence of "MSE" (probably actually MXE, methoxetamine, a PCP analog), he claims to have gone out at night and collected, then consumed, about 50 mushrooms. In his drug-influenced state, he identified them as "Big Laughing Gyms." The next day he started feeling more and more ill, returned to his collecting site, and then identified the mushrooms as Galerina marginata, a deadly amatoxin-containing species. He reported to the emergency room, but the staff supposedly did not believe any mushroom poisoning was involved and wanted to run numerous expensive tests, so he reports that he left and treated himself. He claims to have ingested activated charcoal and milk thistle capsules to cure himself – but he started treatment too late for the charcoal to be of use, and milk thistle capsules, though widely believed to protect the liver, are not absorbed into the bloodstream and so were of no help either.
There were reports dealing with 75 people (70 incidents) suffering non-life-threatening conditions after consuming mushrooms. Thanks to the work of Marilyn Shaw, the numbers reflect detailed reporting for the region covered by the Rocky Mountain Poison and Drug Center (Colorado, Hawaii, Idaho, Montana, and Nevada). We also have detailed reporting from Michigan thanks to the cooperation of Susan Smolinske at the Children’s Hospital of Michigan Poison Center. Her volunteer intern, Hanady Nasser-Beydoun, prepared a spreadsheet for us of all symptomatic mushroom poisoning cases that their center had handled. For the rest of the country, we know that reporting is very incomplete, so our numbers really cannot be used to indicate whether poisoning incidents are increasing or decreasing with time or whether poisoning incidents are more common in one region than another. Because of reports to the Rocky Mountain Poison and Drug Center and the Children’s Hospital of Michigan Poison Center, we received a significant number of reports of adverse reactions to hallucinogens. At least 14 reports involved adverse reaction to species in the genus Psilocybe. In two cases of Psilocybe ingestion, the patient became combative.
Chlorophyllum molybdites accounted for 12, possibly 13, of the reports of adverse reactions to mushrooms. Often the victim had only consumed one bite raw. Cooking seems to decrease the severity of the symptoms, but even cooked C. molybdites can cause significant gastric upset. One husband (an MD) treated his wife at home using Gatorade® after finding the hospital to be of little or no help. Two other individuals self-medicated with Gatorade® to replace electrolytes lost from excessive vomiting and diarrhea after consuming C. molybdites.
Adverse reactions to morels accounted for 10 of the reports. One case involved raw morels; the other cases involved cooked morels. One case involved alcohol with the meal. Whether that individual can eat morels without alcohol was not established. For some people, it is unwise to consume alcohol with a meal of morels, though a significant majority of individuals can enjoy a beer or wine with a morel meal. It is becoming increasingly clear that some people can develop a sensitivity to morels and suffer gastric distress after a morel meal when they had previously eaten morels for years without incident. We have even received the first report of life-threatening anaphylactic shock from morels. The affected individual had previously eaten morels for years without adverse effect.
Five individuals in three separate incidents were sickened by puffballs, both Calvatia species and Lycoperdon species. Puffballs are normally only a problem if they are no longer pure white inside. However, in these cases, victims said that they had consumed mushrooms that had not yet started to mature and darken inside.
Five cases involved purchased mushrooms. Four cases involved individual sensitivity to a specific species (one sensitivity to Pleurotus ostreatus, two to Lentinula edodes (shiitake), and one sensitivity to Agaricus bisporus (crimini)). The fifth case was troubling since it involved sale of the poisonous species, Omphalotus illudens, by an unreliable wild crafter. The chef at the restaurant where the mushrooms had been purchased sampled the dish before placing it on the menu, so only he became ill.
The final human case of particular note involved a case of kidney failure after mushroom ingestion of an unknown species. Kidney failure is exceptionally rare, having been reported only for Amanita smithiana (and possibly some other Amanita species in section Lepidella) and for a few UV-fluorescent Cortinarius species (only one case in North America and that was due to ingestion of Cortinarius orellanosus). It is unfortunate that the mushrooms were not identified in this unusual case.
We received 26 reports of dogs and one horse poisoned by mushrooms; 11 of the dogs died. Eight of the dog cases involved suspected amatoxins, with six deaths: five from confirmed or suspected Amanita species in the section Phalloideae, one from suspected Galerina marginata. It is notable how rapidly dogs can succumb (in as little as 55 to 60 hours post ingestion). When amatoxins are suspected, it is imperative that aggressive rehydration be started rapidly, especially since dogs typically refuse to eat or drink after consuming mushrooms that contain amatoxins.
One dog death was attributed to consumption of Amanita muscaria. The dog was given two doses of atropine as part of the treatment. However, atropine is strongly contraindicated in poisonings involving mushrooms in the Amanita muscaria group, the Amanita pantherina group, and Amanita aprica, where muscimol and ibotenic acid, not muscarine, are the toxins (Beug and Shaw, 2009). Two dog deaths were attributed to ingestion of Inocybe species and one dog death to suspicious unknown causes.
The problem of untrained individuals using the internet (or for that matter a book or other source) to identify mushrooms on their own came to light when a woman wrote that her dog had been poisoned by what she had confirmed was Amanita pantherina and that the symptoms matched poisoning by ibotenic acid and muscimol. However, the reported symptoms actually matched lycoperdonosis. This was confirmed when a picture of the mushroom was sent in: it was an old Lycoperdon. The correspondent confirmed that when the dog bit into the mushroom, a cloud of dark green spores arose. The symptoms had been caused by inhalation of that cloud of spores. | https://namyco.org/toxicology_committee_report_20.php |
Whales, dolphins, and porpoises are marine mammals that are closely related to each other. Dolphins belong to the family Delphinidae, while porpoises belong to the family Phocoenidae. However, many people cannot tell the two apart. Today, in this article, we will compare dolphins and porpoises.
What Does a Dolphin Look Like?
Dolphins are highly intelligent creatures with smooth skin, flippers, and a dorsal fin. They have a long, slender snout with around 100 teeth, and a streamlined body. The single blowhole at the top of the head has a flap that opens to reveal a pair of nostrils, which dolphins use to breathe when they surface. On each side of the head is an eye that moves independently of the other, which means dolphins can see ahead, to the sides, and behind. Dolphins have tiny ear holes, yet they are known for their excellent hearing.
Where Does a Dolphin Live?
Dolphins are marine animals; they inhabit the shallow areas of tropical and temperate oceans all over the world.
How Many Species of Dolphins are There?
There are 43 different recognized species of dolphins; 38 are marine dolphins, and the remaining 5 are river dolphins.
What is the Size of the Dolphin?
The largest and heaviest is the orca, also known as the killer whale, which can measure from 23 to 32 feet long and weigh up to 8.3 tons. The smallest dolphin species is Hector’s dolphin; an adult has a total length of 3 feet 11 inches to 5 feet 3 inches and weighs 88 to 132 pounds.
What Does a Dolphin Eat?
Dolphins feed on a variety of creatures such as herring, cod, mackerel, and squid. Some of the larger dolphins, like killer whales, feed on seals, sea lions, and even turtles. An adult dolphin eats around 4 to 9 percent of its body weight in fish each day, so a dolphin weighing 550 pounds will eat roughly 20 to 50 pounds of fish daily.
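As a quick worked check of that estimate: 4 percent of 550 pounds is 550 × 0.04 = 22 pounds, and 9 percent is 550 × 0.09 ≈ 49.5 pounds, which is where the 20 to 50 pound daily range comes from.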
How High Do Dolphins Jump?
Dolphins can jump as high as 30 feet above the water’s surface.
Dolphin Reproduction:
Dolphins do not have a specific breeding season; they can reproduce at any time of year. In particular locations, the highest mating activity has been observed during the spring and autumn seasons. The gestation period in dolphins is around nine months, though in some species it can reach up to 17 months. Dolphins deliver their calves in shallow waters. They usually give birth to one calf; the mother takes it to the surface for its first breath and nurses it with her milk for around 12 months.
Interesting Facts About Dolphins:
- Dolphins can fascinate humans in several ways.
- They are curious mammals that form strong bonds within their pods and have been known to help humans in various circumstances, including rescues and fishing.
- They have to come up to the surface for air at intervals ranging from 20 seconds to 30 minutes.
- These marine mammals cannot fall into a full deep sleep because they must remain conscious to breathe.
- EEG studies have shown that dolphins let one half of their brain sleep at a time.
- Dolphins make sounds that travel underwater, bounce off objects, and return to them as echoes. This echolocation also allows them to detect predators, even in the dark ocean.
- While feeding, dolphins work as a team to surround a school of fish and pack it tightly together. From there, they can take turns swimming through the middle to eat.
- Dolphins help other dolphins when they are sick, hurt, or giving birth.
- The smaller species of dolphins have a few predators, such as the bull shark, tiger shark, and great white shark.
- Dolphins have several forms of developed communication, including a signature whistle that allows other individuals to recognize them.
- These marine mammals can hear a higher range of frequencies than humans, from 20 to 150 kHz, making their hearing about seven times more sensitive than human ears.
- Dolphins do not drink seawater; they get water from the fish they eat.
Dolphin Behavior:
Dolphins are social mammals and prefer to live in pods that consist of dozens of individuals, although pod size can vary depending on species and location. In areas where they can find an abundance of food, pods can merge, forming a super-pod that may exceed 1,000 dolphins.
How Long do Dolphins Live?
The average lifespan of dolphins is from 20 to 90 years, depending on species.
Are Dolphins Endangered?
According to the IUCN, several dolphin species are listed as endangered. The statistics suggest that many dolphins are killed by illegal dynamite fishing, entanglement in fishing nets, collisions with boats, and marine pollution.
What Does a Porpoise Look Like?
Porpoises belong to the order Cetacea and are related to dolphins, but the two groups differ from each other. Like dolphins, porpoises have sleek bodies and large flippers. However, porpoises do not have elongated beaks; they have triangular dorsal fins, and their teeth are shaped like spades.
Where Do Porpoises Live?
Different porpoise species inhabit different regions according to their adaptations. The Gulf of California porpoise is found around the northern part of the Gulf of California. Black porpoises live off the coasts of eastern and western South America. The common porpoise prefers cold water and lives along the east coast of North America up toward southern Greenland. Some finless porpoises can be seen in the Yellow Sea.
How Many Species of Porpoise are There?
There are seven different species of porpoise. They are:
- Harbour porpoise
- Vaquita porpoise
- Burmeister’s porpoise
- Spectacled porpoise
- Indo-pacific finless porpoise
- Narrow-ridged finless porpoise
- Dall’s porpoise
What is the Size of the Porpoise?
Porpoises are smaller than dolphins. They grow to 5 to 6.5 feet long and weigh from 110 to 265 pounds.
What Does a Porpoise Eat?
The diet of a porpoise varies depending on the species, but it consists primarily of fish, octopus, squid, and crustaceans. Porpoises are also considered big eaters, consuming about ten percent of their body weight each day.
How Fast Can a Porpoise Swim?
The Dall’s porpoise is one of the fastest marine mammals that can swim at a speed of 30 miles per hour.
Porpoise Reproduction:
Not much is known about the reproductive behavior of porpoises. But as porpoises are mammals, they give live birth. The gestation period in these species lasts 10 to 11 months and yields a single offspring. The young are called pups or calves and are nursed for around 24 months.
Interesting Facts About Porpoises:
- The harbour porpoise is named for its habitat; it likes to stay in coastal waters no deeper than 500 feet.
- Finless porpoises appear to be black, but they are grey with bits of blue.
- These mammals are highly intelligent creatures that can learn many tricks while living in captivity.
- They use echolocation to avoid collisions with underwater objects and to search for food.
- The main predators of these creatures include sharks and killer whales.
- Porpoises are known to produce low-frequency sounds that are used for communication.
- Like dolphins, porpoises live in groups, called shoals, that can range from a couple of members to thousands.
How Long Does Porpoise Live?
Some of the porpoise species live for less than ten years, while some are known to survive for 20 years.
Are Porpoise Endangered?
The vaquita porpoise is classified as a critically endangered species by the IUCN, while the finless and Indo-Pacific finless porpoises are listed as vulnerable. Other species are either considered vulnerable or have no classification.
Porpoise versus Dolphin: Fight Comparison
In a fight between a dolphin and a porpoise, the dolphin would likely win because of its larger size and more aggressive behavior. Bottlenose dolphins have been seen flipping porpoises into the air. As discussed above, porpoises are smaller and cannot beat a dolphin.
| https://animalcreativefacts.com/dolphin-vs-porpoise/ |
The traditional method of surgical training has followed the ‘observe, practice, and teach’ model, which is useful for open surgery but insufficient for minimally invasive surgery. This study presents the validation of a new simulator designed for TMJ arthroscopy training. A group of 10 senior maxillofacial surgeons performed an arthroscopy procedure using the simulator. They then completed a questionnaire analyzing the realism of the simulator, its utility, and the educational quality of the audiovisual software. The mean age of the 10 surgeons was 42.6 years, and they had performed a mean of 151 arthroscopies. With regard to the realism of the simulator, 80% reported that it was of an appropriate size and design, and 70% noted the very realistic positions and relationships of the internal structures. Regarding its educational potential, 80% reported the simulator to be very useful for acquiring basic skills and for gaining a sensation of depth during access to the TMJ. Finally, 90% reported the prototype to be very useful for TMJ arthroscopy training. These preliminary results showed a high degree of approval. The general opinion of the group of experts was that the experience was rewarding and inspiring, and that the prototype has the educational potential for the achievement of basic TMJ arthroscopy skills.
The use of simulators for skills training is a common practice in our daily life, although we are probably unaware of this. Video games, virtual simulators in aeronautical engineering, and fire drills to evacuate a building are examples of simulations used in our daily routine. A training simulator can be defined as any system that provides the most realistic possible imitation of the steps necessary to follow in a specific procedure. Simulators are usually intended to recreate a real scenario in which events do not occur in an arbitrary way, but rather are previously planned. In this way, training with simulation allows the same procedure to be repeated as many times as needed until the basic skills are acquired, which will later be used in real life.
Generally, simulators are categorized into two types, realistic and virtual. However, it is becoming increasingly more common to find hybrid simulators that combine a device or real scenario with virtual reality software. The use of such simulators in the various fields of Medicine is widespread, such as the use of mannequins to learn to find a blood vessel and to perform the manoeuvres for cardiopulmonary resuscitation or orotracheal intubation.
In surgery, the use of simulators has been common practice for years. There is a multitude of designs – physical, virtual, and hybrid – with hybrid designs being the most recent and undergoing constant development. Another model is the use of animals in experiments, for which anatomical dissimilarities need to be taken into account. Despite the efforts made to find the perfect simulator, the cadaver continues to be the gold standard due to its close resemblance to the real patient. However, the cadaver has certain drawbacks, such as the high cost, the legal requirements, lack of availability in all hospitals, lack of reusability, failure to reproduce different pathologies, and numerous political, cultural, and religious considerations.
For surgical training, most learning programmes in recent decades have followed the Halstedian model, which consists of ‘observing, practicing, and teaching’. Surgeons without experience acquire autonomy in a progressive way as they follow surgical procedures under the supervision of an expert surgeon. Nevertheless, there are many limitations to the traditional training method including high costs, the pressure to be present, limited training time, difficulties in monitoring, ethical and legal restrictions, and the lack of standardization; furthermore, it depends on the number of patients, the opportunities for learning, and the advent of new minimally invasive techniques. As a consequence of all these drawbacks, numerous training modalities for surgical techniques have been developed outside the operating room so that the surgeon can negotiate the learning curve before moving on to real patients.
Within the field of maxillofacial surgery, arthroscopy of the temporomandibular joint (TMJ) is a common technique that has proven effective in the diagnosis and treatment of TMJ disorders. However, the difficulties of the technique make learning complex and sometimes frustrating. Given the extensive experience of the present study team in performing TMJ arthroscopy procedures, there is an apparent obligation for us to offer our surgeons, visitors, and residents a method that will enable them to learn the technique. This method should be reproducible, accessible to any specialist, and allow them to keep updated.
A realistic physical simulator that has been developed for training in arthroscopy of the TMJ is presented herein. The prototype has been constructed according to anthropometric standards using a material that reproduces the different textures and colours of all anatomical parts in the design (Neoderma, Brasil) (Fig. 1). Thus, the skin, subcutaneous tissue, parotid gland, facial nerve, temporal vessels, ligaments, and articular capsule can be distinguished (Fig. 2). In addition, a virtual teaching unit has been designed that consists of an electronic device connected to the simulator, which contains a library of contents grouped into different categories, including theoretical information such as explanatory videos.
The aim of this study was to obtain and report preliminary results for the validation of the simulator. This validation study involved a group of recognized maxillofacial surgeons from Spain with experience in the area of endoscopic surgery of the TMJ, who analyzed both the realism and the teaching potential of the simulator.
Materials and methods
A group of 10 expert surgeons was formed to execute a sequential practice exercise in which they performed an arthroscopy in the simulator and afterwards analyzed the audiovisual contents of the teaching unit. After the completion of both exercises, all of the participants completed a questionnaire to evaluate the realism of the model and its usefulness in surgical training. In addition, the surgeons were given the opportunity to express their personal opinions of the experience.
The first practice exercise consisted of performing an arthroscopy of the right side under the supervision of an expert surgeon and the engineer collaborating on the project (Fig. 3). The exercise was carried out with a Storz arthroscope (Karl Storz-Endoskope, Tuttlingen, Germany) using an eyepiece angled at 30° with a diameter of 2.3 mm (identical to the one used in the clinic); a tracker was added to monitor the speed, time, and precision of each movement, for data analysis in subsequent studies.
All of the participants had to complete the following steps of the practice exercise (Fig. 4): (1) Access the superior joint space of the TMJ. (2) Complete the examination from the posterior to the anterior recess, identifying all the joint structures during this movement (retrodiscal tissues, posterior ligament, articular disc, glenoid fossa, anterior recess, and pterygoid window). (3) Repeat the movement towards the back. (4) Insert the drainage cannula and triangulate. (5) Use a cutting instrument, via a second cannula, and eliminate an adhesion.
As the surgeons completed the practice exercise, they started to examine the teaching unit that contained different explanatory videos classified by theoretical content. The participants then completed the parts of the questionnaire referring to the teaching unit, and in this case, as in the practice exercise, most of the surgeons expressed their personal opinion of the unit.
Results
Ten surgeons participated, of whom nine were male and one was female; their average age was 42.6 years. All of the participants had prior experience in TMJ arthroscopy, averaging 11.5 years (range 1–26 years). The average number of arthroscopies undertaken by each of the participants was 151; the average number of arthroscopies at which the participant was an assistant was 147. Regarding prior experience using other types of simulator, only one surgeon had previously used a simulator.
In terms of formative experience in maxillofacial surgery, eight of the participants had been trained surgically as residents in different hospitals in Spain, a task that they had been involved in for years. Of these surgeons, seven did so specifically using the endoscopic technique for the TMJ.
In relation to the frequency with which they played video games, 40% claimed never to have played them, while 40% played occasionally and 20% did so once a month.
After the practice exercise had been completed, all of the participants completed a questionnaire that was divided into blocks: evaluation of the realism of the simulator, evaluation of its potential as an educational tool, and personal opinion. The items analyzed in the block referring to the realism of the simulator are listed in Table 1. These were scored from 1 to 5, with a score of 1 representing ‘not realistic’ and a score of 5 representing ‘perfectly realistic’. The items with the highest scores were the general external appearance of the simulator in terms of proportions and the locations of the anatomical structures, the sizes of the internal structures, and the locations of the internal structures and the relationships between them. The item with the lowest score related to the capacity of the simulator to maintain the saline solution within the joint cavity during irrigation, since the device was not completely watertight.
| Item | 1 Not realistic | 2 Not very realistic | 3 Quite realistic | 4 Very realistic | 5 Perfectly realistic |
|---|---|---|---|---|---|
| The general external appearance of the simulator (proportions and locations of the anatomical structures) | | | 10% | 40% | 50% |
| Tactile sensation experienced with the instruments during the procedure | | | 10% | 50% | 40% |
| Sizes of the internal structures of the joint cavity | | | 10% | 10% | 80% |
| The appearance of the internal tissues of the cavity | | | 10% | 50% | 40% |
| Water tightness of the joint during the irrigation manoeuvres | | 20% | 20% | 40% | 20% |
| Locations of the internal structures of the cavity and the relationships between these (temporal fossa, joint disc, retrodiscal tissue) | | | | 30% | 70% |
| The realism of the pathologies incorporated into the simulator | | | 20% | 60% | 20% |
| The relationship between the external anatomical structures used as a reference for access to the cavity and the joint cavity itself | | | | 70% | 30% |
On analyzing the overall data, it was found that most of the scores corresponded to the highest evaluation level, thus in general all of the participants characterized the prototype as at least ‘very realistic’.
When asked about the skin, muscle, and joint capsule resistance in the simulator, 70% considered it lower than in a real patient, and almost 50% of the subjects felt that this quality lowered the training value of the simulator.
The second block of questions was designed to analyze the teaching potential of the prototype (data in Table 2). The results proved similar, given that the participants regarded the simulator as a good tool with which to learn basic skills, perceive the depth of field, practice the movement of the instruments within the joint cavity, and gain access to the joint. | https://pocketdentistry.com/validation-of-a-simulator-for-temporomandibular-joint-arthroscopy/ |
The utility model discloses a vertical connecting device for a fruit tree/cotton micro-irrigation double-branch-pipe regulating and controlling irrigation system, and an application device thereof. In a fruit tree/cotton intercropping mode, the main pipe and the branch pipes are laid perpendicular to each other, with the main pipe laid below the ground. At the connection between the branch pipes and the main pipe in the drip irrigation system, a reducing tee joins the branch pipes to the main pipe and is connected with a ground-surface water-supply opening. A tee is arranged at the ground-surface water-outlet opening, with a control valve on each of its two bypass ends; a branch-pipe tee is connected to each branch-pipe control valve, and the branch pipes are connected to the two bypass ends of the branch-pipe tees. The branch pipes are fitted with capillary tubes of different types as needed; one branch pipe supplies water to the fruit trees while the other supplies water to the cotton. This forms the vertical connecting device of the fruit tree/cotton micro-irrigation double-branch-pipe regulating and controlling irrigation system. For the application device, the main pipe is laid below the ground with a water-outlet pile reserved; the main pipe and the branch pipes are laid perpendicular to each other; the main pipe is connected to the reducing tee and then to the branch-pipe tees; the branch-pipe tees are each connected to two branch pipes; one branch pipe is connected to dripper tapes for the cotton while the other is connected to drippers for the fruit trees. In this way, a double-branch-pipe regulating and controlling irrigation system is realized in which the fruit trees and the cotton can use the same micro-irrigation system simultaneously. The utility model resolves the conflicts that fruit trees and cotton otherwise cannot use the same micro-irrigation system simultaneously and that they differ in irrigation and fertilization needs, and it has wide practicality. | |
Negative sentences are those which state that something is not the case. In English grammar, the general rule for forming a negative sentence is that the word ‘not’ appears after an auxiliary verb in the positive sentence. If there is no auxiliary verb in the positive sentence, then you add the auxiliary verb ‘do’.
Examples of negative sentences
The most common way to write a negative statement is to use a negated auxiliary verb. An auxiliary verb is a verb used in forming the tenses, moods, and voices of other verbs. The primary auxiliary verbs include:
To be (am, is, are, was, were)
To have (have, has, had)
To do (do, does, did)
Look at the examples of negative sentences in different tenses. Note that some sentences use the contracted forms of informal writing and speech, while others use the full forms.
Positive sentence: I sing
To create a negative sentence from this, let’s use this format: Tense + Negative Word or its Contracted Form.
If there is no auxiliary verb in the positive sentence, as in the Present Simple and Past Simple tenses, then you add one, such as the auxiliary verb, ‘do’.
Present Simple Tense
do+not = don’t
I do not sing = I don’t sing
does+not = doesn’t
She does not sing = She doesn’t sing
Past Simple Tense
did+not = didn’t sing
I/She didn’t sing
Here are the ways a negative sentence looks using other tenses:
Present Progressive
is+not = isn’t
She is not singing = She isn’t singing
are+not = aren’t
We are not singing = We aren’t singing
Past Progressive
was+not = wasn’t
I was not singing = I wasn’t singing
were+not = weren’t
They were not singing = They weren’t singing
Present Perfect
have+not = haven’t has+not = hasn’t
You have not sung = You haven’t sung
She has not sung = She hasn’t sung
Besides the word ‘not’, the following are some of the negative words that can also be used to create a negative sentence:
No
None
No one
Nobody
Nothing
Neither
Nowhere
Never
Lastly, a negative sentence is not the same as a double negative. In a double negative, the intended meaning is usually understood, but the sentence does not follow proper grammatical rules and usage. In fact, if you read a double negative strictly according to the grammar, its meaning is exactly the opposite.
The double negative is usually produced by combining the negative form of a verb (did not, was not, etc.) with a negative pronoun such as nothing:
I didn’t do nothing.
I didn’t see nobody.
These sentences really mean something positive, i.e.:
I did do something.
I saw somebody.
A double negative is not actually a negative sentence, but a positive one!
| https://blog.talk.edu/learn-english/negative-sentences/ |
Purpose of the role:
- The purpose of this role is to own the responsibility for defining and building IKS's Digital Ecosystem to achieve product goals and organizational vision as well as create net-new value for the organization through technology initiatives.
- You will be required to define the strategy to deliver business outcomes through technology adoption at IKS Health by evolving current products and identifying new opportunities for disruption.
- In this role, you will have the ability to deliver business impact across business service lines.
Role in IKS Health #Tech #Team
This is a very visible and hands-on Product Management Leadership role. This role will effectively act as the intersection between operations, solutions, technology development and user experience to design, build and implement innovative technology solutions.
1. You will act as the primary product owner of IKS technology products and drive the agenda for IKS Health digital transformation from ideation, exploration, approval, development, implementation, execution, measurement and ongoing development
2. Manage stakeholders through structured/unstructured communication channels; collaborate with, and influence, leaders in operations across all levels regarding product needs, product plans, organizational prioritization, and project updates
3. Facilitate meetings to analyze business problems, gather relevant information and business requirements and to ensure understanding of the stakeholder's knowledge and the stage of the project life cycle.
4. Develop and maintain operational metrics and reporting to accurately measure product performance and quantify the business impact of technology.
5. Develop key product documentation including Business Case, Business Requirements / Product Specifications and user cases
6. Collaborate closely with the development team to manage delivery, unblock the development team, and prioritize the backlog using an agile approach.
7. Act as a thought leader and subject matter expert on technology disruption and tech implementation.
8. Ability to promote trust and confidence in the technology function across the organization.
To succeed in this role you will need to have
1. 10+ years of product development experience, product consulting or similar management consulting role
2. Past experience in a leadership role at a Tech Start-up will be highly desirable
3. Prior experience in business analysis in a healthcare software company or a healthcare KPO is preferred but not mandatory
4. Experience delivering products and showing the impact of the work you have done
5. Experience in the US healthcare domain, US Healthcare Standards preferred
Passion for Technology
- Passion for technology and willingness to disrupt the status quo using existing and emerging technologies
Strategic Orientation
- The ability to think long-term and beyond one's own area. It involves three key dimensions: business awareness; critical analysis and integration of information; and the ability to develop an action-oriented plan.
Detail orientation
- The ability to perform as an individual contributor in technical solution design while managing the team to deliver on the business, technology, and infrastructure goals.
Business Acumen
- Experience developing tech solutions for driving business growth, reducing costs, improving operations, and successful creation of solutions for complex business nuances
Collaboration and Influence
- The ability to work effectively with, and influence, those outside of one's own functional area for positive impact on business performance. | https://www.iimjobs.com/j/iks-health-leader-technology-product-management-10-14-yrs-765439.html?ref=cl |
Jesus came as the embodiment of love and compassion and lived among men, holding forth the highest ideals of life, 2000 years ago, when narrow pride and ignorance defiled mankind. As Bhagawan says, the celebration must take the form of adherence to His teachings, loyalty to His principles and practicing the discipline and experiencing the awareness of the Divine that He sought to awaken.
The proceedings on the auspicious morning of Christmas began with Bhagawan’s arrival amidst melodious Bhajans and songs. After He had blessed all who had gathered, Bhagawan took His seat and the morning’s proceedings began with Sister Bhuvana Santhanam leading the whole gathering in reciting the Lord’s Prayer. This was followed by an offering of carols and songs by devotees from Singapore. Their presentation included a number of popular and well-known Christmas songs, including Sai You Are, O Come All Ye Faithful and Long Time Ago in Bethlehem. One of the highlights of Christmas morning every year is the arrival of Santa Claus, and this year it was indeed a blissful sight as he entered Premamrutham on a sleigh complete with reindeer. Bhagawan blessed all the chocolates and sweets that had been brought, and with the help of two assistants they were distributed to everyone, bringing much joy and laughter to all present. Thereafter, Bhagawan blessed the Singapore group with tokens of love and took pictures with the members of the group.
After a short break, the proceedings continued with an address by Mr Isaac Tigrett. Mr Tigrett spoke about the omnipresence and omniscience of God. He detailed some aspects of the crucifixion of Christ and spoke about the equanimity of Jesus. Even though much injustice and pain had to be borne by Jesus, He bore no hatred or negativity towards anyone and this is the kind of equanimity which Bhagawan teaches everyone to develop.
The next speaker for the morning, was Sri B N Narasimha Murthy who spoke further about the equanimity and compassion of Jesus. Speaking about the life and message of Jesus, he urged everyone to keep progressing and be motivated by the principles of love and compassion in order to become more and more selfless.
Bhagawan then granted His Divine message and elaborated on the story of Jesus Christ. He said, “Today, we are celebrating Christmas Day which marks the birth, life and the message of Jesus, the Christ. When we think of Jesus Christ, we think of love. When we think of Jesus, we think of compassion. When we think of Jesus, we think of sacrifice. And when we think of Jesus, we think of divinity. For Jesus isn’t an individual who lived in Jerusalem at some point in time. He is what he practiced. Every time humankind forgets how to live in peace and harmony, how to show love and compassion and how to serve and sacrifice, God comes down in a human form. And when He comes taking upon Himself a physical body, He truly instills in the hearts of men the values of love and service.” Bhagawan urged everyone to develop compassion and love for everyone, including those who do not love us. He continued, “It won’t be wrong to say that Jesus was the incarnation of compassion. If we want to worship and celebrate him, we should follow his message of unconditional compassion for all beings. He always pointed towards his heart. What did he mean by that? Heart means that which is full of compassion. It is easy to love those who love us. But a real devotee of the Lord is one who loves those who don’t love him too. This is the message of Jesus. Compassion, compassion, compassion. When you see someone in sorrow and distress, your heart should melt in compassion and your hands should help. Whatever is possible within your own might, you should do to relieve distress.”
His Divine Christmas message thus concluded, the Bhajan ‘Love Is My Form’ was sung and Mangala Arati was offered to Bhagawan. Prasadam was distributed to all, and a beautiful Christmas morning with the Lord came to a close.
Evening
The music and joy of the Christmas festivities continued well in to the evening, as Bhagawan entered Premamrutham amidst Bhajans. The evening began with the launch of a new audio CD, Ananda Sudha – Volume 1 by Sister Pooja Vaidyanath. Hailing from Chennai, Sister Pooja offered this CD of nine devotional songs arranged and recorded by a group of renowned musicians from all across India.
It was a performance everyone had been eagerly waiting for: the girls’ and boys’ brass bands. The evening commenced with the girls’ band, Sai’s Symphony, offering three songs: Silent Night, Gloria and the popular We Will Follow You. It was the first time that the girls were on stage as an all-girls band. The love and devotion expressed by these girls through their music was truly amazing and beautiful.
They were followed by the boys’ band, Sai’s Angels who played a wide range of songs, including What A Wonderful Lord and Hawaii Five-O among others. Both the girls’ and boys’ performances were interspersed with heartfelt speeches from the band members.
Thereafter, Bhagawan blessed the boys and girls along with their band teacher Mr Dimitris Lambrianos, with tokens of His love. After taking group photos with each of the bands, Bhagawan received Mangala Arati. Prasadam was distributed and thus ended a most beautiful day of Christmas celebrations in the Divine presence. | https://saivrinda.org/updates/christmas-celebrations-sathya-sai-grama-muddenahalli-december-25-2018 |
Andrew J. Preston is a Licensed Psychoeducational Specialist in the state of South Carolina. He is available to conduct psychoeducational evaluations to determine an individual's pattern of strengths and weaknesses. Dr. Preston is available to conduct independent educational evaluations as available under the Individuals with Disabilities Educational Improvement Act (IDEA) of 2004.
Based on the results of the evaluation, a number of recommendations may be appropriate. Intervention recommendations may include school-based accommodations (i.e., extended time on tests and assignments, preferential seating, sensory breaks, assistance with note-taking, alternate testing locations, foreign language or math waivers), and/or extended time on entrance exams (i.e., SAT, ACT, GRE, LSAT, MCAT, Bar Exam).
Your child (ages 3 through 21) may be eligible for a psychoeducational evaluation at no expense to you. These are provided by the School Psychologist assigned to the public school your child attends or by the School Psychologist at Child Find if your child does not attend a public school. Call the school district where your child attends for more information. | http://www.andrewjpreston.com/ |
This algorithm is not considered secure by modern standards. It should only be used when verifying existing hashes, or when interacting with applications that require this format. For new code, see the list of recommended hashes.
New in version 1.6.
This class implements the hash algorithm used by Microsoft SQL Server 2005 to store its user account passwords, replacing the slightly less secure mssql2000 variant.
This class can be used directly as follows:
>>> from passlib.hash import mssql2005 as m25
>>> # hash password
>>> h = m25.hash("password")
>>> h
'0x01006ACDF9FF5D2E211B392EEF1175EFFE13B3A368CE2F94038B'
>>> # verify password
>>> m25.verify("password", h)
True
>>> m25.verify("letmein", h)
False
See also
- password hash usage – for more usage examples
- mssql2000 – the predecessor to this hash.
Interface

class passlib.hash.mssql2005

This class implements the password hash used by MS-SQL 2005, and follows the PasswordHash API.

It supports a fixed-length salt.

The using() method accepts the following optional keywords:

Parameters:

- salt (bytes) – Optional salt string. If not specified, one will be autogenerated (this is recommended). If specified, it must be 4 bytes in length.
- relaxed (bool) – By default, providing an invalid value for one of the other keywords will result in a ValueError. If relaxed=True, and the error can be corrected, a PasslibHashWarning will be issued instead. Correctable errors include salt strings that are too long.
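As a minimal usage sketch of these keywords (the fixed salt below is the 4-byte value from the example hash earlier on this page, supplied purely for illustration; normally the salt keyword is omitted so that a random salt is autogenerated):

>>> from passlib.hash import mssql2005
>>> # pin the salt to the example value 6ACDF9FF (demonstration only)
>>> custom = mssql2005.using(salt=b"\x6a\xcd\xf9\xff")
>>> h = custom.hash("password")
>>> h
'0x01006ACDF9FF5D2E211B392EEF1175EFFE13B3A368CE2F94038B'
>>> # the result still verifies under the normal handler
>>> mssql2005.verify("password", h)
True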
Format & Algorithm

MSSQL 2005 hashes are usually presented as a series of 52 upper-case hexadecimal characters, prefixed by 0x. An example MSSQL 2005 hash (of "password"):

0x01006ACDF9FF5D2E211B392EEF1175EFFE13B3A368CE2F94038B
This encodes 26 bytes of raw data, consisting of:

- a 2-byte constant 0100
- 4 bytes of salt (6ACDF9FF in the example)
- a 20-byte digest (5D2E211B392EEF1175EFFE13B3A368CE2F94038B in the example)

The digest is generated by encoding the unicode password using UTF-16-LE, and calculating SHA1(encoded_secret + salt).

This format and algorithm is identical to mssql2000, except that this hash omits the 2nd case-insensitive digest used by MSSQL 2000.
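For readers who want to cross-check the construction, here is a minimal standard-library sketch of the algorithm just described, reproducing the example hash above; this is for illustration only, and real code should use the hash()/verify() methods shown earlier:

>>> import hashlib
>>> password, salt = "password", bytes.fromhex("6ACDF9FF")
>>> # digest = SHA1(UTF-16-LE encoded password + 4-byte salt)
>>> digest = hashlib.sha1(password.encode("utf-16-le") + salt).digest()
>>> # prepend the 2-byte constant 0100 and the salt, render as upper-case hex
>>> "0x0100" + (salt + digest).hex().upper()
'0x01006ACDF9FF5D2E211B392EEF1175EFFE13B3A368CE2F94038B'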
Note

MSSQL 2005 hashes do not actually have a native textual format, as they are stored as raw bytes in an SQL table. However, when external programs deal with them, MSSQL generally encodes raw bytes as upper-case hexadecimal, prefixed with 0x. This is the representation Passlib uses.
Security Issues

This algorithm is reasonably weak, and shouldn’t be used for any purpose besides manipulating existing MSSQL 2005 hashes. This is mainly due to its simplicity, and to years of research on high-speed SHA1 implementations, which together make efficient brute force attacks feasible. | https://passlib.readthedocs.io/en/stable/lib/passlib.hash.mssql2005.html |
The direct and indirect impacts of global warming, such as ocean acidification and mass bleaching, have led to extensive and long-term damage to the Great Barrier Reef. Large areas of the reef have no prospect of recovering naturally, so an intervention has been designed to correct what people have done at this World Heritage site.
The aim of the larval restoration project is to restore breeding stock to the damaged reefs and to ensure the corals’ reproductive lives are healthy. The team will harvest coral sperm and eggs and grow new larvae that will then be released in the most damaged areas of the reef. Efforts begin this weekend in the Arlington Reef area just off the coast of Cairns in Queensland.
"It's the first time that the entire breeding and larval settlement process takes place on a cliff at the Great Barrier Reef," said Professor Peter Harrison of Southern Cross University. "Our team will rebuild hundreds of square meters in order to reach square kilometers in the future, a scale that has not previously tried."
Harrison's team tested this regenerative approach on smaller scales in the Philippines, as well as at Heron Island and One Tree Island in the southern Great Barrier Reef. If this larger-scale attempt is successful, it could be used elsewhere in the world.
One particularly interesting innovation of this experiment is the joint cultivation of the microscopic algae known as zooxanthellae, which live in the tissues of many corals. Coral and microalgae have a symbiotic relationship: the coral protects the algae and provides nutrients, while the algae produce oxygen and remove coral waste.
"These micro-organisms and their symbiosis with corals are essential for healthy coral communities that create cliffs," said Professor David Suggett of the University of Technology in Sydney. "So we're trying to speed up this process to see if the survival and early growth of young corals can be backed up by a quick algae recapture."
The project is a collaboration between Harrison, Suggett, Katie Chartrand of James Cook University, the Great Barrier Reef Marine Park, the Queensland Parks & Wildlife Service, and other key industry partners. The intervention is a courageous step, but it should not be considered a way to save the reef on its own; it is damage control.
"Our approach to restoring cliffs aims to buy time for coral populations to survive and develop until emissions are reduced and our climate stabilizes," said Professor Harrison. "Climate action is the only way to ensure that coral reefs can survive the future." | https://hamsara.com/unitedkingdom/researchers-are-trying-to-find-the-largest-rebate-project-in-the-great-barrier-reef/ |
Age of Ash
Daniel Abraham
From New York Times bestselling and critically acclaimed author Daniel Abraham, co-author of The Expanse, comes an ambitious new fantasy trilogy where every story matters, and the fate of a city is woven from them all.
Kithamar is a center of trade and wealth, an ancient city with a long, bloody history where countless thousands live and their stories unfold.
This is Alys's.
When her brother is murdered, a petty thief from the slums of Longhill sets out to discover who killed him and why. But the more she discovers about him, the more she learns about herself, and the truths she finds are more dangerous than knives.
Swept up in an intrigue as deep as the roots of Kithamar, where the secrets of the lowest born can sometimes topple thrones, the story Alys chooses will have the power to change everything.
Daniel Abraham
Daniel James Abraham (born November 14, 1969), pen names M. L. N. Hanover and James S. A. Corey, is an American novelist, comic book writer, screenwriter, and television producer. He is best known as the author of The Long Price Quartet and The Dagger and the Coin fantasy series, and with Ty Franck, as the co-author of The Expanse series of science fiction novels, written under the joint pseudonym James S. A. Corey. The series has been adapted into the television series The Expanse (2015–present), with both Abraham and Franck serving as writers and producers on the show.
The Kithamar Trilogy :: Series
Series contains 3 primary works and has 3 total works.
Each book in the trilogy unfolds within the walls of a single great city, over the course of one tumultuous year, a different character's perspective, and the fate of the city is woven from them all. | https://www.risingshadow.net/library/book/61330-age-of-ash |
Sharon Cass-Toole, DCEP, RP, is a Registered Psychotherapist, Counselor and Wellness Consultant specializing in trauma, PTSD, workplace issues and nutritional counseling.
Sharon’s approach to therapy consists of a class of treatments that uses the body’s own electrical system to rapidly remove emotional, spiritual and physical blocks that interfere with optimal mind-body-spirit functioning. Keeping in mind that rapport and providing clients with a safe place to share and process is as important as procedure, Sharon uses a variety of methods, customized toward each individual client for the best possible outcome.
Individual sessions may include Emotional Freedom Techniques, Cognitive Behavior Therapy, Eye Movement Desensitization & Reprocessing, Tapas Acupressure Technique, Trance Therapy, Aromatherapy (Certified Clinical Aromatherapist) and other approaches.
These specialized methods, along with “talk therapy” are used in the treatment of post traumatic stress, depression, anxiety and panic attacks, fears and phobias, addictive cravings, weight loss, negative memories, sexual abuse issues, grief and loss, allergies, performance anxiety, self esteem issues, sports performance, physical pain and symptom management.
Coverage by most insurance companies in Ontario. | http://meridianpsych.com/ |
The Boise State CARE team was created in fall 2011 because of campus incidents of concern and efforts by other institutions of learning to prevent tragedies. The CARE team has defined its mission as responding to reports regarding students, faculty, staff, and third parties who exhibit distressing, disturbing, or disruptive behavior that may pose a threat to themselves or the university, and to prevent acts of violence or self-harm (Policy 12050).
The campus community is encouraged to submit CARE alerts at boisestate.edu/care. If you are unsure whether a CARE alert is appropriate, it is helpful to ask yourself "could this behavior put someone (self or others) at risk?" If the answer is yes, then submit.
When a CARE alert is submitted, the CARE team reviews the information to determine the necessary response plan. In most instances, the CARE team brings in appropriate offices and campus/community personnel to assist with intervention and may work alongside the individual who submitted the alert to address their concern.
CARE alerts can be submitted anonymously; however, anonymous reporting often limits the team’s ability to respond. Additionally, even though the CARE team strives to maintain a high level of anonymity, confidentiality cannot be guaranteed. If you are worried about retaliation for submitting a CARE alert, please contact the Office of the Dean of Students at (208) 426-1527.
Regardless of an individual’s comfort level managing conflict, addressing behavior in the classroom can be stressful and often leads to faculty experiencing unanticipated issues around work performance, perceptions about students, and even their overall health and well-being. There are resources available to help support you.
Employee Assistance Program (EAP)
The Employee Assistance Program (EAP) is a free, confidential service that provides short-term counseling to eligible employees and their families to help address personal and work-life issues. EAP promotes problem-solving and stress resilience through counseling, coaching, and consultation.
To access this benefit, simply call the EAP at (877) 427-2327 and identify yourself as a Boise State University employee. Employees and their immediate family members can receive up to 5 free consultations per fiscal year. You may also access information online at www.guidanceresources.com by entering the company ID: SOIEAP.
Human Resources (HR)
Human Resources is available to consult with faculty to identify resources that may be of benefit based on their particular situation. The HR office is located at 2209 W University Dr., Capitol Village #3 and is open 8:00 a.m. to 5:00 p.m., Monday through Friday, except during scheduled closures. You can reach Human Resources at (208) 426-1616 or email at [email protected].
Dean of Students
The Office of the Dean of Students is able to provide guidance on addressing distressing, disruptive and disturbing behaviors in the classroom, mediation of conflict, and guidance on removing a student from class using Policy 2050. Students can also be referred directly for support services, campus resource navigation, and for help understanding university policies and procedures. You can reach the Dean of Students Office at (208) 426-1527 or email at [email protected].
Public Safety
The Boise State University Department of Public Safety operates 24 hours a day, 7 days a week. The security team is staffed with trained professional security and police officers. Boise State University security officers are first aid, CPR, and AED certified, and receive continual security training throughout the year. Boise State University contracts with the Boise Police Department (BPD) to provide police and security services to the university campus and community. The Boise Police Department is responsible for law enforcement, crime prevention programs, reporting criminal activity and crime-related problems on campus, and emergency response at Boise State University. Resources and services include:
● Security escorts – call (208) 426-6911
● Emergency telephones located around campus
● Silent witness reporting (anonymous suspicious behavior or crime reporting, located at boisestate.edu/publicsafety-security/)
● Online crime reporting (visit City of Boise Online Crime Reporting)
Academic Colleges, Departments, and Programs
Those who serve in a director, chair, or dean role are often available to consult with faculty regarding their concerns and assist in developing a plan of support. The administrative staff in your program, department, or college is also knowledgeable about support services and can aid in connecting you to people or resources. Additionally, your colleagues and peers are a wealth of knowledge and support. Reach out to these people in your areas as you feel comfortable and as it feels appropriate based on the level of the concern or issue.
Center for Teaching and Learning
The Center for Teaching and Learning aims to support, promote, and enhance teaching effectiveness and to facilitate engagement in student learning. It offers consultation services (e.g., on managing hot moments or trying a new teaching strategy), observations, mid-semester assessments (MAPs), workshops, and other programming to support the use of evidence-based instructional practices. It fosters dialogue, scholarship, innovation, and excellence in learner-centered strategies.
Faculty Ombuds
Faculty ombuds is a neutral, informal source of assistance. There may be matters you wish to explore "off the record," or you may need information or informal advice. Perhaps you are facing problems for which formal channels need to be invoked, but you are not sure how they work or what the implications of using them are. There may be issues that have not been satisfactorily addressed or resolved despite your best efforts. The ombuds helps by listening, analyzing and clarifying the problem, facilitating dialogue, and explaining university policies and procedures. | https://www.boisestate.edu/deanofstudents/faculty-resources/faculty-guide-for-student-behavior/support-for-faculty-and-staff/ |
children from 3 years,
public playgrounds
exercise activities
Technical Specifications
Lifting Device: -
Total Length: 3.05 m
Total Width: 1.10 m
Total Height: 2.40 m
Minimum Space Length: 6.55 m
Minimum Space Width: 4.10 m
Minimum Space Height: 3.00 m
Heaviest Single Piece: 135.00 kg
(Total) Weight: 135.00 kg
Free Height of Fall: 1.50 m
Platform Heights: -
Foundations
Foundation 1: 1 piece - 1.25 x 0.40 x 0.40 m, volume: 0.20 m3
Foundation 2: -
Foundation 3: -
Foundation 4: -
Foundation 5: -
Foundation 6: -
Assembly
Number of Installers: -
Assembly Time: -
Assembly Details: Delivered in one piece (foundation work not included)
Technical Data
Base area: 1.10 x 3.05 m
Minimum space: 4.10 x 6.55 m
Extension height: 1.40 m
Sitting height: 1.50 m
Slide inclination: 36°
Slide width: 1.10 m
Slide plate thickness: 2.5 mm
Weight: 135 kg in total
One-piece stainless steel construction, with additional support in the outlet area. Particularly quiet thanks to the shape of the trough. Please note that in the area of the flange on the slide seat, a suitable surface (wooden platform or planks, concrete foundation, etc.) must be available on site.
For your planning, please note that stainless steel slides should be oriented to the north-east, or placed in the partial shade of trees, to limit heating of the slide surface. | https://www.redlynchleisure.co.uk/product/extension-wide-slide-1-40/ |
This invention relates to a method of fracturing and heating a gas hydrate formation to convert the hydrate into producible gas. In one aspect, the invention relates to a method of treating a subterranean formation underlying or within the permafrost.
The term "permafrost" refers to permanently frozen subsoil continuous in underlying polar regions and occurring locally in perennially frigid areas. Permafrost begins from a few inches to several feet below the surface and may extend downward as much as 1000 to 2000 feet, depending on its geographic location. In addition to granular ice in the interstices of the soil particles, there may be sizable bodies of solid ice.
In many areas, gas-bearing formations are found in close proximity to the base of the permafrost or within the permafrost itself. The proximity of the permafrost to gas formations has two significant effects: (1) the low temperature and pressure conditions of the gas in the presence of water results in a condition wherein the gas is trapped in a crystalline water structure in the form of a solid hydrate and (2) the low overburden pressure through the permafrost produces earth stresses such that fracturing treatments in or near the permafrost results in horizontal fractures.
The structure of the gas hydrate prevents removal of the gas from the formation by conventional production techniques. The application of heat, as by the injection of hot liquids, will cause the hydrate to dissociate and permit the release of gas, but the heat dissipates rapidly.
Hydraulic fracturing is a common technique of stimulating production by injecting a fluid into the formation at pressures and rates to cause the formation to fail and produce a fracture or crack therein. It is obvious that this technique is not applicable in gas hydrate formations because the hydrate remains immobile.
U.S. Pat. No. 5,620,049 discloses a well treatment process which combines hydraulic fracturing followed by heating the fracture using electric current. This process is disclosed in connection with the treatment of petroleum bearing formations, and not gas hydrate formations. The fracture generated in the subterranean formations disclosed in U.S. Pat. No. 5,620,049 is a vertical fracture. As described in more detail below, the method of the present invention requires that the fracture treatment produce horizontal fractures.
In order to minimize the possibility of your students cheating on assessments in Canvas, we recommend that you take a few precautions while setting up your graded quizzes and exams.
Canvas's default feedback option allows students to see the correct answers for all questions both as soon as they submit the assessment and at any point after that. This default option makes your exam questions extremely insecure; it compromises the integrity of your exam questions both for the current semester and for use in future semesters. For this reason, we strongly recommend that during quiz setup you restrict this feedback and apply the precautions below:
Require an access code
We recommend that you require students to enter in a password before they can take an assessment. This will help prevent students from accessing the assessment outside of a proctored environment, such as your classroom or FSU's Center for Assessment and Testing.
Availability Dates
Availability dates specify the window of time in which a student may access the assessment.
Time Limit
Specifying a time limit will force-submit the quiz/exam once the specified time limit has been reached. Make sure to set up exceptions for students needing academic accommodations that include additional time on exams.
Shuffle Answers
This option randomizes the answer choices for each question. This means that no two students will see the exact same answer choice order for multiple choice and multiple answer question types. However, if there are multiple choice questions that include an "all of the above" answer option, then you may not want to shuffle answers.
Use question groups
A question group is specific to the quiz you are creating and randomizes the questions students answer in a Canvas quiz. It allows you to place multiple questions into a single group on a quiz and then select a specific number of those questions that will be chosen at random for students to answer. For example, you can choose to have students answer 5 questions from a question group containing 10 questions.
If you want to use question banks to house assessment questions so that they are organized by topic/chapter or are easily reusable, you can combine the use of question banks and question groups. For example, if you have a set of questions specific to Chapter 2, you would need to make a question bank containing all of your Chapter 2 questions. Then, when you create your specific assessment, you will need to create a question group that pulls a specific number of randomly selected questions from your Chapter 2 question bank.
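If you prefer to script these settings rather than click through the quiz setup page, the Canvas REST API exposes the same options. Below is a minimal sketch in Python; the base URL, course ID, access token, and quiz details are placeholders, and your institution may restrict API token use.

```python
import requests

# All values below are placeholders -- substitute your own Canvas
# instance URL, course ID, and API token.
BASE_URL = "https://canvas.example.edu/api/v1"
COURSE_ID = 12345
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

# Quiz settings mirroring the recommendations above: an access code,
# an availability window, a time limit, shuffled answers, and no
# answer feedback after submission.
payload = {
    "quiz": {
        "title": "Midterm Exam",
        "quiz_type": "assignment",
        "access_code": "octopus-42",          # password to start the quiz
        "unlock_at": "2024-03-15T15:00:00Z",  # availability window opens
        "lock_at": "2024-03-15T17:00:00Z",    # availability window closes
        "due_at": "2024-03-15T17:00:00Z",
        "time_limit": 60,                     # minutes before force-submit
        "shuffle_answers": True,              # randomize answer order
        "hide_results": "always",             # students never review responses
        "show_correct_answers": False,        # never reveal the answer key
    }
}

# Create the quiz; the same parameters sent via PUT to
# /courses/:course_id/quizzes/:id update an existing quiz.
resp = requests.post(f"{BASE_URL}/courses/{COURSE_ID}/quizzes",
                     headers=HEADERS, json=payload)
resp.raise_for_status()
print("Created quiz:", resp.json()["id"])
```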
One effective way to increase security while your students are taking their exams is to arrange to have TAs or other instructors serve as proctors in your classroom. If in-person proctoring isn’t an option for you or any of your students, you may consider using the Honorlock online proctoring service which can be enabled on any exam in Canvas.
For more information about Canvas's quiz settings, see Canvas's Quiz Settings to Maximize Security guide. If you would like any assistance with setting up your quiz options, please do not hesitate to contact our ODL Technical Support team at (850) 644 - 8004 or [email protected]. | https://bbsupport.happyfox.com/kb/article/987-how-to-increase-security-to-minimize-cheating-in-canvas-quizzes/ |
Many teachings define sacred geometry as an ancient science that represents the blueprint of creation, the origin of all forms. It is a science that explores and explains the energy patterns that create and unite everything. More precisely, it defines how the energy of creation is organized.
Geometric codes create all life forms, including DNA molecules, crystals, galaxies, stars, snowflakes, and the cornea of the eye. All forms have spiritual meaning – numbers, patterns, and other shapes. With sacred geometry, we can understand the meaning behind them. Thanks to this teaching, we can conclude that life springs from the same source, the intelligent force that some call God.
The sacred geometric shapes are never fixed on only one form. They are, in fact, in constant fluid transcendence and change from one geometric shape to another with their speed and frequency. Sacred geometry helps us see the infinity in us; that infinity of which we are all part of.
Sacred geometry explains the symbolism and purpose of shapes based on their proportions. Mathematical equations make up our Universe, other dimensions/Universes, and galaxies. It is the process of aligning the heart, mind, and spirit with the Divine Energy/Source.
Common Forms of Sacred Geometry (and Their Meanings)
1. Triangle
It symbolizes balance and harmony and can be associated with body, mind, and spirit. If directed upwards, it indicates raising awareness. When directed downward, it is associated with feminine energy and reproduction.
2. Circle
The circle is a perfect geometric shape. It is the only shape with no end, and it is infinite. The circle symbolizes perfection, wholeness, unity, infinity, and eternity.
3. Sphere
The sphere is the holiest and deepest symbol that contains all wisdom. All other forms are organized from the sphere shape. It is an expression of unity and equality – our planet Earth is a great example.
4. Square
This form represents solid/practical energy, foundations, grounding, strength, stability, and balance. For example, the base of the pyramids is exactly square, so this shape is considered reliable.
5. Spiral
The spiral represents a universal connection. Energy moves in that form. The quote "What goes around comes around" perfectly describes this shape. It defines the relationship between Heaven and Earth, our physical and inner selves.
Plato’s Bodies
Our world is a combination of five sacred forms. These forms are known as Plato’s bodies. Every form corresponds to one of the elements: Earth, Fire, Air, Water and Ether. These forms are believed to have the power to open communication channels to the Deity.
1. Tetrahedron
It is a three-dimensional symbol of balance and stability. It represents the element of Fire that establishes a balance between the spiritual and the physical. Examples of a tetrahedron are pyramids and crystals of the same shape (for example, hematite). This shape corresponds to the chakras of the Solar Plexus.
2. Hexahedron
This shape is known as a cube. It is a three-dimensional symbol that connects with the energies of nature and the Earth. Therefore, it represents an element of the Earth. Examples are crystals such as aquamarine and apatite. Corresponds to the root chakra.
3. Octahedron
It is a three-dimensional shape with self-reflecting features. It represents the element of Air. Through this form, we can attune to our inner nature. Real examples are crystals such as magnetite and fluorite. It is associated with the Heart Chakra.
4. Dodecahedron
It is a three-dimensional symbol associated with the element of Ether (Spirit). It reminds us of our ability to move away from the physical body and connect with our true nature. This form is associated with the Crown Chakra and the Third Eye Chakra.
5. Icosahedron
It is a three-dimensional symbol. It allows us to express our creativity and express our emotions and thoughts easily. It is associated with water and corresponds to the Sacral Chakra.
What are the Different Forms of Sacred Geometry and their Meanings?
Have you heard of the term space architecture? This term refers to Sacred Geometry. Since the ancient past, people have tended to express what they see, hear, and feel via drawings and images.
Nature itself has left its mark through perfect figures such as the cube, the sphere, the circle, and the hexagon. When observed, these figures satisfy both cerebral hemispheres' desire for logical and subjective information.
The basic idea behind sacred geometry is that there are specific patterns that occur throughout the universe. These patterns can be seen in everything from a leaf to a galaxy. The basic belief is that these shapes contain some sort of hidden meaning or power.
Some examples include the golden mean, the Fibonacci sequence, the flower of life, and Metatron’s cube. These geometric shapes can be found in many places around us including artworks like paintings and sculptures as well as natural structures like flowers and crystals.
Different forms of sacred geometry include:
* Mandala – The mandala is a sacred geometry that has been used for centuries by cultures around the world to represent the universe. It is a circular pattern that symbolizes wholeness.
* Fibonacci Sequence – The Fibonacci sequence is a sequence in which each number is the sum of the previous two numbers. It can be seen throughout nature, especially in plants, animals, and human beings: the ratio of plant leaves to branches on a stem, or the ratios of an animal's body parts, are often related to this sequence (see the short sketch after this list).
* Star of David – The six points of the star of David represent the six dimensions of space and time. The two triangles represent the duality of man’s nature as both good and evil. The circle at the center represents God, eternity, and unity.
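Since the Fibonacci item above also ties into the golden mean mentioned earlier, here is a tiny, purely illustrative Python sketch (not from the original article) showing the sequence and how the ratio of consecutive terms approaches the golden mean, roughly 1.618:

```python
# Each Fibonacci term is the sum of the previous two; the ratio of
# consecutive terms converges toward the golden mean (phi ~ 1.618034).
def fibonacci(n):
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])  # sum of the previous two terms
    return seq

fib = fibonacci(12)
print(fib)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
for a, b in zip(fib, fib[1:]):
    print(f"{b}/{a} = {b / a:.6f}")  # ratios settle near 1.618034
```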
Which Are the Most Popular Sacred Geometry Shapes?
* Hamsa – The term Hamsa comes from the Arabic word Khamsah, which means five (the five fingers of the hand). The Babylonians believed that Hamsa was the highest power governing both heaven and earth. Christians named Hamsa the Hand of Mary (mother of Jesus). The Jews named it Miriam's hand (Miriam, Moses' sister). In the Muslim world, Hamsa is the hand of Fatima (Muhammad's daughter). Hamsa symbolizes fulfillment, joy, happiness, permanence, peace, and pleasure. On the other hand, it protects against negative energy and evil.
* Tree of Life – A universal symbol that reminds us that we are not alone and there is something more than ourselves. It is known in most cultures and religions and is described as a map of divine qualities. It symbolizes strength, fertility, spiritual transformation, power, and growth. It dates back to Ancient Egypt, some 3,000 years ago, and is the center of Kabbalah (the mystical Jewish tradition). The tree's roots are connected to the branches at one main point, displaying the connection between heaven and earth, the physical and the spiritual. Humans are like the tree of life: the point of intersection between us and the divine source itself.
* Metatron Cube – It represents God's will within our world. The Archangel Metatron (the angel of life) supervises the energy and flow in this mystical design. The Metatron Cube contains all forms that exist in the Universe. Those shapes are the basis of all physical matter and are known as the Platonic solids. Plato connected them with the physical elements and the spiritual world. They appear in everything from human DNA to crystals. This shape allows for harmony and balance.
* Yin and Yang – It is one of the most famous and widely represented symbols in Chinese culture, and beyond. It teaches us that there are always two opposing sides in everything. It describes two principles, each of which contains an element of the other. The masculine principle (Yang) represents the sacred side, air, and fire. The feminine principle (Yin) represents the dark/passive side, water, and earth.
In Yang, we recognize the Sun, the south, the summer, the creativity of the mountain, fire, and heat. In Yin, we recognize the opposite concepts, such as the moon, night, earth, north, cold, calm, and the valley. It is not difficult to conclude that Yang connects directly with life, while Yin is associated with dying.
This separation of concepts gives a picture of the importance of Yin/Yang energy. The best display of this energetic perfection is dawn.
I hope you’re finding this information to be helpful to you. In my quest for a lifestyle by design, I enjoy exploring topics unknown to me and sharing them with you as one of my community members. Please reach out to me any time to share your thoughts, ideas, and opinions about the world we live in.
I’m USA Today and Wall Street Journal bestselling author, entrepreneur, and marketing strategist Connie Ragen Green, cultivating habits of excellence every day with my words and deeds. Come aboard for my Action Habits Challenge at no cost or obligation and continue a journey that you’ve most likely already begun. | https://mondaymorningmellow.com/sacred-geometry/ |
A real-time digital auto-radiography Micro Imager from LabLogic Systems will figure in a presentation to be given by the University of Sheffield's Department of Animal and Plant Sciences at the 15th Congress of the Federation of European Societies of Plant Biology (Lyon, 17-21 July 2006).
The Department is investigating the uptake and intercellular transport of radio-labelled amino acids fed via the transpiration stream to Arabidopsis thaliana leaves and whole plants.
Plant xylem sap carries the supply of nitrogen from root to shoot in the form of nitrate and a range of amino acids. This movement is driven by transpiration, resulting in the arrival of the sap to mature leaves despite the far higher requirement for nitrogen in sink tissues such as developing leaves and fruits.
In order to study uptake and translocation of amino acids from the transpiration stream, the Department has employed the Micro Imager to image their distribution after short duration pulse-chase feeding experiments.
The radiolabel is introduced to the petiole of individual excised leaves, or to whole plants via cut roots to allow direct uptake into the root xylem.
Sixteen different amino acids have been imaged to date, revealing characteristic patterns of uptake and distribution. These results will be discussed at the meeting in relation to the in vivo function of amino acid transporters and xylem to phloem transfer of amino acids.
Figure showing the localization of four different 3H-labeled amino acids in leaves (top row). The bottom row of images show 14C sucrose fed in simultaneously with the amino acid. | https://lablogic.com/news/2006/06/auto-radiography-micro-imager-at-plant-biology-congress |
I’ve always associated the color yellow with happiness and positivity. It’s bright and warm and seems to emit a special light separate from its already vibrant visage.
It’s no surprise to me that I was washed in those feelings of happiness when I paid a visit to the almost fully bloomed sunflower field on Route 68. When I moved to Yellow Springs earlier this May, I heard occasional mutterings and musings about the infamous field that blooms late summer every year. As a lover of flowers and sunflowers in particular, this is something I’ve really looked forward to experiencing.
Just as it seemed time for summer to retreat and fall to take charge, the flowers opened up towards the sun, painting the green space with endless petals that mimic watercolors. It feels like a final sendoff from summer, but also a warm welcome to autumn. As we got out of the car and strolled across the narrow street to the field’s personal farewell/welcome bash, I couldn’t help but smile.
I was already excited because I have been looking forward to this visit for months, but I was not expecting the immense joy I encountered from looking out and seeing so many people sprinkled throughout the blooms with their own smiles painted across their faces. It was weirdly emotional for some reason. Everybody was there to witness something natural and beautiful and everything just felt like it was radiating positivity. I think it would be very difficult to be anything but happy while standing in a sea of sun-soaked flowers.
As I tried to assess the entry point kindest to the field’s well-being, I made my way to the back, where the stalks seemed largely unbothered and a little more spaced out. Admittedly I am not always the most gentle and graceful person, so I wanted to take extra care to make sure I was being respectful of the land and the owners. It really was a privilege for me to get to visit this place and I was not about to disrespect the opportunity I had.
While I waded through the narrow walkways towards the heart of the field, I was trying very hard to ignore the abundance of bees drifting from flower to flower all around me. They could not have cared less about me at that point though, not with all the pollen their little bee hearts desire available en masse. After all, they were here for the flowers just like I was.
When I made it to my desired spot, I took a few minutes to snap some pictures and take in how truly beautiful the rows of newly opened blooms are. As the sun fell, I made my way to the exit, content and ready for dinner. I am so grateful for the opportunity to witness such a collective feeling of positivity and beauty, all thanks to the generosity of Dave and Sharen Neuhardt (the owners of the field) as well as the Tecumseh Land Trust for making the property open to strangers.
*Jessica Sees is an Ohio University student interning with the News.
| https://ysnews.com/news/2017/09/blog-making-moves-sea-of-sunflowers |
By William Chapple
In December 2020, the Ontario government passed Bill 229 titled the Protect, Support and Recover from COVID-19 Act [Budget Measures]. Contained within this Bill is Schedule 6, which outlines significant amendments to the Conservation Authorities Act. These changes include empowering the Ministry of Municipal Affairs and Housing to overturn decisions made by conservation authorities to grant or deny development permits. The Ministry reserves the right to issue a Minister’s Zoning Order (MZO) for development, forcing conservation authorities to issue permits even if evidence shows the development could negatively affect human safety and protection of species at risk. The ruling removes conservation authorities as a public body, limiting them from appealing land use decisions. Developers may also build within key ecological features that would otherwise be protected, if they pay a fee.
To understand the implications of this ruling, it is important to comprehend the role and purpose of conservation authorities in Canada. In 1946, the Conservation Authorities Act was passed as concerns were rising about the state of renewable resources in the province. Poor resource management practices resulted in flooding and degradation. The provincial government adopted the integrated watershed management approach to provide watershed management and planning authority as we see presently in the 36 conservation authorities across Ontario.
According to an analysis of Bill 229 by the Canadian Environmental Law Association, “the majority of the Schedule 6 amendments are regressive in nature and are completely contradictory to fulfilling … the purpose of the Conservation Authorities Act.” As up to 95 per cent of Ontarians reside within watersheds, the responsible management of said watersheds should be scientifically driven to protect residents from the threats of flooding, erosion, and other safety concerns. The public did not get a chance to support or dispute Schedule 6 before it was passed in December because these changes were proposed as part of a Budget Measures Act, which do not require public consultation under the Environmental Bill of Rights, 2007.
In the case of Haliburton County, there is no conservation authority in charge of efforts for the region. Because of this the County handles land use decisions, including issuing development permits, as guided by scientific research completed by independent organizations. The lack of conservation authority does not mean, however, that Haliburton County will not be affected by Schedule 6 of Bill 229.
The Haliburton Highlands Land Trust (HHLT) is a privately owned, volunteer-based organization with similar goals of conservation and protection of natural systems as conservation authorities. Greg Wickware, Chair of the HHLT, describes the possible repercussions this legislation could have on the Haliburton area.
“Schedule 6 opens the door to replacing science-based watershed management with politically motivated decision-making and it puts the province’s remaining wetlands and forests, and the wildlife that depend on those habitats, at risk,” he explains.
In the area, some wetland systems have been granted protection as Provincially Significant Wetlands (PSW). Wickware highlights the danger Schedule 6 poses to these significant features.
“PSWs have the highest degree of protection, but with a [Minister’s Zoning Order (MZO)] that protection can be eliminated. Protecting wetlands reduces the threat of flooding and replenishes our supply of drinking water. Wetlands capture and hold deep pockets of water in our landscape … and replenish our groundwater tables.” He goes on to express extreme concern that these wetlands and their ecological function could be negatively impacted by MZOs and decision makers without the scientific and technical expertise needed to make rational land use decisions.
Unfortunately, privately owned land trusts such as the HHLT do not have the financial or technical capacity to complete the same work as conservation authorities, and do not act as land use planners or managers. Still, the County relies on scientific research studies completed by land trusts to guide responsible decision making. “Improved wetland mapping, including floodplain mapping is extremely important,” Wickware says as he describes the HHLT’s current projects.
“HHLT’s new wetland mapping layer is an important tool as it has been acknowledged by the County as the best accurate mapping for this and other planning decisions,” he said.
Because there are no conservation authorities in Haliburton, the HHLT supports the County in making informed decisions that will protect provincially and biologically significant features and human security. Land trusts also play an important role in helping the federal government in achieving the goal of conserving 30 per cent of Canada’s land and oceans by the year 2030. Bill 229, Schedule 6 seems to defy this ideal.
All five properties under the jurisdiction of the HHLT are protected as ecologically or biologically significant. With the threat of the MZO as described above, Wickware says, “Land Trusts will certainly have a role to play as zoning order permits are issued on lands that fall within their communities.”
Land trusts, especially those working without the support of a local Conservation Authority, must be very vigilant to the possible implications of Schedule 6. Wickware suggests this ruling will heighten the alert system of groups who are passionate about conservation. These people will not make it easy for a minister to override scientific recommendations on a consistent basis, Wickware says. The Conservation authorities will continue to provide recommendations based on scientific evidence, even though much of their power to act on these recommendations has been narrowed.
Wickware notes that the HHLT prides itself in providing volunteer opportunities as well as recreational facilities for locals. A healthy environment contributes to a healthy body and mental state especially during the pandemic, and the HHLT offers many naturalized areas for public use in their Dahl Forest and Barnum Creek Nature Reserve properties.
If you are a budding citizen scientist, volunteering with the HHLT is a great chance to make a difference for the environment in your community. Information about these opportunities is available on the HHLT website and eNewsletter. In the face of environmental challenges like Bill 229, Schedule 6, it is up to us to come together and do what we can to protect the important ecosystems of Haliburton County and area.
For more information on HHLT, visit www.haliburtonlandtrust.ca. | https://haliburtonecho.ca/recent-provincial-legislation-narrows-the-scope-for-environmental-protection-efforts/ |
The Oak Ridges Moraine (Moraine) provides many benefits to southern Ontario, one of the most important being that it provides drinking water to over 250,000 people. Stretching 160 kilometres from the Trent River to the Niagara Escarpment, the Moraine’s ecological functions are critical to the environmental health of the region, which is why the way its land and water are used is extremely important. Protection of the environmentally precious Moraine goes back many years; however, it wasn’t until May of 2001 that the Provincial Government began to recognize that, in order to protect the Moraine from increasing residential, industrial, commercial and recreational pressures, legislation was going to be necessary.
On May 17, 2001, the Minister of Municipal Affairs and Housing introduced the Oak Ridges Moraine Protection Act. The Act established a six-month suspension of development on the Moraine to allow the Government to consult on how to best to protect it in the future. Following the passage of the Act, an Advisory Panel of key Moraine stakeholders was developed to advise the Minister on how best to move forward.
Following a series of stakeholder and public consultation meetings, the Minister announced a comprehensive strategy for the Moraine on November 1, 2001. This strategy included the Oak Ridges Moraine Conservation Act, which was passed on December 13, 2001, the Oak Ridges Moraine Conservation Plan (ORMCP) and the Oak Ridges Moraine Foundation (ORMF).
The purpose of the ORMCP is to provide land use and resource management planning direction to provincial ministers, ministries, agencies, municipalities, municipal planning authorities, landowners and other stakeholders to protect the Moraine’s ecological and hydrological features and functions. It is the regulatory tool meant to protect, preserve and restore the Moraine.
The ORMF is a non-regulatory governing body meant to complement the goals of the ORMCP. Established on March 11, 2002, the ORMF was given an initial grant of $15 million from the Provincial Government to support stewardship, education, research, land securement and trail projects across the Moraine. Between 2002 and 2008 the ORMF distributed in excess of $14 million in grants and, in collaboration with Moraine partners, leveraged an additional $35.8 million in funding for 177 projects.
Although the ORMF’s granting role is currently suspended, we remain dedicated to protecting the Moraine and still have a very important role to play. The ORMCP is scheduled for review in 2015 and the ORMF is committed to compiling the reports and information necessary to ensure the Moraine is continuously protected. Currently we have formed a strategic partnership with the Conservation Authorities Moraine Coalition to examine the effectiveness of the ORMCP. Together, the efforts of both the ORMF and the CAMC will help the Province to make informed decisions about any changes required to the ORMCP to ensure it continues to protect and enhance the environment and water resources on the Oak Ridges Moraine.
| http://moraineforlife.org/about/what.php |
By Jazmin Mitchell –
Why do students choose to live off campus? Getting away from parents, living on your own, getting a job, education, and college parties are some of the main reasons young adults want to live on campus. To some, though, the idea of a roommate or communal bathrooms just does not appeal. To others it’s the price, or not being ready to leave the “nest”: they are not ready to leave their families, or perhaps not ready for such a transition. “I chose to live off-campus because it was cheaper and I grew up here so I am living with family,” first-year student Leesha Baumann stated. Because school is so expensive in today’s society, many students are choosing to live off campus. Baumann commented further that she likes living off-campus because it gives her a lot of freedom, “while still not being completely on my own.” A majority of RMC students live off campus, and not all of them are first-year students.
Living off campus has its challenges, such as money for commuting, not being allowed in the cafeteria without a meal plan, not being able to stay out super late because of the drive home, and missing out on certain on-campus activities or moments with friends because of other obligations. Off-campus students have far more responsibilities, which can be time-consuming and stressful at times. Students who live off campus in apartments also have the added responsibility of paying rent and the other bills that come with renting rather than living with their families.
When asked if it is hard to make friends and get involved with the “campus lifestyle,” first-year student Katherine Blackford replied, “It’s different living off campus. We are going to have to actively seek out friends and activities. It is difficult to find time to hang out with friends, but it is feasible to have the campus life that everyone talks about.” People think of the “campus lifestyle” as being immersed in the school and all its activities and events: living on campus, partying, staying up late, study sessions, sports, and clubs. As Katherine stated, it is “feasible” to do all of those things. It might be a little harder, but it is definitely possible. Some of the most notable differences for off-campus students are that they don’t get meal plans and that they live at home with their families or in their own places, shared with friends or by themselves. They aren’t, however, completely cut off from social, academic, and career opportunities.
“You still have classes with lots of different people and have the opportunity to get to know them just like on-campus students,” Baumann said. Baumann doesn’t think living off campus prevents you from making friends and having fun. These two students enjoy living off campus and are just as happy with their college lifestyle as on-campus students are. Neither of them regrets the decision to go to RMC, because they still get to attend the college of their choice.
Get to know your fellow off-campus classmates, because they want to get to know you. Off or on campus, we are a family! | http://summit.rocky.edu/the-effects-of-off-campus-living/ |
---
abstract: 'The neutrino flavour oscillations hypothesis has been confirmed by several experiments, all of which are based on the observation of the disappearance of a given neutrino flavour. The long baseline neutrino experiment OPERA (Oscillation Project with Emulsion tRacking Apparatus) aims to give the first direct proof of $\tau$ neutrino appearance in a pure muon neutrino beam (CERN Neutrinos to Gran Sasso beam). In 2008 the OPERA experiment started full data taking with the CNGS beam, and around 1700 interactions have been recorded. The experiment status and the first results from the 2008 run are presented.'
address: 'Centre for Research and Education in Fundamental Physics, Laboratory for High Energy Physics (LHEP), University of Berne.'
author:
- 'Guillaume Lutter on behalf of the OPERA Collaboration.'
title: 'The OPERA experiment: Preliminary results from the 2008 run'
---
Introduction
============
The OPERA experiment, located in the Gran Sasso underground laboratory (LNGS) in Italy, is a long-baseline experiment designed to obtain an unambiguous signature of $\nu_{\mu} \rightarrow \nu_{\tau}$ oscillations in the parameter region indicated by the atmospheric neutrino experiments [@proposal]. The detector, developed by the international OPERA collaboration, is designed to search primarily for $\nu_{\tau}$ appearance in the high energy $\nu_{\mu}$ CERN to Gran Sasso (CNGS) beam, 730 km from the neutrino source. It may also explore the $\nu_{\mu} \rightarrow \nu_e$ oscillation channel and improve the limits on the third, as yet unknown, mixing angle $\theta_{13}$.
The direct $\nu_{\tau}$ appearance search is based on the observation of charged current (CC) interaction events with the $\tau$ decaying through leptonic and hadronic channels. The principle of the OPERA experiment is to observe the $\tau$ trajectories and the decay products. Because of the weak neutrino cross section and the short $\tau$ lifetime, the OPERA detector must combine a huge mass with a high granularity, which is achieved by using nuclear emulsions.
The OPERA detector
==================
The OPERA detector is composed of two identical parts called Super Module (SM) [@techpaper]. Each SM has a target section and a muon spectrometer \[Fig. 1\].
[Figure 1: The OPERA detector with its two Super Modules]
The target section is composed of 29 vertical supporting steel structures called walls. The walls contain the basic target detector units called Emulsion Cloud Chambers (ECC) brick. The total OPERA target contains 150036 ECC bricks with a total mass of 1.25 ktons. Each ECC brick is a sequence of 57 emulsion films interleaved with 56 lead plates (1 mm thick). An emulsion film is composed of a pair of 44 $\mu m$ thick emulsion layers deposited on a 205 $\mu m$ plastic base. The ECC bricks have been assembled underground at an average rate of 700 per day by a dedicated fully automated Brick Assembly Machine (BAM) and the OPERA target has been filled by using two automated manipulator systems (BMS).
Downstream of each brick \[Fig. 2\], an emulsion film doublet called Changeable Sheet (CS) is attached in a separate envelope. The CS doublet can be detached from the brick and analysed to confirm and locate the tracks produced in the electronic detectors by neutrino interactions. The CS doublet is the interface between the ECC brick and the electronic detectors. Indeed, each wall is interleaved with a double layered wall of scintillator strips. This electronic detector, called the Target Tracker (TT), also provides a trigger for the neutrino interactions.
The spectrometer allows the determination of the muon charge and momentum by measuring the curvature in a dipolar iron magnet. Each spectrometer is equipped with bakelite RPC chambers and High Precision Trackers (HPT) composed of drift tubes. The spectrometer reduces the charge confusion to less than 0.3%, measures the muon momentum with a resolution better than 20% for momenta below 50 GeV, and reaches a muon identification efficiency of 95%.
[Figure 2: An ECC brick with its Changeable Sheet doublet]
The CNGS beam and projected results
===================================
The CERN to Gran Sasso (CNGS) $\nu_{\mu}$ beam is designed and optimized to maximize the number of charged current interactions of $\nu_{\tau}$ produced by oscillation at LNGS. With $4.5 \times 10^{19}$ protons on target per year, the number of CC and neutral current (NC) interactions expected in the Gran Sasso laboratory from $\nu_\mu$ are respectively about 2900 per kton per year and 875 per kton per year. If the $\nu_\mu \rightarrow \nu_\tau$ oscillation hypothesis is confirmed, the number of $\tau$’s observed in the OPERA detector after 5 years of data taking is expected to be 10 events with a background of 0.75 events for a $\Delta m^2= 2.5 \times 10^{-3}eV^2$ at full mixing. The OPERA detector events are synchronized with the CNGS beam using a sophisticated GPS system.
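For reference, the expected $\tau$ rate quoted above follows from the standard two-flavour vacuum oscillation probability; the formula is not spelled out in the text, so this is a minimal sketch added for clarity:

$$P(\nu_\mu \rightarrow \nu_\tau) = \sin^2(2\theta_{23})\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV^2}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)$$

With $L = 730$ km, $\Delta m^2 = 2.5 \times 10^{-3}\,\mathrm{eV^2}$, full mixing ($\sin^2 2\theta_{23} = 1$), and taking a mean CNGS beam energy of roughly 17 GeV, this gives $P \approx 2\%$; folded with the $\nu_\tau$ CC cross section and the detection efficiencies, it leads to the roughly 10 expected $\tau$ events quoted above.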
The OPERA strategy
==================
For a given event, the electronic detectors give a map of probable locations for the interaction brick. The brick with the highest probability is extracted by the BMS. Then the CS doublet is detached from the brick and developed in the underground facility. The two emulsion films are scanned using fast automated microscopes. If the track candidates found on the CS doublet match the electronic data, the brick is exposed for 12 hours to cosmic rays in order to help with the film-to-film alignment in the brick. Subsequently the brick is developed in an automated facility and sent to the scanning laboratories either in Europe or in Japan. The brick scanning is done by computer-driven fast microscopes. The vertex finding strategy consists in following back, film by film, the tracks found on the CS doublet until they stop inside the brick. The scanning speed can reach up to 20 $cm^2/h$ while keeping a good spatial and angular resolution. To confirm a stopping track, an area scan of several $mm^2$ around the stopping point is performed for 5 films upstream and downstream. Then a vertex interaction can be reconstructed and a topology compatible with the decay of a $\tau$ lepton is searched for.
Status of the 2008 run
======================
After a short commissioning run in 2006, CNGS operation started in September 2007 at rather low intensity, with 40% of the total OPERA target mass. Since the CNGS encountered operational problems, the physics run lasted only a few days. During this run $0.082 \times 10^{19}$ protons on target (p.o.t.) were accumulated and 465 events were recorded, of which 35 were in the target region.
From June to November 2008, $1.782 \times 10^{19}$ p.o.t. were delivered by the CNGS [@2008run]. OPERA collected 10100 events and among them 1663 interactions in the target region where 1723 were expected. The other events originated in the spectrometers, the supporting structures, the rock surrounding the cavern, the hall structure. All electronic detectors were operational and the live time of the data acquisition system exceeded 99%.
For the events classified as CC interactions in the target \[Fig. 3\], the distributions of the muon momentum and of the muon angle in the vertical (y-x) plane with respect to the horizontal (z) axis are compared to the Monte Carlo (MC) expectation. The beam direction angle is found to be tilted by 58 mrad, as expected from geodesy.
[Figure 3: Muon momentum and muon angle distributions for CC events in the target, compared with the MC expectation]
By the beginning of April 2009, 1038 bricks had been developed and around 700 events had been located. The brick finding efficiency is $88.3 \pm 5\%$, and the vertex finding efficiency in the selected bricks is between 84-95% for CC events (93% predicted by MC) and between 70-91% for NC events (81% expected from MC).
Among the located events, 7 present a charm-like decay topology, in agreement with the 9.29 events predicted by the Monte Carlo simulations. Charm production and decay topology events are of great importance in OPERA. Indeed, charm decays exhibit the same topology as $\tau$ decays, and they are a potential source of background if the muon at the primary vertex is not identified. A charm-like topology is shown in Fig. 4, where a track presents a decay kink.
[Figure 4: An event with a charm-like decay topology: one track presents a decay kink]
Conclusion
==========
During the 2008 CNGS run all the electronic detectors performed well. The OPERA strategy has been validated and the vertex location was successfully accomplished for CC and NC events. In the analysed data sample, 7 events with a charm-like topology were found. This is consistent with expectation and shows the success of combining topological and kinematical analyses. The 2008 run constitutes an important milestone for the OPERA experiment. For the 2009 run, around $3.5 \times 10^{19}$ p.o.t. are expected. The integrated statistics should be sufficient to expect the observation of two $\tau$ events and to give a precise estimation of the detector efficiency, background and sensitivity.
References
==========
[1] M. Guler et al. (The OPERA Collaboration), Experimental Proposal, CERN 98-02, INFN/AE-98/05 (1998).
[2] R. Acquafredda et al., The OPERA experiment in the CERN to Gran Sasso neutrino beam, JINST 4 P04018 (2009).
[3] N. Agafonova et al., The detection of neutrino interactions in the emulsion/lead target of the OPERA experiment, submitted to NJP.
The answer to this question, sadly, is a resounding yes. The process even has a name; ‘language attrition’ and is defined as the process of losing a native, or first language. This loss is generally due to isolation from other speakers of your native language, i.e through a move to a different country, as well as the introduction of a second language.
The process of language attrition is much more common in children. If you have learned your native language fully from childhood and speak it up until you are around 12 years old, this language will normally be quite stable and difficult to erode. This doesn’t mean you wouldn’t experience any symptoms of language attrition if you were to move to a new country and begin to speak a new language, but it is unlikely you would truly forget your mother tongue. However, if a child is moved from the native community before the age of 12, when language has stabilised, it is definitely possible for them to forget their first language to a large extent, or even entirely.
Language attrition occurs because two languages are competing for mental resources. For example, when an Arabic speaker begins to learn English, that person has to put quite a bit of mental energy not to use an Arabic word or Arabic sentence structure when they are speaking English. When they need to focus on a specific English word, they have to mentally block the Arabic equivalent from their brains. When they then want to use that Arabic word again, they have to put further effort into overriding the mental block that they have put in place. Monika Schmid, the leading researcher on language attrition currently based at the University of Essex, explains that “it’s not that you’re forgetting a language, what is happening is that it has been buried and you have to dig it up again and that takes quite a bit of energy.” However, the total loss of a native language in children comes when they don’t expend that energy to remember their first language. If they totally stop speaking their first language, or even speak it with less regularity than their second language then they will likely substantially or totally forget it.
However, there is some evidence to suggest that the language we learn at an early age leaves some kind of trace on the brain long after it is believed to be completely forgotten. A 2014 study found that Chinese children adopted at 12 months by French-speaking families in Canada were able to respond to “Chinese tones”. The study involved girls between 9 and 17 who were split into three groups. Group One consisted of girls who only spoke French and had never been exposed to Chinese. Group Two consisted of bilingual girls, who spoke both French and Chinese, and Group Three was made up of Chinese adoptees who only spoke French. All groups listened to “pseudo words” that used tones found in Chinese languages. Interestingly, this study found that the bilingual girls AND the adopted girls (who had been exposed to Chinese in their early years) had the same brain activity when listening to the pseudo words.
However, Schmid notes that although there is evidence that native language is able to stay with children in some strange way, this doesn’t necessarily mean that such children would have an advantage if they decided to relearn their forgotten language. She concludes that, although the child may be better than their peers at pronunciation with no prior knowledge, for grammar and vocabulary the advantages would be minimal. “There are a few neuroimaging studies trying to find residual knowledge through brain scans, but they, too, show that if such knowledge exists at all, it is very subtle and probably will not be very useful for the purpose of re-learning.”
One of the most interesting things about language attrition is that the most important factor in retaining a language seems to be how you feel about it. Emotional connection to the language appears to have more impact on the degree of attrition than more obvious factors such as age at the time of emigration and amount of use of the language. Studies have found that a positive attitude towards your native language can help with its maintenance, and a negative one can cause and speed up attrition. This was most evident in a study of German-Jewish refugees, where a clear link emerged between the amount of Nazi persecution individual speakers had suffered and the degree of loss of their first language. Refugees who thought sentimentally about the German language, as it was the only thing their parents had given them that the Nazis had not been able to take away, showed much less evidence of language attrition. Refugees who viewed German as the language of their persecutors, and therefore despised it, showed a much more extreme level of attrition.
Have you ever experienced language attrition? Let us know on our social media pages: Facebook, Twitter, Instagram and LinkedIn! | http://www.itltranslations.com/uncategorised/can-you-forget-the-first-language-you-ever-learn/ |
Based On E-commerce Supply Chain Management
Posted on: 2005-12-13
Degree: Master
Type: Thesis
Country: China
Candidate: Y Fan
Full Text: PDF
GTID: 2206360122986081
Subject: Business Administration
Abstract/Summary:
Since the 1990s, the fast development of computer and network technology has had a great impact on SCM, especially the emergence of Internet technology, which made it possible for people to engage widely in the electronic trade of commodities and services via the internet. This not only expanded the range of trade, but also effectively shortened trade time and transaction costs. To face the challenging internal and external environments of the e-commerce era, enterprises adopted e-commerce strategies to improve competitiveness. In today's world, the traditional management mode of enterprises is unable to adapt to the new competitive situation. To respond swiftly to market requirements, it becomes inevitable to reengineer business processes with modern network technology, optimize SCM and make the transition to e-commerce. SCM is based on modern network information technology. E-commerce is an indispensable modern network information technology for enterprises seeking to lower costs and increase benefits, and it provides an effective means to improve SCM. As a totally new business mode, e-commerce can fundamentally change traditional enterprise decision, production and marketing modes, enhance the information-based degree of enterprises and of other enterprises in the chain, and promote the response capability and operational efficiency of the whole chain, all of which plays an important role in pushing forward the development of SCM. The combination of SCM and e-commerce will be the direction of the future development of logistics.
This thesis is written against this background. It starts with an introduction to and study of the theory of Supply Chain Management. An approach for traditional enterprises to achieve Business Process Reengineering (BPR) is put forward, and the functional scope of Enterprise Resource Planning (ERP) and its implementation technology are discussed. The key factors and practical means to develop SCM based on Electronic Commerce (EC) are analyzed, with examples given for illustration. Meanwhile, the thesis also points out the challenges, difficulties and countermeasures that e-commerce and SCM now face, and offers some design ideas and applications for an EC platform serving SCM. Finally, the thesis introduces the progress of EC and SCM in China, points out existing problems, and proposes solutions.
In the future, competition among enterprises will take the form of competition among strategic alliances of enterprises, that is, competition among supply chains. The development of e-commerce brings a revolution to the information flow of supply chains. The emergence of e-commerce is bound to exert a great and far-reaching influence on the SCM of enterprises.
Keywords/Search Tags: Supply Chain Management (SCM), E-commerce (EC), Business Process Reengineering (BPR) | https://www.globethesis.com/?t=2206360122986081
Located in Hveragerdi, Hotel Eldhestar is convenient to the LA Art Museum and the Geothermal Park. The hotel is in close proximity to Hveragerdi Church and the Reykjadalur Valley.
Rooms
Stay in one of 37 guestrooms featuring LED televisions. Rooms have private patios. Complimentary wireless Internet access is available to keep you connected. Bathrooms have bathtubs or showers and hair dryers.
Amenities
Take in the views from a terrace and a garden and make use of amenities such as complimentary wireless Internet access. Guests can catch a ride to nearby destinations on the complimentary area shuttle.
Dining
Enjoy a satisfying meal at a restaurant serving guests of Hotel Eldhestar. A complimentary buffet breakfast is served daily from 7 AM to 10 AM.
Business, Other Amenities
Featured amenities include a business center, luggage storage, and a safe deposit box at the front desk. Free self parking is available onsite.
Hotel Facilities
- Safe-deposit box at front desk
- Restaurant
- Luggage storage
- One meeting room
- Smoke-free property
- Free area shuttle
- Business center
- Free breakfast
- Free WiFi
- Tours/ticket assistance
- Total number of rooms - 37
- Terrace
- Garden
- Free self parking
- In-room accessibility
- Number of spa tubs - 2
- Accessible bathroom
- Roll-in shower
Room Facilities
- Free WiFi
- Bathtub or shower
- LED TV
- Rollaway/extra beds available
- Desk
- Private bathroom
- Free cribs/infant beds
- Hair dryer (on request)
- Daily housekeeping
- Patio
- Phone
Hotel Policy
Know Before You Go
- No pets and no service animals are allowed at this property. | https://www.zuji.com.au/accommodation/hotel-eldhestar-hveragerdi-iceland/ |
An analysis of a Neanderthal's fossilised hyoid bone – a horseshoe-shaped structure in the neck – suggests the species had the ability to speak.
This has been suspected since the 1989 discovery of a Neanderthal hyoid that looks just like a modern human's.
But now computer modelling of how it works has shown this bone was also used in a very similar way.
Writing in the journal PLOS ONE, scientists say its study is "highly suggestive" of complex speech in Neanderthals.
via BBC News – Neanderthals could speak like modern humans, study suggests.
This article discusses new evidence that Neanderthals could speak, and could speak fluently. And since we and Neanderthals diverged evolutionarily some 400,000 years ago, it implies that speech and language are at least that old.
This doesn’t particularly surprise me. While I’ve read about a lot of theories that speech was only around 50,000 or so years old, that’s never made a lot of sense to me for a few reasons.
- Speech and language are well developed features. The idea that they sprang out of nowhere a few tens of thousands of years ago, without a lot of time spent going through intermediary stages, seems implausible.
- Monkeys communicate with each other about predators and such with various screeches. These screeches aren't language in the sense of having sentences and semantics, but they are communication, and they very much strike me as early protolanguage, a stage our remote ancestors probably went through.
- Part of our anatomy seems evolved for speech, and now it looks like part of Neanderthal anatomy is too. We also have parts of our brain dedicated to speech and understanding language. Evolution of these features took time. Of course, these could be repurposed functions, but then what were the original functions?
All of this, it seems to me, points to speech being very ancient. The Neanderthal evidence seems to push it back at least half a million years. My own (admittedly inexpert) intuition is that speech development probably ran more or less in parallel with the development of sophisticated tools, meaning it developed gradually over millions of years. | https://selfawarepatterns.com/2013/12/22/bbc-news-neanderthals-could-speak-like-modern-humans-study-suggests/ |
Every year, as many as 1,100 college students die by suicide. Most of these students were not in treatment at the time, according to the Jed Foundation, an organization that works to promote emotional health and prevent suicide among college and university students.
In November, 14 Augusta staff members were certified as instructors for QPR Suicide Prevention Gatekeeper Training. Prior to this training, the university did not have a formalized suicide prevention program, but QPR certification marks the beginnings of a formalized prevention program.
QPR stands for question, persuade and refer. It was developed on the fundamentals of CPR. One of the principles of CPR is that if one in four people is trained to detect the signs that a person is not breathing, then the odds are high that someone will be available to help. When this idea is applied in QPR training, the focus is on preventing a mental health crisis rather than a physical emergency.
“I’m usually the first person to meet with students when we get a report that they may be in distress,” said Gina Thurman, assistant dean of student life. “So, I was very interested in attending the training.”
After completing the 45-minute training, Augusta faculty and staff will be able to detect signs of a person being distressed, depressed or at risk for suicide. They should know how to ask appropriate questions of that person, be able to persuade that person to get help and refer the individual to a place where they can get help.
The staff members currently certified are authorized to train others in QPR, and they’re planning to begin hosting QPR training sessions in the spring. Dr. Mark Patishnock, director of the Counseling Center and a licensed psychologist, has asked each of those 14 Augusta faculty and staff members to hold two presentations next semester.
“If each one of us has four presentations a year, two a semester, in which 25 people attend, that’s 1,400 people who get trained in one year,” he said. “It takes a small body of people and a small commitment of time. You can reach a lot of people with a basic message, in a small period of time, with basic resources.”
The goal is to train as much of the Augusta campus community as possible. The training is designed for friends, family, faculty and staff to intervene, since they are often the first line of defense.
“If we want to prevent suicide, students struggling with mental health issues are far more likely to see other students or faculty and staff than they are to see a licensed mental health professional,” Patishnock said. “It’s very rare for someone who commits suicide to have been actively in treatment. So, if we want to help people, we’ve got to go outside of the Counseling Center.”
QPR training offers suggestions on how to ask students if they're suicidal. It also includes specific recommendations for referrals and follow-ups. QPR also helps people understand what percentage of students are impacted by suicide and provides information about recognizing verbal, behavioral and situational cues that indicate someone might be at increased risk.
“It’s actually basic information that anyone can use, even if you don’t have a mental health background,” Thurman said. “We’d like to get as many people as possible trained.”
If you’re interested in attending a training session or learning more about QPR, visit the Counseling Center website at http://www.gru.edu/admin/counseling/ or contact Dr. Patishnock at [email protected]. | https://jagwire.augusta.edu/faculty-and-staff-participate-in-suicide-prevention-training/ |
Snow3G is one of the two algorithms (the other being AES) used in the LTE (4G) mobile network. The specifications for LTE come from the standards body called 3GPP and include the specification and a reference implementation in C of the Snow 3G algorithm. There is an algorithm for confidentiality (encryption/decryption), and an algorithm for integrity.
The Snow3G confidentiality algorithm (also called f8) is a stream cipher that is used to encrypt/decrypt blocks of data under a 128 bit confidentiality key and IV (initialization vector). A stream cipher essentially generates a (pseudo) random sequence of bits of length equal to the length of the input, and the encrypted output is the xor (exclusive or) of the random sequence and the input. Snow3G is a word-oriented stream cipher as it outputs 32 bits at a time.
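As a minimal sketch of this in Haskell (an illustration of the xor construction only, not the 3GPP reference code):

import Data.Bits (xor)
import Data.Word (Word32)

-- Encryption and decryption are the same operation: xor the input
-- words with the keystream words, 32 bits at a time.
applyKeystream :: [Word32] -> [Word32] -> [Word32]
applyKeystream keystream = zipWith xor keystream

-- Decryption reuses the same function, since (p `xor` k) `xor` k == p.

This is why a stream cipher only needs one core primitive, the keystream generator; both directions of the cipher fall out of the xor.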
The Snow3G integrity algorithm (also called f9) computes a 32-bit MAC (Message Authentication Code) of a given input message under a 128-bit integrity key and IV.
An LFSR (linear feedback shift register) is a shift register whose input bit is a linear function of its previous state — the feedback is defined by a primitive polynomial over a finite field. The feedback function is carefully chosen to produce a sequence of (pseudo) random bits with a very long cycle.
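A toy illustration of the feedback idea (this uses a classic 16-bit maximal-length tap set for demonstration, not Snow3G's actual 32-bit field arithmetic):

import Data.Bits (shiftR, shiftL, xor, (.&.), (.|.))
import Data.Word (Word16)

-- One clock of a 16-bit Fibonacci LFSR with taps at bit positions
-- 0, 2, 3 and 5: the new bit is the xor of the tapped bits, and the
-- register shifts right by one.
lfsrStep :: Word16 -> Word16
lfsrStep s = (s `shiftR` 1) .|. (feedback `shiftL` 15)
  where
    feedback = (s `xor` (s `shiftR` 2) `xor` (s `shiftR` 3) `xor` (s `shiftR` 5)) .&. 1

Iterating lfsrStep from any non-zero seed walks through all 65,535 non-zero states before repeating, which is the long-cycle property described above.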
This algorithm, which requires shifting and updating values in registers, can be easily and efficiently implemented in traditional non-functional languages (imperative style) where mutability allows for changing state — an array can be used to represent LSFR and FSM and updates can be made in place. In a pure functional programming language like Haskell, data is immutable — when you update an array you get a new array and this means that algorithms that make critical use of updatable state will be very inefficient.
Figure 1: Snow3g keystream mode
However, we know that the Snow3G algorithm is a pure function — it is a deterministic algorithm where, for the same input, you get the same output. So the function signatures for ciphering and integrity should be pure, along the lines of the sketch below:
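(A signature sketch only; the Key and IV aliases and the use of ByteString for message data are illustrative assumptions, not the 3GPP reference API.)

import Data.ByteString (ByteString)
import Data.Word (Word32)

type Key = ByteString  -- 128-bit key
type IV  = ByteString  -- 128-bit initialization vector

-- Confidentiality (f8): output has the same length as the input.
f8 :: Key -> IV -> ByteString -> ByteString
f8 = error "full Snow3G cipher elided from this sketch"

-- Integrity (f9): a 32-bit MAC of the message.
f9 :: Key -> IV -> ByteString -> Word32
f9 = error "full Snow3G MAC elided from this sketch"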
Question: In a purely functional language like Haskell, is it possible to have an external specification that is purely functional (as shown above), while the internal implementation makes use of updatable state? Can Haskell's type system be used to securely encapsulate stateful computations? The answer is yes. The 1993 (yes, 1993!) paper titled Lazy Functional State Threads by John Launchbury and Simon L. Peyton Jones showed how to achieve this using primitives for mutable arrays and a State Transformer (ST) monad. Note that in order to ensure that the stateful computation is securely encapsulated, rank-2 polymorphism is required. The GHC compiler implements the ST monad.
After all the stateful computations are done, we need to ‘escape’ from the ST monad to get a pure function by invoking the runST function whose signature is runST :: ST s a -> a. As mentioned in the paper, “runST takes a state transformer as its argument, conjures up an initial empty state, applies the state transformer to it, and returns the result while discarding the final state”
For this Snow3G implementation, an unboxed mutable array is used to represent the 16-stage, 32-bit LFSR and the 3 registers of the FSM. runSTUArray provides a safe way to create and work with this unboxed mutable array before returning an immutable array (without copying). Its signature is runSTUArray :: (forall s. ST s (STUArray s i e)) -> UArray i e.
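A small self-contained sketch of the pattern (a toy register bank standing in for the real Snow3G state, not the actual algorithm): all mutation happens inside the ST computation, and callers only ever see an immutable result.

import Control.Monad (forM_)
import Data.Array.ST (newListArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray)
import Data.Bits (xor)
import Data.Word (Word32)

-- Clock a toy 16-word register bank n times, updating it in place,
-- then freeze it into an immutable unboxed array.
-- 'initial' must contain exactly 16 words.
clockN :: Int -> [Word32] -> UArray Int Word32
clockN n initial = runSTUArray $ do
  arr <- newListArray (0, 15) initial      -- mutable, unboxed state
  forM_ [1 .. n] $ \_ -> do
    a <- readArray arr 0
    b <- readArray arr 5
    writeArray arr 0 (a `xor` b)           -- in-place update, no copying
  return arr                               -- frozen by runSTUArray without a copy

The rank-2 type of runSTUArray guarantees that the mutable array cannot leak out of the ST computation, which is exactly the secure encapsulation described above.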
It would be nice if there were a way to demonstrate these Snow3G algorithms in a web app — and there is! The amazing GHCJS can take any Haskell code or library (except those that have underlying native code dependencies) and compile it to JavaScript. See my related blog article on the JavaScript problem. For the web UI portion, there are many options: I have used the powerful Functional Reactive Programming (FRP) system called reflex and a DOM framework built on top of it.
Political obligation
Rex Martin, "Political Obligation", introduction: 'Political obligation' is a broad notion and covers many things. Some have said, for example, that the citizen has an obligation or duty to vote.

Herman van Erp (Tilburg University), "Political Philosophy: Democracy and Political Obligation", abstract: The public life of political servants is characterized by other duties and obligations than private life; conflicts can even arise between a person's public and private duties. The central point of this paper is to examine whether this difference of duties can be regarded as an effect of different forms of obligation.

To have a political obligation is to have a moral duty to obey the laws of one's country or state; on that point there is almost complete agreement among political philosophers.
The contemporary political philosopher john rawls considers himself to be part of the social contract tradition of john locke, jean-jacques rousseau and immanuel kant, but not of the tradition of locke's predecessor, thomas hobbes. So to define political obligation as i will use the term in explaining locke's argument (locke himself does not seem to use it), it is a moral obligation, within limits which the argument will establish, to act upon the magistrate's determination of one's duties under natural law and not upon one's own judgment. However, even if a political obligation can be justified, and we do have some content-independent moral reason to obey the law, this does not settle the further matter of whether it should be taken to override the strong content-dependent moral reasons we might plausibly have to oppose certain laws. Jonathan wolff, “political obligation, fairness and independence,” in ratio, 1995 anarchist: matthew smith, “political obligation and the self,” in philosophy and phenomenological research.
Remember, a political obligation is one you are duty-bound to follow because of its source (the state) and your relationship to it theories of political obligation fall into five general sorts: consent, gratitude, fair play, association, and natural duty. To have a political obligation is to have a moral duty to obey the laws and support the institutions of one’s political community in fact, i think political obligations are a broader category. To a lay man the word means “to have a political obligation is to have a moral duty to obey the laws of one’s country or state”1 in context of the subject politics, the word political obligation is defined as “when the authorising rule is a law, and the association a state, we call this political obligation”2 political obligations. The last section is about a differentiated conception of political obligation and virtue, in democracies, for political leaders, for citizens, and for public servants all modern societies in some way accept the distinction between legal and ethical obligation. Abstractin “justice, deviance, and the dark ghetto,” tommie shelby argues that blacks in urban centers do not have the same set of civic obligations that bind people within the wider society that argument is based on his claim that meaningful citizenship—defined as equal political power relative to members of the wider society, equal access to employment-oriented skill acquisition and.
Ideas and ideologies henry gruijters feb 2008 what is political obligation and what creates it henry guijters according to a heywood, political obligation is: ‘the duty of the citizen to acknowledge the authority of the state and obey its laws’1 this obligation which naturally implies an element of coercion but what creates it. The subject of this chapter is hobbes’s theory of political obligation the discussion focuses exclusively on the exposition of the theory in leviathan while commentators agree on the basic theme of hobbes’s theory, that people become obligated to obey a sovereign through the covenants they make either to each other or to the sovereign, they disagree on what hobbes took to be the grounds. The idea of political obligation is that there is (at least under certain circumstances) a moral obligation to obey the government and its law even if, apart from this obligation, there would be no moral obligation to do so it is supposed to be a moral obligation, not simply a legal obligation or a matter of expediency. By political obligation, theorists generally mean a moral requirement to obey the law of one's state or one's country in the liberal tradition, liberty is a central value, and so the fact that some individuals should obey others must be explained.
The majority of political philosophers investigating this issue agree that a political obligation is a moral requirement to act in certain ways concerning political matters (eg a moral requirement to obey the laws and support one's country. Political obligation - moral or ethical foundations of political obligation - ancient indian ideas and institutions on political obligation unit - iii dimensions of political obligations in a modern state - political obligation and. Teleological ethics, (teleological from greek telos, “end” logos, “science”), theory of morality that derives duty or moral obligation from what is good or desirable as an end to be achieved also known as consequentialist ethics, it is opposed to deontological ethics (from the greek deon. By political obligation, theorists generally mean a moral requirement to obey the law of one’s state or one’s country traditionally, this has been viewed as a requirement.
The term ‘political obligation’ is not one that has much currency in contemporary political discourse, and will likely be unfamiliar even to those who are generally well-educated and politically informed. In other words, political obligations are grounded in the citizen performing some voluntary act in order to deliberately undertake the obligation this is a very popular theory, and is supported by philosophical heavyweights such as john locke. To a lay man the word means to have a political obligation is to have a moral duty to obey the laws of one's country or state1 in context of the subject politics, the word political obligation is defined as when the authorising rule is a law, and the association a state, we call this political obligation2 political obligations have been.
About philosophical anarchism and political obligation political obligation refers to the moral obligation of citizens to obey the law of their state and to the existence, nature, and justification of a special relationship between a government and its constituents. Hobbes on the basis of political obligation george schedler this essay is devoted to showing that hobbes was not an ethical egoist and to explaining the consequences of this discovery for other interpretations and criti- cisms of his account of political obligation 1 1 have divided the body of the essay into. Political philosophyexamine the basis for political obligation in light of various theories like social contract,general consent, general will, justice and common goodproblem of political obligations is the fundamental or central problem of the politicalphilosophy.
| |
Jupiter has no stable surface: if you tried to stand on it, you would sink and be crushed by the immense pressure inside the planet. If you could somehow hover at the level conventionally treated as Jupiter's "surface" (its cloud tops), you would feel strong gravity, about 2.5 times that of Earth's.
Despite occasional claims to the contrary, no human has ever visited Jupiter, let alone survived in its environment. The planet offers no habitable niche: there is nothing to stand on, nothing to breathe, and no altitude at which a person could endure the combination of gravity, pressure, temperature and wind.
Nor has anyone walked on Jupiter's moons. Io, the innermost of the four large moons, is sometimes imagined as a vantage point, but it is a separate, volcanically active world, not a walkway onto the planet. For scale: Jupiter is about eleven times wider than Earth, Io takes roughly 42 hours to orbit it, and Jupiter itself spins once in just under 10 hours, the fastest rotation of any planet in the Solar System.
The Sun's influence matters too, but not in a way that helps habitability. Jupiter's powerful magnetic field traps charged particles from the solar wind (and from Io's volcanoes) into intense radiation belts; any astronaut or unshielded electronics exposed to them would be damaged very quickly.
Surface. Jupiter, as a gas giant, lacks a real surface. While a spaceship would have no place to land on Jupiter, it would also be unable to sail through unharmed. Spacecraft attempting to enter the planet are crushed, melted, and vaporized by the tremendous pressures and temperatures deep within the planet. Any material exposed to these conditions will be destroyed.
Jupiter has four major moons: Io, Ganymede, Europa, and Callisto. All four are solid bodies: Io is rocky and volcanic, while Ganymede, Europa, and Callisto are mixtures of rock and ice, with Europa likely hiding a liquid-water ocean beneath its icy shell.
Landing on any of these moons would be an enormous engineering challenge, not because their surfaces lack traction, but because of the distances involved, the intense radiation environment (especially near Io and Europa), and the absence of any atmosphere to slow a descending spacecraft.
Jupiter's magnetic field, the strongest planetary field in the Solar System, deflects many particles arriving from outside. But the same field traps charged particles close to the planet, creating radiation belts far harsher than Earth's Van Allen belts, so the field is less a protection than an added hazard for visiting spacecraft.
Flights over Jupiter would be amazing sights, because its bands of color trace differences in cloud chemistry and depth. The atmosphere is mostly hydrogen and helium; the white clouds are ammonia ice, and trace compounds (possibly sulfur- or phosphorus-bearing) are thought to produce the reds and browns of features like the Great Red Spot.
Jupiter has been visited by several spacecraft, including the Pioneer probes that flew past it in the 1970s, the Galileo orbiter that arrived in 1995, and the Juno orbiter, which has been studying the planet since 2016.
The atmosphere of Jupiter is largely made up of hydrogen and helium gas. Trying to land on it would be hopeless: a descending craft meets steadily rising temperatures and pressures, and there is no surface to stop at and no practical way to climb back out.
Even if a human could somehow survive the initial descent, the environment would crush them. Jupiter's average density is only about 1.33 times that of water, far lower than rocky Earth's, which reflects its gaseous composition; but pressure and temperature climb so steeply with depth that no material, let alone a human body, survives the journey down.
Escaping Jupiter's gravity is also far harder than escaping Earth's. The escape velocity at Jupiter's cloud tops is about 59.5 km/s, roughly 133,000 miles per hour, more than five times the 11.2 km/s needed to escape Earth. No crewed vehicle has ever come close to such speeds under its own power.
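As a rough check of that figure, using standard published values for Jupiter's mass (about 1.90 × 10^27 kg) and radius (about 7.0 × 10^7 m):

v_esc = sqrt(2GM/R) = sqrt(2 × 6.67×10^-11 × 1.90×10^27 / 7.0×10^7) ≈ 6.0×10^4 m/s ≈ 60 km/s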
In conclusion, living on Jupiter is impossible: there is no surface to stand on, the atmosphere crushes and vaporizes anything that descends, and the radiation environment is lethal. Humans can study Jupiter up close only through robotic spacecraft. | https://spiritualwander.com/can-we-stand-on-jupiter
There has been a 60 year long discussion on the role of the neurotransmitter serotonin in the pathophysiology of depression. A recent systematic investigation by Joanna Moncrief and colleagues concluded that “main areas of serotonin research provide no consistent evidence of there being an association between serotonin and depression, and no support for the hypothesis that depression is caused by lowered serotonin activity or concentrations”.
Yesterday, a new paper came out in which the authors made the strong claim that they found “clear evidence” for the serotonin theory of depression, that is, that the neurotransmitter serotonin is involved in the pathophysiology of depression.
Less than 24 hours after the paper appeared online, there has already been substantial media coverage, such as a piece in the Guardian. Given the relevance of the study, I’ll explain in this blog post why the paper’s findings do not support the conclusion the authors draw.
The study
The study, entitled “BRAIN SEROTONIN RELEASE IS REDUCED IN PATIENTS WITH DEPRESSION: A [11C]Cimbi-36 PET STUDY WITH A D-AMPHETAMINE CHALLENGE”, was published on November 4th 2022 in Biological Psychiatry. The Guardian summarizes the study well:
The participants were given a PET scan that uses a radioactive tracer to reveal how much serotonin was binding to certain receptors in the brain. They were then given a dose of amphetamine, which stimulates serotonin release, and scanned again. A reduced serotonin response was seen in the depressed patients, the researchers found.
In the core summary of their paper, the researchers conclude that the study "provides clear evidence for dysfunctional serotonergic neurotransmission in depression" — a strong claim. I show in this blog that this conclusion is absolutely not warranted given the presented evidence.
Sample size is small, generalizability is not given
The study involved 17 depressed participants and 20 healthy controls. I want you to keep in mind that the authors here wrote a paper about depression—they wanted to learn about depression, not about the 17 participants in particular. This is the main reason we use statistics in science: you study a small sample of interest, and then use statistics to draw inferences about the population you are interested in.
No matter if you have training in statistics or not, you likely have built a pretty good statistical intuition by reading results of election polls. Suppose I want to know which of 2 parties will win the next election in the Netherlands, and to do so I carry out a poll with 37 participants. You would be very skeptical when I tell you that there is “clear evidence” that party 1 will win over party 2 from this small sample. Of course you would be more confident if out of 37 participants, every single one said “party 1” and nobody said “party 2”. But even in this case, the problem that remains is generalizability: who these 37 participants are.
Suppose I told you that I recruited the 37 participants as randomly as I could, by traveling the Netherlands for weeks and asking every 1000th person I encountered. You would have more confidence in my results than when I told you all 37 participants I asked were asked during a campaign event of party 1.
Overall, sample size and generalizability are the reason why firm conclusions about e.g. “depression” only follow when 1) we draw a large sample from the population we are interested in, and 2) we draw a random sample of depressed patients.
In this particular study, neither is the case. The study has a very small sample size, and generalizability is very low because the depressed group is not representative of people with depression broadly. There are many factors here, but to list just one: 5 of the 17 depressed participants in the study (i.e. 30%) have Parkinson's disease, which does not reflect the population of interest (nowhere near every third person with depression also has Parkinson's).
Even if the study had strong statistical results, the study cannot provide “clear evidence” for the role of serotonin in depression given these limitations. Unfortunately, the study does not have strong statistical results either.
There is little to no statistical evidence at all
The most important thing to know is that the authors themselves do not find a significant difference between depressed and healthy people regarding serotonin release. They do find a significant difference only after removing one of the 17 depressed participants. I will not go into the weeds of discussing in detail here whether this exclusion is warranted or not, because it does not matter: you do not have “clear evidence” for the serotonin theory of depression if your result depends on one particular participant in your study.
But even after removing this one person, results are anything but conclusive—see below. On the left (black dots), you see the depressed participants, on the right (grey dots) the healthy controls, and the y axis denotes serotonin release (the people with Parkinson were excluded from the plot, if interested you can find the plot with these people included here).
Ron Pat-El on Twitter made an important point by marking up the plot:
As you can see, depressed people and healthy controls are nearly perfectly similar in their serotonin release, except for 1 depressed person (left bottom circle) and 2 healthy people (top right circle). This does not establish “clear evidence” for group differences. Consider the above plot again, and think of the y-axis as height of people rather than serotonin release. Based on the data points, you would not draw draw the conclusion that there is “clear evidence” for height differences (depressed people being smaller than healthy controls) given the evidence presented. Yes, there is one depressed person who seems quite small, and 2 healthy controls who are quite tall, but overall the two groups are very similar in overall height.
The authors conduct 3 statistical tests that compare the two groups of people regarding their serotonin release. Two of these tests are barely statistically significant (p-values are both 0.04; anything under 0.05 is considered statistically significant), and the third test is not significant. I want to stress here that statistically speaking, this is the weakest possible evidence that still counts as a significant finding.

There are many ways a scientist can test whether something is significant, meaningful, or interesting. If you use a p-value, the threshold for significance the authors choose here is 0.05, but it is also common to choose much more stringent thresholds, such as 0.01 or even 0.001; according to those rules, the finding would not be significant. One can also estimate a Bayes factor, which compares competing hypotheses ("depressed people have lower serotonin release than healthy controls" vs "there is no difference") and quantifies the support for one model over the other. If you calculate the Bayes factor in this case, which the authors did not do, it is smaller than 1 — this means that there is no support at all for the serotonin hypothesis of depression. Support for the serotonin hypothesis would start at a Bayes factor larger than 1, and concluding that there is "clear evidence", as the authors do, would require a Bayes factor of 5 or even 10.
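For readers unfamiliar with the term, a Bayes factor is simply the ratio of how well the observed data are predicted under the two competing hypotheses:

BF₁₀ = p(data | H₁) / p(data | H₀)

where H₁ is "depressed people have lower serotonin release than healthy controls" and H₀ is "there is no group difference". Values above 1 favor H₁, values below 1 favor H₀.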
The authors do not look at these ways to quantify evidence. They also do not calculate any effect sizes for their results. Effect sizes are different from p-values and tell us, independent of statistical significance, how different the two groups are in terms of the magnitude of serotonin release. Reporting them is standard practice in statistics, and I suspect the authors did not report effect sizes because the magnitude of the difference here is really minimal. And when there is only a minimal difference between healthy and depressed folks, it cannot corroborate theories about the pathophysiology of a disorder.
Another way of thinking about the magnitude of effects is by dropping random participants. Erick Turner uses the metaphor of a missed taxi, which I really like: imagine you claim that your study provides "clear evidence" to support a theory. Now imagine that the one depressed participant in the very bottom left of the plot, the outlier, misses their taxi on the day of the study and cannot join the investigation. The results would no longer be significant, and there would not be any evidence from this study to support the serotonin theory of depression. Clearly, your result is not robust if a single person's participation changes your conclusions dramatically. For "clear evidence", you need much larger samples in which individual outliers do not affect core findings. Think back to the poll we discussed above: if one fewer person in a poll changes your result from "clear evidence that party 1 wins" to "clear evidence that party 2 wins", you would disregard it as uninformative.
In addition to testing whether there is more serotonin released in response to amphetamines in the brains of depressed people vs healthy people, the authors also investigate whether depression severity (rather than depression diagnosis) is associated with serotonin levels (or changes in serotonin levels; both questions were investigated). Finding a significant relation here is important to support the serotonin hypothesis of depression, which maintains that lower serotonin levels in depressed people play a causal role in the pathophysiology of depression. Accordingly, the authors hypothesized in the paper that there would be a relationship between serotonin levels and depression severity (the lower the serotonin level, the higher the depression severity). The authors found no statistical association, concluding in the paper that "at this stage we have no explanation for the lack of such relationship." If you don't find a relationship between serotonin levels and depression severity, how can you confidently conclude that there is "clear evidence" in support of the serotonin hypothesis of depression?
Conflicts of interest
Finally, I’d like to note that the conflict of interest statement seems incorrect, which states that “all authors report no biomedical financial interests or potential conflicts of interest”. Without having the time to dig into this deeper for all authors, I know that Dr Nutt has previously declared COIs related to biomedical research, e.g. “D.N. received consulting fees from Algernon and H. Lundbeck and Beckley Psytech, advisory board fees from COMPASS Pathways and lecture fees from Takeda and Otsuka and Janssen plus owns stock in Alcarelle, Awakn and Psyched Wellness” in a study published just a few months ago.
But this can still be fixed, given that the paper on the website of the publisher is currently the "pre-proof" version. This likely also explains a number of other inaccuracies or inconsistencies I found in the paper, such as a p-value for the main finding of 0.038 in the abstract that does not appear in the results (instead, the authors report 0.041 later on).
Conclusion
I want to address one common point folks have brought up in response to criticism of the paper yesterday, e.g. on Twitter, to defend the work: “PET studies are hard to carry out, and it is expensive and difficult to recruit large samples”. This may be true, but it has nothing to do with the arguments critics raised.
As I explain in great detail elsewhere, small samples are not inherently problematic, and it is perfectly fine to have a sample size of 1 participant. The problem arises when researchers draw conclusions that do not follow from the presented evidence, for example because the sample is too small to support them. In an n=1 study, or in this study with 17 depressed participants, it's perfectly fine to argue that one learned something about one person, or about 17 people, respectively. One can even establish interesting preliminary findings if results are strong enough, and call for replication studies in larger samples. But one cannot learn much about "depression" from studying 17 people with depression — threats to external validity and the heterogeneous nature of depression stand in the way of such conclusions.
As Hannah Devlin quotes me in the Guardian piece: “The conclusions the authors draw are not proportional to the evidence presented”, and that is the problem, not the sample size.
There are many other issues I see with the paper I won’t discuss in detail here, but one is worth mentioning, perhaps: since the serotonin theory of depression became popular in the 60s, we have learned a lot about depression that just doesn’t align with the theory. One of these insights is that the diagnosis of major depressive disorder places people who have very diverse problems and etiologies into one single group that hinders effective treatments, and that most one-size-fits-all treatments for depression haven’t worked out so well.
Few researchers today still believe it is plausible that one particular causal (biological, psychological, or social) mechanism such as serotonin is the main driver of depression; to remind you, this is exactly what the serotonin theory of depression claims (folks in the 60s often talked about a "final common biological pathway" for depression). Instead, as we have shown in two recent papers published in the last few months, depression is probably best conceptualized as emerging from a system of biopsychosocial components that interact with each other in complex ways. The role of serotonin in this system deserves further consideration. Here is a brief blurb from our recent work on systems; the two full papers can be found here (Nature Reviews Psychology) and here (Current Directions in Psychological Science).
In the summer of 2019, a scholar I greatly admire was kind enough to lend me his bicycle for a few months, granted I take good care of it. When the bicycle broke down after 3 weeks, I was terribly worried, but reductionism came to the rescue: Bikes can be decomposed into their constituent parts, and fixing all parts at the micro level will restore function at the macro level. But mental disorders are not like bicycles—they are like many other complex systems in nature. Whether a lake is clean or turbid results from interactions of interdependent elements, such as oxygen levels, sunlight exposure, fish, pollution, and so on. Whether my mood while writing this manuscript is anxious or cheerful is the outcome of causal relations among elements of my mood system, including my personality and disposition; the previous night’s sleep; my caffeine consumption; and external influences such as my email inbox. The same applies to mental health states. From a systems perspective, such states result from interactions of numerous biological, psychological, and social features, including specific risk and protective factors, moods, thoughts, behaviors, biological predispositions, and social environments.
UPDATE 1, November 13 2022: I just saw that the Guardian has now changed their original headline from "Study finds first direct evidence" to "Study claims to find first direct evidence" a few days after my blog went live. And it turns out I wasn't the only one who thought the conclusions drawn in the study don't follow from the evidence. | https://eiko-fried.com/clear-evidence-for-serotonin-hypothesis-of-depression/
Los Angeles (Jan. 18, 2011) – The Asian Pacific American Legal Center (APALC), a member of the Asian American Center for Advancing Justice, has launched a campaign to get Asian American and Pacific Islander (AAPI) communities engaged in redistricting, the process of redrawing voting district boundaries every ten years based on census data. APALC anchors a statewide network of AAPI community organizations called the Coalition of Asian Pacific Americans for Fair Redistricting (CAPAFR), which is holding community meetings statewide in January and February to help AAPI communities understand and participate in the redistricting process. A schedule of meetings can be found below.
CAPAFR’s work will help AAPI communities provide input to California’s new Citizens Redistricting Commission, which is the governmental body responsible for conducting redistricting for the Assembly, State Senate, Board of Equalization, and California’s 53 Congressional seats. The commission also will hold public hearings throughout California in the next few months. Community input at the hearings will be an important factor in whether the commission draws district lines that keep communities together or splits them unfairly.
The goal of CAPAFR’s meetings is to provide AAPI community members with an opportunity to review potential district configurations and to collect feedback on which configurations best articulate regional AAPI interests and concerns. It is vital that a broad range of AAPI community members attend the meetings because their feedback will be incorporated by CAPAFR into statewide Assembly and State Senate mapping proposals.
Sometime after the Census Bureau releases Census 2010 redistricting data in March 2011, CAPAFR will submit these statewide mapping proposals to the commission. The CAPAFR mapping proposals will illustrate how AAPI communities of interest should be kept together. CAPAFR also will prepare community members to testify before the commission about their communities’ interests and concerns. The commission will hold two rounds of hearings this year, and has a deadline of August 15 to establish new district boundaries.
Additional information about CAPAFR can be found at www.capafr.org. The commission's website is www.wedrawthelines.ca.org. | http://aapress.com/national/apalc-redistricting-campaign-to-protect-aapi-communities/
TIRANA, October 20 While climate change is the biggest threat to human health, "Albania decided to further reduce its already limited 2021 budget for environment and climate change by more than seven percent, to Euro 8.9 million," the EC report says. The annual report issued on Tuesday by the European Commission following the adoption of the...
FAO Supports Sustainable Development in Vjosa-Zagori Areas
TIRANA, August 20 In contrast to the outdated domestic vision of hydropower projects on the Vjosa River, the United Nations Food and Agriculture Organization (FAO) highlights the need for sustainable management and development in and around this river of international importance. Given the status of the Vjosa as one of the rare remaining natural flow regimes in...
Albania 7th in Europe for Share of Renewable Energy
TIRANA, January 23 The share of energy from renewable sources in gross and final consumption in Albania reached 34.9 percent in 2018, thus ranking the country seventh in Europe, Eurostat confirmed on Thursday. Meanwhile, Kosovo ranked 12 with a share of 24.9 percent from renewable energy sources. Yet, the diversification of non-hydro renewable energy sources...
Bern Convention Urges Albanian Govt to Suspend HHP Projects on Vjosa
TIRANA, December 11 This article was originally published by balkanriver.net under the headline: Rebuke for Albania: Bern Convention insists that the Albanian government suspend hydropower projects on the Vjosa River. The Bern Convention Standing Committee has decided to keep the case against the Albanian government open regarding the projected Pocem and Kalivac hydropower plants (HPP) on...
HPPs on Vjosa would Lead to a lose-lose-lose Situation, Scientists Say
TIRANA, July 3 This article was originally published by balkanriver.net under the headline: New Vjosa study: Hardly any Energy, no Sand for the Beach. A sediment study by the University of Natural Resources and Applied Life Sciences Vienna (BOKU) demonstrates serious consequences of the planned power plants on the Vjosa. For over a year, scientists from...
Rafting on Vjosa River- One of the Wonders of Europe
TIRANA, June 3 Vjosa is a river located in the south of Albania that flows into the Adriatic Sea. As is already known, local and foreign visitors can go rafting on the Vjosa, turning the town of Tepelena into a destination with real tourism potential for adventure-sports enthusiasts, but...
EP Urges Albanian Govt to Decrease Dependency on HPP Projects
TIRANA, December 11 The European Parliament resolution on Albania calls on authorities to review the strategy on renewable energy. The 39th point of the resolution highlights that the European Parliament expresses deep concern about certain economic projects that have led to grave environmental damage in protected areas, such as large-scale tourist resorts and the hydropower...
Bern Convention Opens Case-file on HPPs on Vjosa River
TIRANA, December 4 The Standing Committee of the Bern Convention that held its 38th meeting in Strasbourg on 27-30 November decided to open a case-file and called on the Albanian authorities to suspend Hydro Power Plant (HPP) projects on Vjosa River. “The Government of Albania: Uses the precautionary approach and suspends both Kalivac and Pocem...
Administrative Court Issues Verdict: No HPPs in Osumi Canyons
TIRANA, July 31 The Administrative Court upheld a verdict in favor of Osumi Canyons. The verdict issued on Monday, July 30 accepted the decision of the Ministry of Infrastructure and Energy on the annulment of a contract on the construction of two Hydro Power Plants in Osumi Canyons. On its part, the company took the...
Confederation of Albanian Industry against HPPs Construction on Vjosa River
TIRANA, November 27 The Confederation of Albanian Industry (Konfindustria) made an official appeal to the Albanian Prime Minister and to other decision making institutions on the interruption of legal proceedings on the construction of energy projects on Vjosa River, including Kalivac Hydro Power Plant (HPP), Monitor Magazine reported. This moratorium is required in order to... | https://invest-in-albania.org/tag/vjosa-river/ |
Cherry blossom festival to take place in Hanoi in March
A cherry blossom festival will take place in Hanoi from March 23rd-26th as part of a cultural exchange to mark the 45th anniversary of Vietnam-Japan diplomatic ties.
Cherry blossom festival 2017 in Hanoi. (Photo VNA)
Nearly 30 cherry trees and 10,000 blossom branches will be displayed at Ly Thai To Park in Hoan Kiem district, along with various types of flowers planted in Vietnam.
The event will feature 20 food pavilions and multi-cultural dances, including Yosakoi – a unique style of dance in Japan performed by large teams, and Vietnamese traditional art forms such as “xam” (blind wanderers’ music) and “ca tru” (ceremonial singing) as well as a conference to promote bilateral tourism and investment cooperation.
The Japanese side will present several cherry trees to be planted in Hanoi.
The festival is expected to help Vietnamese people to understand better about Japanese land, culture and people in addition to boosting collaboration in culture, tourism and economy./. | https://vietnamtimes.org.vn/cherry-blossom-festival-to-take-place-in-hanoi-in-march-5306.html |
Atherosclerosis is a disease process leading to hardening and narrowing (stenosis) of your arteries. The buildup of fat, cholesterol, calcium and other substances creates plaques inside arteries, which can lead to serious problems including heart attack, stroke, amputation and death.
Serious, possibly fatal
Atherosclerosis-related diseases are the No. 1 cause of death in the U.S. for both men and women. Roughly 5 million people in the U.S. are affected.
Preventable—even small changes can help
Stopping smoking, following a healthy diet, managing cholesterol and staying physically active all decrease the risk of atherosclerosis and improve your overall health.
Until the arteries narrow significantly, many people experience no symptoms. Symptoms often appear only when the disease is advanced, and vary with the types of arteries affected.
PAIN
Chest pain, which may signal angina or possibly a heart attack, may indicate that the arteries of the heart are affected. Pain in the legs while walking may indicate that the arteries of the legs are affected.
SIGNS OF STROKE
A mini-stroke or stroke may occur if arteries of the neck are affected.
A variety of characteristics and behaviors called risk factors may contribute to atherosclerosis.
Some risk factors cannot be changed
Age, male gender, race and family history can put you at a higher risk.
Other risk factors can be managed
- Smoking
- High blood pressure
- High amounts of cholesterol in the blood
- High amounts of sugar in the blood
- High levels of inflammation as the body responds to injury or infection
- Obesity
- Lack of physical activity
- Mental health issues
- Stress
See a vascular surgeon
A vascular surgeon will ask questions about symptoms and medical history, including family history, and will perform a physical exam.
Blood tests likely, other tests may be recommended
The vascular surgeon will likely recommend one or more blood tests.
Depending on the arteries affected or suspected, additional tests may be recommended to understand the presence and severity of disease. These may include:
- Treadmill test
- Ultrasound
- Computed tomography (CT) scan
- Magnetic resonance imaging (MRI) scan
- Angiogram
The vascular surgeon will provide information to help you understand the effects of atherosclerosis and may recommend changes in behavior or diet.
Medications may be prescribed, for example, to manage high blood pressure or high cholesterol.
If needed, surgery will be recommended and may include:
- Angioplasty or stenting
- Surgical bypass
Prevention is key to reducing the risk of atherosclerosis-related disease, primarily through lifestyle and dietary modifications that will improve your overall health. | https://vascular.org/patient-resources/vascular-conditions/atherosclerosis |
Q:
Standard Big Bang model and space curvature
Why is it said that in the standard Big Bang model, space curvature grows with time as the Universe expands? My intuition is that it should be the opposite.
I saw this in a book discussing a problem with the standard Big Bang model and why we need inflation theory. It says that the Universe is observed to be nearly flat now, and because the space curvature grows with time, it should have been even flatter in the past, which the standard model cannot explain. Inflation theory solves the problem because, even if space is initially far from flat, inflation smears out the curvature and makes it flat.
A:
What exactly did the book say? It all depends on what you mean/define as "curvature". What you describe appears to be a description of the behaviour of $\Omega$. Inflation does indeed drive $\Omega$ towards unity and simultaneously flattens space because the radius of curvature grows exponentially bigger.
If $\Omega < 1$ at some early epoch then it should decrease quickly with time such that $\Omega \ll 1$ in the present day - this means that the universe has negative curvature, but does not mean it is becoming more curved.
In the Friedmann equation, the curvature parameter $k$ is a constant $(1,0,-1)$
$$H^2 = \frac{8\pi G\rho}{3} - \frac{kc^2}{a^2}$$
Here, the spatial curvature is $k/a^2$ and the radius of curvature is $a$ if $k=+1$. Thus as the universe expands and $a$ gets larger, any curvature becomes smaller.
In a little more detail - one can write the above equation in terms of the density parameter $\Omega$, the ratio of density to the critical density $3H^2/8\pi G$:
$$(\Omega^{-1}-1)\rho a^{2}=-\frac{3kc^{2}}{8\pi G}$$
During inflation, the energy density $\rho c^2$ remains constant as $a$ grows exponentially. In order to keep the left hand side equal to the right hand side (which is just a collection of constants), then $\Omega$ must be driven very close to unity, while $k/a^2$ will tend towards zero.
After inflation then $\rho$ will vary with $a$ depending on whether the expansion is dominated by matter ($a^{-3}$) or radiation $(a^{-4})$. In both these cases $\rho a^2$ will decrease as the universe expands, such that if $k \ne 0$, then $(\Omega^{-1} -1)$ must increase, which means that $\Omega$ must either grow or shrink away from unity. But $k/a^2$ continues to get smaller as the universe expands.
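To make the inflationary case explicit: with $\rho$ constant and $a \propto e^{Ht}$ during inflation, the relation above gives

$$|\Omega^{-1}-1| = \frac{3|k|c^{2}}{8\pi G \rho a^{2}} \propto e^{-2Ht}$$

so a few dozen e-folds of inflation drive $\Omega$ exponentially close to unity, while the curvature term $k/a^2$ is suppressed by the same exponential factor.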
About PCB meaning: PCB is the common name for a bare board used for mounting electronic components. The board connects these components through conductive routes or signal traces etched from copper sheets that are laminated onto a non-conductive substrate. For this reason, another name for a printed circuit board is a printed wiring board (PWB). A bare printed wiring board carries no circuit components such as resistors or capacitors.
Nevertheless, printed wiring boards (PWBs) allow fully automated assembly processes, which were not practical in the earlier era of tag-type (point-to-point) circuit assembly. A PCB populated with electronic components goes by other names, such as printed circuit board assembly (PCBA) or printed circuit assembly (PCA).
PCB Meaning--PCB VS. PCBA
A PCB is a bare board that connects electronic components using conductive pads and other conductive routes. PCBA refers to the same printed circuit board after all the components have been soldered on, when it is ready to perform its electronic function.
PCB Meaning--Uses of PCB board
Below are some of the common application areas of PCBs.
- Communications: devices such as tablets, smartphones, radios, and smartwatches use PCBs as a base for the device.
- Computers: laptops, desktop PCs, workstations, and satellite navigation contain PCBs at their center.
- Entertainment Systems: televisions, DVD players, games consoles, and stereo sets have PCBs at their core.
- Home Appliances: Nearly all modern home appliances, such as coffee makers, microwaves, refrigerators, and alarm clocks, use electronic components mounted on a PCB.
PCB Meaning--Other application areas include:
Philosophy of Teaching
Everyone is creative, despite the countless times I have heard someone say, "I wish I was creative!" This recurring belief that a person could lack creativity reflects a collective cultural dysfunction, and it is a misconception that persists even in the collegiate study of art. My experience as an educator and mentor interprets this "wish for creativity" as unlikely to be fulfilled within this mindset, because it remains an unattainable desire rather than a characteristic one must strive to develop. Tracing this misconception to its source reveals contemporary society's fascination with fear and worry.
Ted Orland and David Bayles's 2001 Art & Fear: Observations on the Perils (and Rewards) of Artmaking dissects the complex relationship between fear and art. This book has been pivotal to my career as an artist and as an educator. The concepts it introduced to me played a significant role in the formation of my philosophy of teaching and sparked an interest in the psychology of creativity, learning theory, and behavioral psychology. I encourage the complex process of self-understanding as it relates to the process of creativity. Knowing thyself, vulnerability, cultural reflection, and ultimately (and of course!) Greek mythology became popular topics in art critiques as the class engaged with conceptual ideas, the formal elements and principles of design, material and technical choices, and personal reflection.
As a conceptual artist I am regularly engaged in serious creative work consisting of (1) play, (2) comfort with ambiguity in the pursuit and exploration of ideas, and (3) trans-disciplinary research that serves curiosity, as suggested by arts educator Cindy Foley in her 2014 TEDxColumbus presentation titled Teaching Art or Teaching to Think Like an Artist. Like a garden, the brain needs consistent nourishment in order to flourish. Modeling this kind of creative practice and thinking places high demands on physical and mental rigor and consistency, which are needed for conceptual growth and contemplative reflection.
I teach this way because I am committed to delivering authentic content about which I am excited, but more importantly, I am committed to modeling a growth mindset in the process. Inevitably, what I feed my mind becomes evident in my artwork, in class, and in conversations with students and peers. I intentionally and inclusively invite students to my exhibitions and my studio to observe the results of my creative output. This, too, serves to model the professional practice of an artist and reinforces a mentor-mentee relationship.
Mentorship is the foundation on which I have formed my approach to education. Integral to this approach is a desire to connect with students in their current mindset and to bring awareness to the journey of life through the process of art. This approach requires sensitivity, listening and hearing, observation, trust, flexibility, empathy, and self-awareness. A learner-centric approach necessitates developing the full person, not only through a diverse range of liberal arts subjects but also through an understanding of the unique individual.
I consider my role as an educator a privilege and a responsibility. This time in a student's life is critical, as it is a time of transition from adolescence to adulthood. My goal is to honor each student I serve. | https://www.heathernamethbren.org/philosophy-of-teaching
I was a defendant in an Inauguration Day protest case. I got off. That’s justice.
Police in riot gear contain a group of protesters at the corner of 12th and L streets NW on Inauguration Day in 2017. (Jahi Chikwendiu/The Washington Post)
by Elizabeth Lagesse, July 27 at 4:13 PM
Elizabeth Lagesse was a defendant in one of the Inauguration Day protest cases.
In recent interviews, D.C. Police Chief Peter Newsham showed a troubling lack of regard for the core principles intended to ensure that justice is the aim of our justice system. Remarks from local prosecutors also support a conclusion that many officials within that system seek victories above just outcomes.
Newsham clearly views the outcome of the “J20” inauguration protest cases as a miscarriage of justice. Only one person spent a short time in jail as part of a plea agreement, despite charges that threatened more than 200 defendants with sentences of up to 70 years. The vast majority of those charged were acquitted or had their cases dismissed by prosecutors. Two jury trials for 10 defendants ended without a single conviction — a stark defeat for the prosecution team.
If the possibility that justice was served by these acquittals and dismissals occurred to Newsham, we would not know it from his recent public statements. Speaking to WTOP’s Neal Augenstein, Newsham placed the blame on the standards required of prosecutors at trial: Establishing probable cause for facilitating a crime “is much easier than establishing it beyond a reasonable doubt.”
In his frustration at the difficulty of persuading a jury to convict defendants based on their mere presence at the J20 protests, Newsham seems to forget that proof beyond a reasonable doubt is required for all criminal convictions. That will remain the standard no matter how broadly any new statute is written.
It is also worth noting that several defendants stood trial accused of more overt acts, including specific instances of property destruction. A new law making it easier to charge bystanders as accessories wouldn’t change the uncomfortable fact that even cases against alleged principals generated not one guilty verdict from a trial. (Twenty-one people pleaded guilty before trial.)
Both juries in the J20 trials — as with those in all criminal cases — were told that every defendant is presumed innocent and that the presumption stands “throughout the trial, unless and until the government has proven that he or she is guilty beyond a reasonable doubt.” Does Newsham think this is too much to ask of prosecutors? Perhaps he has grown used to a plea system that reduces that standard in more than 90 percent of criminal cases to a cost-benefit analysis on the part of the defendant, leaving him or her to weigh fundamental rights against the disruptive power of a lengthy prosecution, regardless of guilt, innocence or strength of evidence.
This view seems to be shared by the lead prosecutor for the inauguration cases. In her closing remarks to a jury in December, Assistant U.S. Attorney Jennifer Kerkhoff made her feelings plain: "The defense has talked to you a little bit about reasonable doubt. You're going to get an instruction from the judge. And you can tell it's clearly written by a bunch of lawyers. It doesn't mean a whole lot."
This, from the representative of a system that for centuries has held, in the words of Benjamin Franklin, “that it is better 100 guilty persons should escape than that one innocent person should suffer.” Instead, the gamelike mechanics that have grown up around that system encourage the rise of skilled players concerned mainly with keeping score.
When asked to comment on news that the remaining inauguration cases had been dismissed, Newsham noted that "in the American criminal justice system, sometimes the bad guys win."
What does it mean to press in basketball?
Press is short for pressure. Often called a full-court press, this is an attacking defense employed in the backcourt, where the objective is to force a turnover.
How long can an offensive player stay inside the paint?
There are rules about how long a player can stand still in the paint. Defensive and offensive players are only allowed to stay inside the paint for three seconds at a time.
When can you full-court press in basketball?
A full-court press is a basketball term for a defensive style in which the defense applies pressure to the offensive team the entire length of the court, before and after the inbound pass. Pressure may be applied man-to-man, or via a zone press using a zone defense.
How do you beat a half court press?
Your point guard O1 can pass to the middle, or to O3 or O5 in the corners. In fact, most half court presses are vulnerable down low in the corners. Good accurate long passes over the defense into the corners will often beat this defense. If the back defender starts cheating toward the corner, then the middle is open.
What is a 1-2-2 defense in basketball?
In a 1-2-2 zone defense, the top defender is on the basketball and the two wings are protecting the free-throw line and allowing the pass to be made to the wing. In a 3-2 zone, the top defender doesn’t pressure the point guard. Instead, they sag back and deny the pass into the high post. | https://surreybasketballclassic.info/interesting/how-to-press-in-basketball.html |
In their varying combinations, native plants are important structural components of the region’s ecosystems. This page contains information about some of our coastal, wetland and forest plants. Distinctive ecosystem types depend on a number of factors, such as soil type and climate.
Maintaining the mosaic of ecosystems across the region ensures the future of a more complete range of plant and animal species. Some ecosystem types are particularly at risk, such as coastal dunes and wetlands. These areas contain special plants that are adapted to living in the particular conditions of those environments.
Sand dunes are important natural areas, not only for their ecological significance but also because they help protect our beaches and coastal areas from erosion. Dunes act as barriers against the damage done by storms and waves. The plants associated with sand dunes tend to be suited to different parts of the dune system. The seaward side of a dune is known as the foredune, and plants like pīngao (golden sand sedge) and spinifex are well suited to these exposed areas. They trap sand in the hairs on their leaves and hold it together with their roots. This is how sand dunes are built up over time. Sand coprosma, sand daphne and piripiri (sand bidibid) are suited to the more sheltered mid dune areas, while tauhinu (cottonwood) and flaxes grow well in the backdune (landward) areas.
Many native plants are endangered in the wild and they often don't get the recognition or help that our native animals do. The Wellington region does have some regionally rare and endangered plants. Toroheke (sand daphne) and New Zealand sea spurge (Euphorbia glauca) are both coastal plants at risk with declining populations.
Check out the duneland and rocky coastal sections of our Wellington regional native plant guide for more information.
Wetlands are special ecosystems for our native plants and animals, and they are now much rarer than they used to be. There are a range of different wetland types - some under forest, some with brackish water, and some that dry up in summer. Each of these has plants that are adapted to the soil and climate conditions and to the hydrological regime (the water cycles) of their location.
Wetland plant types range from grasses (like toetoe) and sedges (like Isolepis) to ferns (like swamp kiokio) and giant trees (like kahikatea).
Have a look at our wetland resources to find out about other wetland plants or how to restore a wetland.
There are several different forest types in the Wellington region, characterised by different plant communities. To find out more about the different ecological zones in the region, and to learn about what is appropriate to plant in each, check out the Wellington regional native plant guide.
Forests have very small plants, like mosses and orchids, and very large plants like the majestic northern rātā and the podocarp trees (including rimu and mataī). These large trees emerge above the main canopy and often support whole communities of other plants, called epiphytes, which grow on their trunks and branches.
Smaller trees form the canopy of the forest and below these in the sub-canopy there is a big range of shrubs, ferns, palms, grasses and climbers. In a healthy forest, all of these plant types are represented and there is a good mix of different species and different sizes.
You can find information about the region’s forests plants in the Wellington regional native plant guide. Or you might like to visit some of the stunning forest sites in the region: | http://www.gw.govt.nz/native-plants-3/ |
The green Furbo Rug combines elegance with abstract artistry. Several shades of green and taupe overlap in irregular geometric shapes with dark green lines to finish. The contrasting wool textures are tufted, leaving the piece soft to the touch whilst the viscose green lines add sheen to the design. Its Nordic maker, Linie Design, is committed to sustainable design and is part of the worldwide Care and Fair scheme to support its weavers. Similarly, every rug is crafted using environmentally friendly fibres. | https://www.7interiordesign.com/pp/textile/rugs/linie-design-furbo-rug-green-cm-x |
My My, What Big Claws You Have! Navigating the Pitfalls of Drafting Clawback Agreements
One of the greatest fears in any litigation matter is that you will somehow accidentally produce work product or attorney-client privileged documents to the opposing side and waive the privilege. As a result, it has become standard protocol for parties to enter into clawback agreements that protect sensitive electronically stored information (ESI). Clawback agreements allow parties to agree that the inadvertent production of privileged information will not automatically waive the privilege and provide a process for the return or destruction of that privileged material.
FRE 502
Clawback agreements can work in tandem with Federal Rule of Evidence (FRE) 502, which permits parties to request the return of inadvertently produced attorney work product or attorney-client privileged information. According to the Judicial Conference Advisory Committee on Evidence Rules (revised Nov. 28, 2007), FRE 502 was put in place primarily to “respond to the widespread complaint that litigation costs necessary to protect against waiver of attorney-client privilege or work product have become prohibitive due to the concern that any disclosure (however innocent or minimal) will operate as a subject matter waiver of all protected communications or information.” In answer to those complaints, under Rule 502(b), inadvertent production of privileged information does not waive the privilege if the party took reasonable steps to avoid and remedy the disclosure. By this very language, Rule 502 clearly allows for the use of clawback agreements.
When negotiating clawback agreements, whenever possible, parties should agree upon what actions are considered "reasonable" and clearly define the reasonableness standard in order to prevent any future arguments in the wake of the production of privileged documents. It is also important to consider making the clawback agreement a part of a court order. Under Federal Rule of Evidence 502(e), clawback agreements are binding only on the parties to the agreement unless the agreement is incorporated into a court order. The order could also incorporate other aspects of FRE 502, including subsection (d) (non-waiver of privilege in other federal or state proceedings).
FRE 502(d)
As an alternative to the standard 502(b) reasonableness standard, the parties could entertain the notion of waiving the reasonableness provision altogether under 502(d). In these cases, the privileged information could be clawed back without concern for whether the producing party took reasonable steps not to disclose it. One of the most notable proponents of the 502(d) order is U.S. Magistrate Judge Andrew Peck, who has been quoted as follows when discussing the significance of a Rule 502(d) order: "[I]t is a rule that says you don't have to be careful, you don't have to show that you've done a careful privilege screening, and it says that if the court enters a 502(d) order, it's a non-waiver of privilege in that case and it's a non-waiver of privilege in any subsequent state or federal case, even with different parties."
Essentially, the rule gives heightened protection against waiver where privileged information is disclosed (even knowingly disclosed) and eliminates potentially costly motion practice regarding whether a production was inadvertent and whether any steps, much less "reasonable" steps, were taken to avoid the disclosure. Of course, Judge Peck followed with: "I'm never saying that you shouldn't be as careful as possible to protect your client's privilege" and noted that it would be improper for a court to compel a party to produce documents without conducting a careful privilege review. Judge Peck provides a simple sample two-paragraph order to this effect on his page on the Southern District of New York website.
Factors to Consider When Drafting a 502(b) Clawback Agreement
- Do you want to include the requirements of inadvertent production and reasonableness?
Rule 502(b) precludes waiver of privilege only if the disclosure of the information is inadvertent and reasonable steps had been taken to avert and remedy the disclosure. Any clawback agreement should have this concept clearly stated within the agreement. However, some clawback agreements dispense with the inadvertent and reasonable steps approach altogether and instead set forth a "no fault" or "irrespective of care" standard. These agreements make it, essentially, impossible to waive privilege resulting from disclosure. This type of agreement is more typical when the parties expect large-scale productions and there may not be the time or resources to conduct a thorough review for privilege prior to production.
- If you are going to include the reasonableness requirement, what is going to constitute reasonableness?
In negotiating a clawback agreement that intends to utilize the reasonableness requirement, it is essential that the parties determine what actions can be considered reasonable and how the parties will address such disclosures. For example, “reasonableness” may include running potential privilege terms across a database and conducting both first and second level privilege review. It is helpful to clearly define the reasonableness standard in order to avoid any potential disputes over disclosure of privileged information.
- Do you want to include the clawback agreement in a protective order?
It is important to remember that clawback agreements are only binding between the parties to the agreement unless an order is entered that states otherwise. In order to cover third parties, it is important to ask the court to enter a protective order that mirrors the protections of the clawback agreement and refers to 502(e) so that it can be binding for non-parties as well.
- What are the deadlines and procedures for clawback rights?
FRE 502(b) merely states that disclosure does not operate as a waiver of attorney-client privilege or work product protection if the party that produced the information “promptly took reasonable steps to rectify the error.” Clawback agreements may provide more specifics here regarding, for example, how many days the producing party has to notify the receiving party after the disclosure is discovered; how that notification should be made (e.g., by letter or email); how long the receiving party has to respond or protest the privilege assertion; and what the receiving party should do with the documents in the meantime.
- What is covered under the clawback agreement?
It is important to remember that the phrase “all privileged information” is not solely limited to documents. Make sure the clawback agreement also accounts for things like deposition testimony as well as other forms of communications including, but not limited to, text messages, pictures, photographs and electronic notes.
- Should confidential information be included?
Although a clawback agreement is primarily used to protect attorney-client and work product privileged information from being utilized by a receiving party, a clawback agreement may also have provisions to protect material that is confidential. Some disputes deal with sensitive business or personal information, which might warrant a “confidential,” “highly confidential” or “attorneys’ eyes only” designation. Should this be an issue in your litigation, clawback agreements may be used to provide a systematic approach for the return (or destruction) of such confidential information.
Even with a Clawback Agreement, Pitfalls Still Abound
Even with a clawback agreement in place, it is still important to develop and implement comprehensive and defensible search and review protocols for ESI. Such protocols ensure that privileged documents are properly recognized and designated, in effect demonstrating that reasonable steps were taken to avoid an inadvertent production and protect your client’s interests. Failing to adequately prepare for the inevitable intricacies of document production and privileged information withholding may land you in trouble. A recent example of the consequences of failing to properly manage a review and production occurred in the case of Irth Solutions, LLC v. Windstream Communications LLC, No. 2:16-CV-219, 2017 WL 3276021 (S.D. Ohio Aug. 2, 2017).
In Irth, the parties entered into a clawback agreement that lacked a defined standard of care required to preserve the right to claw back privileged material. During the discovery process, Windstream produced over 2,200 documents, which included 43 privileged documents. Several weeks later, Windstream again produced the same privileged documents. When Windstream attempted to claw back the documents, Irth refused, and Windstream filed a motion to compel.
In its analysis of the motion to compel, the Court looked specifically at Windstream’s process in conducting its privilege review and compared it with the parties’ clawback agreement, which did not contain a standard of care that comported with the requirements of Federal Rule of Evidence 502(b). With no defined “reasonable steps” in the agreement, the Court looked to other courts to see how they handled conflicts between private clawback agreements and the workings of Rule 502(b) and found three potential approaches. In the first approach, when there is no standard in a clawback agreement, the agreement automatically defaults to a “no-fault” standard and the return of privileged documents is required. In the second, inadvertent disclosure is not waived unless the disclosing party was completely reckless. In the third, Rule 502(b)’s reasonableness standard applies as the default standard.
The Court declined to follow the first approach, finding that it did not follow the foundation set forth by Rule 502, which requires reasonableness on the part of the producing party. Applying both the second and third approaches, the Court determined that Windstream's actions were not adequate to preserve the privilege. The Court found that Windstream's actions were not reasonable, and in fact were reckless, because Windstream produced the same set of privileged documents twice and the number of privileged documents produced was large for such a comparably small production.
Conclusion
As seen from the Irth case, it is very important to make sure your clawback agreement protects you from inadvertent disclosure. If you choose not to go with a 502(d) order, you should, at a minimum, have your clawback agreement entered as a court order. An effective, ironclad clawback agreement can ultimately save you time and expense should there be any discovery disagreement down the road.
DISCLAIMER: The information contained in this blog is not intended as legal advice or as an opinion on specific facts. For more information about these issues, please contact the author(s) of this blog or your existing LitSmart contact. The invitation to contact the author is not to be construed as a solicitation for legal work. Any new attorney/client relationship will be confirmed in writing. | https://www.ktlitsmart.com/blog/my-my-what-big-claws-you-have-navigating-pitfalls-drafting-clawback-agreements |
This invention relates generally to local area networks (LANs) of computers and, more particularly, to multiple LANs that are interconnected by bridges. A computer network is simply a collection of autonomous computers connected together to permit sharing of hardware and software resources, and to increase overall reliability. The qualifying term "local area" is usually applied to computer networks in which the computers are located in a single building or in nearby buildings, such as on a college campus or at a single corporate site. When the computers are further apart, the terms "wide area network" or "long haul network" are used, but the distinction is one of degree and the definitions sometimes overlap.
A bridge is a device that is connected to at least two LANs and serves to pass message frames between LANs, such that a source station on one LAN can transmit data to a destination station on another LAN, without concern for the location of the destination. Bridges are useful and necessary network components, principally because the total number of stations on a single LAN is limited. Bridges can be implemented to operate at a selected layer of protocol of the network. A detailed knowledge of network architecture is not needed for an understanding of this invention, but a brief description follows by way of further background.
As computer networks have developed, various approaches have been used in the choice of communication medium, network topology, message format, protocols for channel access, and so forth. Some of these approaches have emerged as de facto standards, but there is still no single standard for network communication. However, a model for network architectures has been proposed and widely accepted. It is known as the International Standards Organization (ISO) Open Systems Interconnection (OSI) reference model. The OSI reference model is not itself a network architecture. Rather it specifies a hierarchy of protocol layers and defines the function of each layer in the network. Each layer in one computer of the network carries on a conversation with the corresponding layer in another computer with which communication is taking place, in accordance with a protocol defining the rules of this communication. In reality, information is transferred down from layer to layer in one computer, then through the channel medium and back up the successive layers of the other computer. However, for purposes of design of the various layers and understanding their functions, it is easier to consider each of the layers as communicating with its counterpart at the same level, in a "horizontal" direction.
The lowest layer defined by the OSI model is called the physical layer, and is concerned with transmitting raw data bits over the communication channel, and making sure that the data bits are received without error. Design of the physical layer involves issues of electrical, mechanical or optical engineering, depending on the medium used for the communication channel. The layer next to the physical layer is called the data link layer. The main task of the data link layer is to transform the physical layer, which interfaces directly with the channel medium, into a communication link that appears error-free to the next layer above, known as the network layer. The data link layer performs such functions as structuring data into packets or frames, and attaching control information to the packets or frames, such as checksums for error detection, and packet numbers.
Although the data link layer is primarily independent of the nature of the physical transmission medium, certain aspects of the data link layer function are more dependent on the transmission medium. For this reason, the data link layer in some network architectures is divided into two sublayers: a logical link control sublayer, which performs all medium-independent functions of the data link layer, and a media access control (MAC) layer. This layer, or sublayer, determines which station should get access to the communication channel when there are conflicting requests for access. The functions of the MAC layer are more likely to be dependent on the nature of the transmission medium.
Bridges may be designed to operate in the MAC sublayer. Further details may be found in "MAC Bridges," P802.1D/D6, Sept. 1988, a draft publication of IEEE Project 802 on Local and Metropolitan Area Network Standards.
The basic function of a bridge is to listen "promiscuously," i.e. to all message traffic on all LANs to which it is connected, and to forward each message it hears onto LANs other than the one from which the message was heard. Bridges also maintain a database of station locations, derived from the content of the messages being forwarded. Bridges are connected to LANs by paths known as "links." After a bridge has been in operation for some time, it can associate practically every station with a particular link connecting the bridge to a LAN, and can then forward messages in a more efficient manner, transmitting only over the appropriate link. The bridge can also recognize a message that does not need to be forwarded, because the source and destination stations are both reached through the same link. Except for its function of "learning" station locations, or at least station directions, the bridge operates basically as a message repeater.
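The learning-and-forwarding behavior just described can be summarized in a short sketch. The following Python is illustrative only; the class name, method signature, and link labels are hypothetical simplifications for exposition, not the patent's claimed implementation:

```python
# Minimal sketch of a learning bridge (illustrative only).
class LearningBridge:
    def __init__(self):
        self.station_to_link = {}  # learned table: station address -> link

    def handle_frame(self, src, dst, in_link, all_links):
        # Learn: the source station is reachable through the arrival link.
        self.station_to_link[src] = in_link
        out = self.station_to_link.get(dst)
        if out == in_link:
            return []  # source and destination reached through the same link
        if out is not None:
            return [out]  # known destination: forward on that link only
        # Unknown destination: forward on every link except the arrival link.
        return [link for link in all_links if link != in_link]

bridge = LearningBridge()
links = ["link1", "link2", "link3"]
print(bridge.handle_frame("A", "B", "link1", links))  # ['link2', 'link3'] (flood)
print(bridge.handle_frame("B", "A", "link2", links))  # ['link1'] (learned)
```

After the first frame, the bridge has associated station A with link1, so the reply to A is forwarded on that link alone rather than flooded.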
As network topologies become more complex, with large numbers of LANs, and multiple bridges interconnecting them, operational difficulties can ensue if all possible LAN bridging connections are permitted. In particular, if several LANs are connected by bridges to form a closed loop, a message may be circulated back to the LAN from which it was originally transmitted, and multiple copies of the same message will be generated. In the worst case, messages will be duplicated to such a degree that the networks will be effectively clogged with these messages and unable to operate at all.
To prevent the formation of closed loops in bridged networks, IEEE draft publication P802.1D, referred to above, proposes a standard for a spanning tree algorithm that will connect the bridged network into a tree configuration, containing no closed loops, and spanning the entire network configuration. The spanning tree algorithm is executed periodically by the bridges on the interconnected network, to ensure that the tree structure is maintained, even if the physical configuration of the network changes. Basically, the bridges execute the spanning tree algorithm by sending special messages to each other to establish the identity of a "root" bridge. The root bridge is selected, for convenience, as the one with the smallest numerical identification. The algorithm determines which links of the bridges are to be closed and which are to be open, i.e. disabled, in configuring the tree structure. One more piece of terminology is needed to understand how the algorithm operates. Each LAN has a "designated" link, which means that one of the links connectable to the LAN is designated to carry traffic toward and away from the root bridge. The basis for this decision is similar to the basis for selecting the root bridge. The designated link is the one providing the least costly (shortest) path to the root bridge, with numerical bridge identification being used as a tie-breaker. Once the designated links are identified, the algorithm chooses two types of links to be activated or closed: first, for each LAN its designated link is chosen, and second, for each bridge a link that forms the "best path" to the root bridge is chosen, i.e. a link through which the bridge received a message giving the identity of the root bridge. All other links are opened. As will become clearer from an illustration in the following more detailed description, the algorithm results in interconnection of the LANs and bridges in a tree structure, i.e. one having no closed loops.
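The two selection rules lend themselves to a brief illustration. This is simplified Python for exposition, assuming the path costs and identifiers have already been learned through the message exchange described above; the example values are hypothetical and this is not the 802.1D protocol itself:

```python
# Illustrative sketch of the spanning tree selection rules (not 802.1D itself).

def elect_root(bridge_ids):
    """The root bridge is the one with the smallest numerical identification."""
    return min(bridge_ids)

def designated_bridge(candidates):
    """Pick a LAN's designated bridge from (path_cost_to_root, bridge_id) pairs.

    The designated link is the one providing the least costly path to the
    root, with numerical bridge identification as the tie-breaker; Python's
    tuple comparison applies exactly that ordering.
    """
    return min(candidates)

bridges = [7, 3, 12, 9]
print(elect_root(bridges))  # 3 becomes the root bridge

# Bridges attached to one LAN, as (cost to root, bridge ID) pairs:
lan_candidates = [(2, 9), (2, 7), (3, 12)]
print(designated_bridge(lan_candidates))  # (2, 7): least cost, then lowest ID
```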
A disadvantage of the spanning tree configuration defined by the algorithm is that it fails to utilize redundant message paths that may be available, but are disabled, in the interconnected network. All message traffic is forced to follow a path through the tree structure. This path may, in some cases, be a long and tortuous one, even though a more direct path may be available through a point-to-point cross-link between two bridges, but outside the spanning tree structure. The reason that the more direct path is not used is that it may violate the rule against closed loops. Yet it will be apparent that some point-to-point cross-links between bridges represent useful message "shortcuts," which, if properly used, would increase the overall efficiency of the network. The spanning tree algorithm provides a simple solution to the problem of avoiding closed-loop message paths in interconnected networks, but the price for this convenience is a lower than optimum usage of the communication paths linking the interconnected LANs.
Hart U.S. Pat. No. 4,811,337 proposed a limited solution to this difficulty, but failed to recognize a more general solution. The Hart patent suggests that paths outside the spanning tree can be used to exchange message frames, in what the inventor refers to as distributed load sharing (DLS). As described in the patent, and also defined by the claims, the Hart invention has specific limitations. First, neither of the two bridges interfacing to the DLS or cross-link path may be the root bridge. Second, message frames transferred over the DLS path must be sent between stations that are further away from the root bridge than either bridge associated with the cross-link, or else the stations using the cross-link path must be connected directly to the local LANs of the cross-linked bridges.
The solution proposed by Hart not only limits the configurations that may utilize alternate communication paths in an interconnected network, but requires a complex set of rules to determine when a cross-link path may be formed. Clearly, a simpler approach would be desirable, and the present invention provides one.
| |
The cost of not using renewable energy
A clever new study [PDF] from the World Future Council attempts to do something I haven’t seen before: quantify the cost of not using renewables.
The idea is pretty simple. When we use finite fossil fuels to generate energy, rather than the inexhaustible, renewable alternatives, we make those fossil fuels unavailable for non-energetic uses (think petrochemicals) in the future. In other words, when we burn fossil fuels for energy, we are needlessly destroying valuable industrial capital stock.
You can read the paper for more on methodology and assumptions. The paper uses current market values for fossil fuels rather than attempting to predict future prices, so the estimates are likely conservative.
Here’s the conclusion:
Protecting the use of increasingly valuable fossil raw materials for the future is possible by substituting these materials with renewables. Every day that this is delayed and fossil raw materials are consumed as one-time energy creates a future usage loss of between 8.8 and 9.3 billion US Dollars. Not just the current cost of various renewable energies, but also the costs of not using them need to be taken into account. [my emphasis]
Got that? Every day we use fossil fuels for energy, we steal $9 billion from future people who will need those fossil fuels for non-substitutable industrial uses.
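For a sense of scale, annualizing the study's range is simple arithmetic; the multiplication below is mine, not the report's:

```python
# Annualize the report's estimated daily loss (back-of-the-envelope only).
low, high = 8.8e9, 9.3e9        # USD per day, per the World Future Council study
days_per_year = 365

annual_low = low * days_per_year    # ~3.2 trillion USD
annual_high = high * days_per_year  # ~3.4 trillion USD
print(f"~${annual_low / 1e12:.1f}-{annual_high / 1e12:.1f} trillion per year")
```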
The exact numbers here are, like numbers in all economic modeling, probably going to turn out to be wrong. My guess is they are on the low end, but there are so many assumptions required that, really, who knows.
The paper does highlight an important conceptual point, though. Discussion about shifting to clean energy (or regenerative agriculture, or sustainable urbanism, etc.) tends to be dominated by the costs of changing. But the choice is not costs or no costs: the status quo carries costs too. Every day we stick with the status quo, we are increasing the costs our descendants will pay for the inevitable, unavoidable transitions to more sustainable systems. (Relatedly: the cost of reducing carbon emissions enough to avoid catastrophic warming rises every year.)
The question is not costs or no costs, but about the balance of costs and benefits, and perhaps more importantly, who pays the costs and who receives the benefits. Right now we are imposing huge and growing costs on our descendants in exchange for temporary, unsustainable benefits today. That is the farthest thing from "fiscal conservatism."
| |
It’s being expected that by 2027, the Power Quality Equipment market cap will hit USD 22.02 billion at a CAGR growth of about 7.26%.
The global power quality equipment market is anticipated to reach a value of USD 59.5 billion by 2028. The rising demand for electronic product safety systems, the growth of renewable energy projects, non-uniform power quality and network stability problems, and the standardization of power quality are the key factors driving the power quality equipment industry. The key driving force is rising power usage, especially in the power utility sectors of developing economies.
The growing industrial and manufacturing sector is expected to drive a global CAGR of above 6.5%, with strong demand for power quality equipment over the forecast period. Market growth is also supported by the rise in technological advancements in power quality equipment over the last few years. In addition, the market is expected to be further driven by strict government regulations on reactive power and penalties imposed on end users, coupled with energy efficiency policies in different nations around the globe. However, some of the factors that are expected to limit the power quality equipment market during the forecast period are the impact of COVID-19 and the high cost of installation and deployment of power quality equipment.
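As a quick sanity check on the headline numbers, the standard compound-annual-growth-rate relationship can back out the implied base-year market size. The sketch below is our own calculation, not the report's; it assumes the 2020 base year shown in the scope table below and treats 2020-2027 as seven compounding years:

```python
# Back out the implied 2020 market size from the report's 2027 forecast
# and CAGR (our own calculation, for illustration only).
def project(value_start: float, cagr: float, years: int) -> float:
    """Compound growth: V_end = V_start * (1 + r) ** years."""
    return value_start * (1.0 + cagr) ** years

v_2027 = 22.02   # USD billion, per the report
cagr = 0.0726    # 7.26%, per the report
years = 7        # 2020 -> 2027

implied_2020 = v_2027 / (1.0 + cagr) ** years
print(f"Implied 2020 market size: ~{implied_2020:.1f} billion USD")  # ~13.5
print(f"Round trip: {project(implied_2020, cagr, years):.2f}")       # 22.02
```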
Power Quality Equipment Market Scope
|Metrics||Details|
|Base Year||2020|
|Historic Data||2018-2019|
|Forecast Period||2021-2027|
|Study Period||2018-2027|
|Forecast Unit||Value (USD)|
|Revenue forecast in 2027||USD 22.02 billion|
|Growth Rate||CAGR of 7.26% during 2021-2027|
|Segment Covered||Phase, End-Users, Regions|
|Regions Covered||North America, Europe, Asia Pacific, South America, Middle East and Africa|
|Key Players Profiled||Hitachi ABB Power Grids Ltd, Siemens AG, EATON Corporation Plc, Emerson Electric Company, and Schneider Electric SE.|
Key Segments of the Global Power Quality Equipment Market
Equipment Overview, 2018-2028 (USD Billion, Million Units)
- Surge Arresters
- Surge Protection Devices
- Harmonic Filters
- Power Conditioning Units
- Power Distribution Units
- Uninterruptable Power Supply
- Synchronous Condenser
- Voltage Regulator
- Digital Static Transfer Switch
- Static VAR Compensator
- Solid Oxide Fuel Cells
- Isolation Transformers
- Power Quality Meters
Phase Overview, 2018-2028 (USD Billion, Million Units)
- Single Phase
- Three Phase
End User Overview, 2018-2028 (USD Billion, Million Units)
- Industrial & manufacturing
- Commercial
- Residential
- Transportation
- Utilities
Regional Overview, 2018-2028 (USD Billion, Million Units)
North America
- U.S.
- Canada
Europe
- UK
- Germany
- France
- Rest of Europe
Asia Pacific
- China
- Japan
- India
- Rest of Asia-Pacific
Middle East and Africa
- UAE
- South Africa
- Rest of Middle East and Africa
South America
- Brazil
- Rest of South America
Reasons for the study
- The purpose of the study is to give an exhaustive outlook of the global power quality equipment market.
- Power quality equipment is widely used for various end uses in the industrial, manufacturing, commercial, and residential sectors, owing to its excellent properties, and the market is expected to gain traction over the coming years
- With the growing transportation and utility industries, there is a rise in the demand for power quality equipment, which is further expected to have a positive impact on the overall market growth
What does the report include?
- The study on the global power quality equipment market includes qualitative factors such as drivers, restraints, and opportunities
- The study covers the competitive landscape of existing/prospective players in the power quality equipment industry and their strategic initiatives for the product development
- The study covers a qualitative and quantitative analysis of the market segmented based on equipment and phase. Moreover, the study provides similar information for the key geographies.
- Actual market sizes and forecasts have been provided for all the above-mentioned segments.
Who should buy this report?
- This study is suitable for industry participants and stakeholders in the global power quality equipment market. The report will benefit every stakeholder involved in the power quality equipment market.
- Managers within the power quality equipment industry looking to publish recent and forecasted statistics about the global power quality equipment market.
- Government organizations, regulatory authorities, policymakers, and organizations looking for investments in trends of global power quality equipment market.
- Analysts, researchers, educators, strategy managers, and academic institutions looking for insights into the market to determine future strategies.
The quality of power is typically assessed on the basis of a number of parameters. Power quality measurement equipment handles common parameters such as flicker, harmonics, current, voltage, voltage dips, transients, and power. Such systems are capable of managing numerous disturbances. The main reason for using devices that can accommodate several disturbance measurements in power quality analysis is that any disturbance can develop into a fault condition. It is impossible to determine in advance what kind of disturbance may lead to a fault; thus, instruments with multiple disturbance-detection capabilities are used.
For ease of access and increased measurement precision, power quality monitoring functions are integrated into the device itself. Power quality devices primarily aim to reduce the number of instruments in a system, so that a single unit or a few units can perform the entire measurement task. The scalability of power quality measuring equipment increases as the system expands, so several meters can be analyzed simultaneously. Such instruments are able to remotely configure and control meters in groups and to retrieve data automatically from the meters themselves. A condensed description of a disturbance can be obtained, and this output may be distributed across the grid for rectification operations.
Equipment Segment
Based on the equipment, the market is segmented into surge arresters, harmonic filters, surge protection devices, power conditioning units, uninterruptable power supplies, isolation transformers, synchronous condensers, power distribution units, voltage regulators, static VAR compensators, solid oxide fuel cells, digital static transfer switches, and power quality meters. The uninterruptable power supply segment held the largest share of the market in 2019, and it is forecast to grow at a significant CAGR of above 4% over the forecast period. The product is widely used because it provides primary protection for critical equipment against voltage interruptions.
End User Segment
In terms of the end user segment, the market is segmented into industrial and manufacturing, utilities, transportation, residential, and commercial. The commercial segment showed the fastest growth in 2019 and is anticipated to rise at a CAGR of above 4% over the forecast period. Remote diagnostics, remote data capture, and remote maintenance are increasingly being applied to electrical machinery and vehicles. Such measures have intensified the need for data centers, servers, and networking equipment, and with the increasing use of electronic infrastructure, the need for safety systems to protect this critical infrastructure has been rising. This fuels the demand for power quality equipment in the commercial sector.
The North America region held the largest share of the market in 2019, with a CAGR of more than 6.2%, and is anticipated to retain this position during the forecast years 2018-2028. Countries in the region, such as the U.S., Mexico, and Canada, are seeing a major rise in the use of power quality equipment. The need for power quality equipment is fueled by the increasing demand from end-use industries such as industrial and manufacturing, commercial, and residential. The U.S. is expected to be the region's fastest-growing economy, and its large investments in urban infrastructure and in data centers for the telecom sector are expected to generate strong demand for power quality equipment in the country over the coming years.
The Asia Pacific region is anticipated to register a high CAGR of over 7% during the forecast period. To fulfil its rising energy needs in an effective manner, the region is moving towards renewable energy on a wide scale. Some of the future growth markets in the power and utility sectors are China, Singapore, and India. Asia-Pacific has also drawn the largest share of foreign direct investment, attracting 45% of all capital investment worldwide. Population growth, urbanization, and modernization are driving increased investment in the infrastructure sector, especially in emerging economies like India and China.
I work with a wide range of clients from a variety of cultural backgrounds in person and remotely. Clients learn to reduce stress and work with their feelings in ways that lead to personal growth and development of effective interpersonal skills. Feelings that cause stress, anxiety, depression, excessive self-criticism, perfectionism and emotional flooding are processed so that these problem feelings can lead to growth, greater empathy for self and others as well as greater kindness towards others in their life. Additionally, clients who employ detachment as a means of self-protection develop emotional awareness and intelligence.
I provide short-term psychodynamic therapy that combines mindfulness, relaxation and deep breathing, body sensory awareness and insight within a safe nonjudgmental space. This can lead to the reduction of stress caused by emotional conflicts and current life realities. I offer advanced seminars on the treatment of trauma and supervision to psychotherapists.
Having an empathic based relationship with a client builds trust which is the foundation of effective therapy that can lead to transformations of the current life situation into a possibility for growth and more meaningful experiences as well as greater use of emotional intelligence. | https://www.psychologytoday.com/us/therapists/michael-a-kaufman-washington-dc/419835 |
Objective: To improve clinical recognition and provide research diagnostic criteria for three clinical syndromes associated with frontotemporal lobar degeneration.
Methods: Consensus criteria for the three prototypic syndromes (frontotemporal dementia, progressive nonfluent aphasia, and semantic dementia) were developed by members of an international workshop on frontotemporal lobar degeneration. These criteria build on earlier published clinical diagnostic guidelines for frontotemporal dementia produced by some of the workshop members.
Results: The consensus criteria specify core and supportive features for each of the three prototypic clinical syndromes and provide broad inclusion and exclusion criteria for the generic entity of frontotemporal lobar degeneration. The criteria are presented in lists, and operational definitions for features are provided in the text.
Conclusions: The criteria ought to provide the foundation for research work into the neuropsychology, neuropathology, genetics, molecular biology, and epidemiology of these important clinical disorders that account for a substantial proportion of cases of primary degenerative dementia occurring before the age of 65 years.
FTLD encompasses two major pathologic substrates which affect primarily the frontal or temporal cortex, in some patients asymmetrically. Three prototypic neurobehavioral syndromes can be produced by FTLD. Results of the consensus conference presented here describe these three behavioral conditions. The most common clinical manifestation of FTLD is a profound alteration in personality and social conduct, characterized by inertia and loss of volition or social disinhibition and distractibility, with relative preservation of memory function (FTD).2-5 There is emotional blunting and loss of insight. Behavior may be stereotyped and perseverative. Speech output is typically economical, leading ultimately to mutism, commensurate with the patient's amotivational state, although a press of speech may be present in some overactive, disinhibited patients. Cognitive deficits occur in the domains of attention, abstraction, planning, and problem solving, in keeping with a frontal "dysexecutive" syndrome, whereas primary tools of language, perception, and spatial functions are well preserved. Patients are not clinically amnesic. They are typically oriented and negotiate their local environment without becoming lost. Memory test performance, however, is typically inefficient, and impairments arise secondary to patients' frontal regulatory disturbances (inattention, lack of active strategies for learning and retrieval) rather than to a primary amnesia. Executive deficits are typically more evident in inert, avolitional patients than in overactive, disinhibited patients, although even in the latter, abnormalities can be elicited on tests of selective attention.
Two other prototypic clinical syndromes occur in FTLD: progressive nonfluent aphasia (PA)5-9 and semantic dementia (SD).5,10,11 PA is a disorder of expressive language, characterized by effortful speech production, phonologic and grammatical errors, and word retrieval difficulties. Difficulties in reading and writing also occur. Understanding of word meaning is relatively well preserved. The disorder of language occurs in the absence of impairment in other cognitive domains, although behavioral changes of FTD may emerge late in the disease course. In SD a severe naming and word comprehension impairment occurs in the context of fluent, effortless, and grammatical speech output; relative preservation of repetition; and the ability to read aloud and write orthographically regular words. Also there is an inability to recognize the meaning of visual percepts (associative agnosia). This loss of meaning for both verbal and nonverbal concepts (semantics) contrasts with the preservation of visuospatial skills and day-to-day memory.
The generic term FTLD refers to the circumscribed progressive degeneration of the frontotemporal lobes. The associated clinical syndromes are determined by the distribution of the pathology. In FTD there is prominent bilateral and usually symmetric involvement of the frontal lobes. In PA, atrophy is asymmetric, involving chiefly the left frontotemporal lobes. In SD, atrophy is typically bilateral and is most marked in the anterior temporal neocortex, with inferior and middle temporal gyri being predominantly affected. Asymmetries in the involvement of the left and right temporal lobes in SD mirror the relative severity of impairment for verbal and visual concepts (word meaning versus object recognition). Evidence that the different clinical manifestations may occur within the same family and that there may be an overlap in symptom pattern over the course of disease5 reinforces the link between the syndromes. Moreover, the distinct clinical syndromes are associated with the same underlying histopathologies. There are two main histologic types: prominent microvacuolar change without specific histologic features (frontal lobe degeneration type) or severe astrocytic gliosis with or without ballooned cells and inclusion bodies (Pick type).1 The disease etiology is not known but it has a high familial incidence and is likely to be under genetic influence. Molecular studies have shown mutations on chromosome 1712,13 or linkage to chromosome 314 in some families.
The clinical syndromes have a predominantly presenile onset, unlike AD and vascular dementia, which are more common in the elderly. The severe amnesia and visuospatial impairment and myoclonus characteristic of AD are not features of FTD, PA, and SD. Although EEGs show progressive slowing of waveforms in AD, the standard EEG is strikingly normal during the course of FTD, PA, and SD. Functional imaging using SPECT and PET reveal characteristic biparietal posterior abnormalities in the initial stages of AD, whereas in the clinical syndromes of FTLD the salient abnormality lies in the anterior hemispheres.
The course of FTD, PA, and SD is one of gradual evolution without the occurrence of ictal events, which are more characteristic of vascular dementia. The "bradyphrenia" of subcortical vascular disease is not a feature of the clinical syndromes of FTLD. Indeed, in FTLD, although striatal signs may develop late in the disease course, in the early and middle stages neurologic signs are absent or confined to the presence of primitive reflexes. Patients' physical well-being contrasts with the wealth of neurologic symptoms and signs common in vascular dementia. Although MRI frequently discloses extensive lesions in subcortical white matter in vascular dementia, this is not a pronounced feature of FTD, PA, or SD.
There are comprehensive descriptions in the literature of the clinical features and neuroradiologic manifestations of FTD, PA, and SD1-32 that enable the general and nonspecialist reader to appreciate the nature and historic evolution of the three syndromes. The types of underlying pathologic change have also been described extensively1,5,33-41 and an empiric nosologic taxonomy proposed prior to ultimate molecular biological definition. The purpose of this article is to present formalized diagnostic criteria for FTD, PA, and SD to enable researchers to perform further work into the neuropsychology, neuropathology, genetics, molecular biology, and epidemiology of these disorders. It is anticipated that usage in different fields of inquiry will lead to modification and improvements in the utility of these clinical criteria.
Criteria. The clinical criteria are set out in lists 1 through 4. The criteria for each of the three major clinical syndromes are divided into sections. The clinical profile statement together with the core clinical inclusion and exclusion features provide the necessary foundation for diagnosis. Additional clinical features, neuropsychological investigation, and brain imaging support the clinical diagnosis. Operational definitions of specific features are outlined later.
Clinical profile. This statement (seen in lists 1 through 3) summarizes the neurobehavioral profile necessary to fulfill criteria for diagnosis.
Core diagnostic features. These are features (see lists 1 through 3) integral to the clinical syndrome. All features must be present to fulfill the criteria for diagnosis.
Supportive diagnostic features. Clinical. These are features (see lists 1 through 3) that are not present in all patients, or they may be noted only during one phase of the disease. They are therefore not necessary conditions for diagnosis. Supportive features are characteristic, often with high diagnostic specificity, and their presence adds substantial weight to the clinical diagnosis. The diagnosis becomes more likely when more supportive features are present.
Physical. In each of the clinical syndromes physical signs are few in contrast to the prominent mental changes. Parkinsonian signs typically emerge only during late disease. The physical features outlined should be regarded as "supportive" rather than as necessary conditions for diagnosis.
Investigations. Formal neuropsychological assessment, EEG, and brain imaging each can provide support for and strengthen the clinical diagnosis. Such investigatory techniques are not available universally, and ought not to be considered a prerequisite for diagnosis. When neuropsychological assessment is performed, the profile of deficits must demonstrate disproportionate executive dysfunction in FTD or disproportionate language/semantic breakdown in PA and SD. With regard to brain imaging, the patterns of abnormality are characteristic, but not seen invariably. For example, prominent atrophy of the temporal lobes is well visualized by high-resolution MRI, but may be undetected by CT. Failure to demonstrate the prototypic appearances on imaging need not result in diagnostic exclusion.
Supportive features common to each of the clinical syndromes. These features (see list 4) support but are not a necessary condition for FTLD. Onset of disease is most commonly before the age of 65 years, although rare examples of onset in the very elderly have been reported. A positive family history of a similar disorder in a first-degree relative has been reported2,4 in as many as 50% of patients: Some families have shown mutations on chromosome 17 or linkage to chromosome 3. Motor neuron disease is a recognized albeit uncommon accompaniment to the clinical syndromes of lobar degeneration.42-47 The development of motor neuron disease in patients presenting with a progressive behavioral or language disorder would strongly support a clinical diagnosis of FTD or PA respectively.
Exclusion features common to each clinical syndrome. Clinical. All features (see list 4) must be absent. Early severe amnesia, early spatial disorientation, logoclonic speech with loss of train of thought, and myoclonus are features designed to exclude AD.
Investigations. All features should be absent (when the relevant information is available).
Relative diagnostic exclusion features. These are features (see list 4) that caution against but do not firmly exclude a diagnosis of FTLD. A history of alcohol abuse raises the possibility of an alcohol-related basis for a frontal lobe syndrome. However, excessive alcohol intake may also occur in FTD patients as a secondary manifestation of social disinhibition or hyperoral tendencies. The presence of vascular risk factors such as hypertension ought to alert investigators to a possible vascular etiology. Nevertheless, such risk factors are common in the general population and may be present coincidentally in some patients with FTLD, particularly in those of more advanced age.
Definitions of clinical features. This information assists in the use of the diagnostic lists.
Frontotemporal dementia. See list 1.
Core features. Insidious onset and gradual progression. There should be no evidence of an acute medical or traumatic event precipitating symptoms. Evidence for a gradually progressive course should be based on historic evidence of altered functional capacity (e.g., inability to work) over a period of at least 6 months, and may be supported by a decline in neuropsychological test performance. The degree of anticipated change is not specified because it is highly variable. In some patients change is dramatic over a 12-month period, whereas in others it is manifest only over a period of several years. Dramatic social and domestic events leading to perturbations in the patient's behavior must be distinguished from ictal occurrences of a neurologic or psychological nature. Only the latter are grounds for exclusion.
Early decline in social interpersonal conduct. This refers to qualitative breaches of interpersonal etiquette that are incongruent with the patient's premorbid behavior. This includes decline in manners, social graces, and decorum (e.g., disinhibited speech and gestures, and violation of interpersonal space) as well as active antisocial and disinhibited verbal, physical, and sexual behavior (e.g., criminal acts, incontinence, sexual exposure, tactlessness, and offensiveness). "Early" for this and other features implies that the abnormality should be present at initial presentation of the patient.
Early impaired regulation of personal conduct. This refers to departures from customary behavior of a quantitative type, ranging from passivity, inertia, and inactivity to overactivity, pacing, and wandering; and increased talking, laughing, singing, sexuality, and aggression.
Early emotional blunting. This refers to an inappropriate emotional shallowness with unconcern and a loss of emotional warmth, empathy, and sympathy, and an indifference to others.
Early loss of insight. This is defined as a lack of awareness of mental symptoms, evidenced by frank denial of symptoms or unconcern about the social, occupational, and financial consequences of mental failure.
Supportive features: behavioral disorder. Decline in personal hygiene and grooming. The caregivers' accounts of failure to wash, bathe, groom, apply makeup, and dress appropriately as before are reinforced by clinical observations of unkemptness, body odor, clothing stains, garish makeup, and inappropriate clothing combinations.
Mental rigidity and inflexibility. This refers to egocentricity and loss of mental adaptability, evidenced by reports of any one of the following: the patient has to have his or her own way, is unable to see another person's point of view, adheres to routine, and is unable to adapt to novel circumstances.
Distractibility and impersistence. These are reflected in failure to complete tasks and inappropriate digressions of attention to nonrelevant stimuli.
Hyperorality and dietary changes. This refers to overeating; bingeing; altered food preferences and food fads; excessive consumption of liquids, alcohol, and cigarettes; and the oral exploration of inanimate objects.
Perseverative and stereotyped behavior. This encompasses simple repetitive behaviors such as hand rubbing and clapping, counting aloud, tune humming, giggling, and dancing, as well as complex behavioral routines such as wandering a fixed route, collecting and hoarding objects, and rituals involving toileting and dressing.
Utilization behavior. This is stimulus-bound behavior48 during which patients grasp and repeatedly use objects in their visual field, despite the objects' irrelevance to the task at hand (e.g., patients repeatedly switch lights on and off, open and close doors, or continue eating if unlimited supplies of food are within reach). During clinical interview they may drink repeatedly from an empty cup or use scissors placed before them.
Speech and language. Altered speech output. There are two types of altered speech output: aspontaneity and economy of utterance, and press of speech. In aspontaneity and economy of utterance, the patient either does not initiate conversation or else output is limited to short phrases or stereotyped utterances. Responses to questions involve single-word replies or short, unelaborated phrases such as "don't know." Encouragements to amplify responses are unsuccessful. In press of speech, the patient speaks without interruption, monopolizing the conversational interchange.
Stereotypy of speech. These are single words, phrases, or entire themes that the patient produces repeatedly and habitually either spontaneously or in response to questions, replacing appropriate conversational discourse.
Echolalia. Echolalia refers to a repetition of the utterances of others, either completely or in part, sometimes with change of syntax (e.g., Interviewer: "Did you go out yesterday?" Patient: "Did I go out yesterday") when this is a substitute for and not a precursor to an appropriate elaborated response.
Perseveration. Perseveration is defined as a repetition of a patient's own responses. It is a word or phrase that, once uttered, intrudes into the patient's subsequent utterances. It differs from a stereotypy in that the repeated word or phrase is not habitual. Perseverations may occur spontaneously in conversation or are elicited in naming tasks (e.g., the patient names scissors as "scissors" and later names a clock as "scissors"). Perseveration includes palilalia, in which there is immediate repetition of a word, phrase, or sentence (e.g., "I went down town, down town, down town").
Mutism. This is an absence of speech or speech sounds. Patients may pass through a transitional phase of "virtual mutism," during which they generate no propositional speech, yet echolalic responses and some automatic speech (e.g., "three" when prompted with "one, two") may still be present.
Physical signs. Primitive reflexes. At least one of the following is present: grasp, snout, and sucking reflexes.
Incontinence. This refers to voiding of urine or feces without concern.
Neuropsychology. Significant impairment on frontal lobe tests, in the absence of severe amnesia, aphasia, or perceptuospatial disorder. Impairment on frontal lobe tests is defined operationally as failures (scores below the fifth percentile) on conventional tests of frontal lobe function (e.g., Wisconsin/Nelson card sort, Stroop, Trail Making) in which a qualitative pattern of performance typically associated with frontal lobe dysfunction is demonstrated: concreteness, poor set shifting, perseveration, failure to use information from one trial to guide subsequent responses, inability to inhibit overlearned responses, and poor organization and temporal sequencing. Abnormal scores that arise secondary to memory, language, or perceptuospatial disorder (such as forgetting instructions or the inability to recognize or locate test stimuli) would not be accepted as evidence of impairment on frontal lobe tests as operationally defined.
FTD patients may perform inefficiently on formal memory, language, perceptual, and spatial tests as a secondary consequence of deficits associated with frontal lobe dysfunction, such as inattention, poor self-monitoring and checking, and a lack of concern for accuracy. Poor test scores per se would not therefore exclude a diagnosis of FTD. An absence of severe amnesia, aphasia, or perceptuospatial disorder would be demonstrated by patchiness or inconsistency in performance (e.g., failure on easy items and pass on more difficult items) or demonstration that correct responses can be elicited by cuing or by directing the patient's attention to test stimuli.
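As a concrete illustration of the operational threshold, suppose a frontal lobe test has published age norms that are approximately normally distributed with a known mean and standard deviation; the fifth-percentile cutoff can then be checked as in the hypothetical sketch below. The norm figures are invented for the example, and, as the text stresses, a low score counts only when the qualitative pattern of performance is frontal and the failure is not secondary to memory, language, or perceptuospatial disorder.

```python
# Hypothetical illustration of the fifth-percentile rule. Norms are assumed
# approximately normal; the mean and SD below are invented for the example.
from statistics import NormalDist

def below_fifth_percentile(raw_score, norm_mean, norm_sd):
    """True if raw_score falls below the 5th percentile of the norms."""
    percentile = NormalDist(norm_mean, norm_sd).cdf(raw_score) * 100
    return percentile < 5.0

# A score of 18 against invented norms (mean 30, SD 6) sits near the
# 2.3rd percentile, so it would count as a failure on this test:
print(below_fifth_percentile(18, 30, 6))  # True
```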
Electroencephalography. Normal despite clinically evident dementia. Conventional EEG reveals frequencies within the normal range for the patient's age (minimal theta would be considered within normal limits). There are no features of focal epileptiform activity.
Brain imaging (structural or functional). Predominant frontal or anterior temporal abnormality. Atrophy, in the case of structural imaging (CT or MRI), and tracer uptake abnormality, in the case of functional brain imaging (PET or SPECT), is more marked in the frontal or anterior temporal lobes. Anterior hemisphere abnormalities may be bilaterally symmetric or asymmetric, affecting the left or right hemisphere disproportionately.
Progressive nonfluent aphasia. Definitions are for features (see list 2) that differ from or are in addition to those of FTD.
Core features. Nonfluent spontaneous speech with at least one of the following: agrammatism, phonemic paraphasias, and anomia. Nonfluent speech is defined as hesitant, effortful production, with reduced rate of output. Agrammatism refers to the omission or incorrect use of grammatical terms, including articles, prepositions, auxiliary verbs, inflexions, and derivations (e.g., "man went town"; "he comed yesterday").
Phonemic paraphasias are sound-based errors that include incorrect phoneme use (e.g., "gat" for "cat") and phoneme transposition (e.g., "aminal" for "animal"). The frequency of such errors should exceed that reasonably attributed to normal slips of the tongue.
Impaired repetition. The patient has a reduced repetition span (less than five digits forward; less than four monosyllabic words) or makes phonemic paraphasias when attempting to repeat polysyllabic words, word sequences, or short phrases.
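The span thresholds in this definition are explicit enough to state as a rule. A small sketch follows, with the caveat that detecting phonemic paraphasias on repetition is a clinical judgment, represented here only as a flag:

```python
# Operational thresholds from the text: fewer than five digits forward or
# fewer than four monosyllabic words indicates a reduced repetition span.
def impaired_repetition(digit_span_forward, word_span_monosyllabic,
                        paraphasias_on_polysyllables=False):
    return (digit_span_forward < 5
            or word_span_monosyllabic < 4
            or paraphasias_on_polysyllables)

print(impaired_repetition(4, 4))  # True: digit span below threshold
print(impaired_repetition(6, 5))  # False: both spans within normal limits
```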
Alexia and agraphia. Reading is nonfluent and effortful. Sound-based errors are produced (phonemic paralexias). Writing is effortful, contains spelling errors, and may show features of agrammatism.
Early preservation of word meaning (understanding preserved at single-word level). Patients should show an understanding of the nominal terms employed during a routine clinical examination. There should be a demonstrable discrepancy between word comprehension and naming: Patients should show understanding of words that they have difficulty retrieving.
Behavior. Early preservation of social skills. The language disorder should be the presenting symptom. At the time of onset of language disorder, patients should demonstrate preserved interpersonal and personal conduct.
Late behavioral changes in FTD. The changes outlined for FTD in conduct, if they occur, should not be presenting symptoms. There should be a clear, documented period of circumscribed language disorder before their development.
Neuropsychology. Nonfluent aphasia in the absence of severe amnesia or perceptuospatial disorder. There is difficulty in verbal expression. The language impairment may compromise performance on verbal memory tasks, so that poor scores on memory tests per se would not exclude a diagnosis of progressive aphasia. The presence of normal scores on one or more tests of visual memory, or a demonstration of normal rates of forgetting (i.e., no abnormal loss of information from immediate to delayed recall/recognition), would provide evidence for an absence of severe amnesia. An absence of a severe perceptual disorder would be demonstrated by accurate recognition of the line drawings employed during routine naming tasks, as determined by the patient's ability to produce a correct name, an approximation to the name, a functional description of the object's use, or a pertinent gesture or action pantomime. An absence of severe spatial disorder is demonstrated by normal performance on two or more spatial tasks, such as dot counting, line orientation, and drawing copying.
Semantic aphasia and associative agnosia (SD). Core features. Fluent, empty spontaneous speech. Speech production is effortless, without hesitancies, and the patient does not search for words. However, little information is conveyed, reflecting reduced use of precise nominal terms, and increased use of broad generic terms such as "thing." In the early stages of the disorder the "empty" nature of the speech output may become apparent only on successive interviews, which reveal a limited and repetitive conversational repertoire.
Loss of word meaning. There must be evidence of a disorder both of single-word comprehension and naming. A semantic deficit may be alerted by patients' remarks of the type, "What's a **? I don't know what that is." However, impairment may not be immediately apparent in conversation because the patient's effortless speech gives an impression of facility with language. Word comprehension impairment needs to be established by word definition and object-pointing tasks. A range of stimuli needs to be tested, both animate and inanimate, because meaning may be differentially affected for different material types.
Semantic paraphasias. Semantically related words replace correct nominal terms. Although these may include superordinate category substitutions (e.g., "animal" for camel), coordinate category errors (e.g., "dog" for elephant; "sock" for glove) must be present to meet operational criteria.
Prosopagnosia. This is impaired recognition of familiar face identity, not attributable to anomia. It is demonstrated by the patient's inability to provide defining or contextual information about faces of acquaintances or well-known celebrities.
Associative agnosia. This is an impairment of object identity, present both on visual and tactile presentation, that cannot be explained in terms of nominal difficulties. It is indicated historically by reports of misuse of objects or loss of knowledge of their function. It is demonstrated clinically by patients' reports of lack of recognition and by their inability to convey the use of an object either verbally or by action pantomime.
Preserved perceptual matching and drawing reproduction. There should be some demonstration that the patient's inability to recognize faces or objects does not arise at the level of elementary visual processing. Demonstration of an ability to match for identity (to identify identical object pairs, shapes, or letters) or to reproduce simple line drawings (e.g., of a clock face, a flower, or a simple abstract design) would provide the minimum requirement to fulfill criteria for diagnosis.
Preserved single-word repetition. The relative preservation of repetition skills is a central feature of the disorder. This typically includes the ability to repeat short phrases and sequences of words, although for such complex material, errors may emerge ultimately in advanced disease in the context of severe semantic loss. Demonstration of accurate repetition at least at the level of a single polysyllabic word is required to fulfill criteria for diagnosis.
Preserved ability to read aloud and to write to dictation orthographically regular words. The ability to read without comprehension is central to the disorder. However, reading performance is not entirely error free. Orthographically irregular words commonly elicit "surface dyslexic"-type errors (e.g., "pint" read to rhyme with "mint"; "glove" to rhyme with "rove" and "strove"). Patients should demonstrate the ability to read aloud accurately at least one-syllable words with regular spelling-to-sound correspondence. Writing of orthographically irregular words also typically reveals regularization errors (e.g., "caught" written as "cort"). Patients should demonstrate accurate writing to dictation at least of one-syllable orthographically regular words.
Supportive diagnostic features: speech and language. Press of speech. The patient speaks without interruption. This occurs in many but not all patients.
Idiosyncratic word usage. Vocabulary is used consistently but idiosyncratically. For example, the word "container" applied to small objects regardless of their facility to contain, and "on the side" applied to spatial locations, both near (e.g., on the table) and distant (e.g., in Australia). The semantic link between the adopted word or phrase and its referent may be tenuous or absent.
Absence of phonemic paraphasias in spontaneous speech. Sound-based errors are absent in conversational speech. The feature, although characteristic, is not included as a core feature because occasional phonemic errors may emerge in advanced disease in the context of a profound disorder of meaning.
Surface dyslexia/dysgraphia. The presence of surface dyslexic errors (described earlier) in reading and writing is a strong supportive feature.
Preserved calculation. The preserved ability of patients to calculate (to carry out accurately two-digit written addition and subtraction) is characteristic. It is not included as a core feature because calculation skills may break down in advanced disease as a consequence of failure to recognize the identity of Arabic numerals.
Behavior. Loss of sympathy and empathy. Patients are regarded by relatives as self-centered, lacking in emotional warmth, and lacking awareness of the needs of others.
Narrowed preoccupations. Patients are reported to have a narrowed range of interests that they pursue at the expense of routine daily activities (e.g., doing jigsaw puzzles all day and neglecting the housework).
Parsimony. Patients show an abnormal preoccupation with money or financial economy. This may be demonstrated by hoarding or constant counting of money, by patients' avoidance of spending their own money, by their purchase of the cheapest items regardless of quality, or by their attempts to restrain usage by other family members of household utilities (e.g., electricity and water).
Neuropsychology. Profound semantic loss, manifest in failure of word comprehension and naming, or face and object recognition; preserved phonology and syntax, and elementary perceptual processing, spatial skills, and day-to-day memorizing. Significant impairment should be demonstrated on word comprehension and naming or famous face identification or object recognition tasks. It should be shown that poor scores arise at a semantic level and not at a more elementary level of verbal or visual processing by demonstrating that the patient can repeat words that are not understood, can match for identity, and can copy drawings of objects. Patients should demonstrate normal performance on two or more spatial tasks, such as dot counting and line orientation. Performance on formal memory tests (e.g., involving remembering words or faces) is compromised by patients' semantic disorder. Nevertheless, patients retain the ability to remember autobiographically relevant day-to-day events (e.g., that a grandchild visits on Saturdays). Such preservation is striking clinically but may be difficult to capture on formal tests, which by definition are divorced from daily life.
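This profile again reduces to a conjunction, sketched below under the same caveat that booleans stand in for graded clinical findings; the task names are placeholders.

```python
# Sketch of the SD neuropsychological profile: significant semantic
# impairment, demonstrably not explained by elementary verbal or visual
# failure, plus normal performance on two or more spatial tasks.
def sd_neuropsych_profile(semantic_tasks_impaired,
                          repeats_nonunderstood_words,
                          matches_for_identity,
                          copies_object_drawings,
                          spatial_tasks_normal):
    elementary_processing_intact = (repeats_nonunderstood_words
                                    and matches_for_identity
                                    and copies_object_drawings)
    return (semantic_tasks_impaired
            and elementary_processing_intact
            and sum(spatial_tasks_normal) >= 2)

# e.g., dot counting and line orientation both within normal limits:
print(sd_neuropsych_profile(True, True, True, True, [True, True]))  # True
```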
Features common to each clinical syndrome. Diagnostic exclusion features. Early, severe amnesia. Symptoms of poor memory may be present and inefficient performance demonstrated on memory tests; these may occur secondary to executive or language impairments. However, memory failures are patchy and inconsistent, and patients do not present a picture of classic amnesia. Demonstration that a patient is disoriented in both time and place and shows a consistent, pervasive amnesia for salient contemporary autobiographic events would be incompatible with the clinical syndromes of FTLD.
Spatial disorientation. Patients with FTD who wander from a familiar environment may become lost because of failure of self-regulation of behavior (i.e., for reasons that are not primarily spatial). They do not exhibit spatial disorientation in familiar surroundings such as their own home. They negotiate their surroundings with ease, and localize objects in the environment with accurate reaching actions. Preservation of primary spatial skills is demonstrable even in patients with advanced disease by their capacity, for example, to align objects and to fold paper accurately. Evidence of poor spatial localization and disorientation in highly familiar surroundings would exclude clinical diagnoses of FTD, PA, or SD.
Logoclonic, festinant speech with rapid loss of train of thought. Logoclonia is defined as the effortless repetition of the final syllable of a word (e.g., Washington ... ton ... ton ... ton). Festinant speech refers to a rapid, effortless reiteration of individual phonemes. Logoclonic and festinant speech need to be distinguished from stuttering, which has an effortful quality and usually involves repetition of the first consonant or syllable. They need to be distinguished from palilalia, during which there is repetition of complete words and phrases. Loss of train of thought is a common feature of AD: patients begin sentences that they fail to complete, not only because of word-finding difficulty but also because of rapid forgetting of the intended proposition. A demonstration in conversation that patients are rapidly losing track would be contrary to a diagnosis of FTLD.
Conclusion. These criteria provide a mechanism for diagnosis and differentiation of dementias associated with FTLD. The core diagnostic criteria indicate the consensus of the group in identifying the key clinical aspects that differentiate FTD, PA, and SD.
This consensus paper is the result of an international collaborative workshop on FTD held in Toronto, Canada, April 1996. It is dedicated to the memory of D. Frank Benson, greatly admired for his contribution to the field of dementia and his inspiration to others.
Character change and disordered social conduct are the dominant features initially and throughout the disease course. Instrumental functions of perception, spatial skills, praxis, and memory are intact or relatively well preserved.
Disorder of expressive language is the dominant feature initially and throughout the disease course. Other aspects of cognition are intact or relatively well preserved.
Semantic disorder (impaired understanding of word meaning and/or object identity) is the dominant feature initially and throughout the disease course. Other aspects of cognition, including autobiographic memory, are intact or relatively well preserved.
Supported by The French Foundation and Baycrest Centre for Geriatric Care. Studies of FTD were supported in part by the Wellcome Trust and a National Institute on Aging Alzheimer's Disease Center grant (AG 10123).
Received March 27, 1998. Accepted in final form August 8, 1998.
The Lund and Manchester Groups. Consensus Statement. Clinical and neuropathological criteria for fronto-temporal dementia. J Neurol Neurosurg Psychiatry 1994;57:416-418.
Gustafson L. Frontal lobe degeneration of non-Alzheimer type. II. Clinical picture and differential diagnosis. Arch Gerontol Geriatr 1987;6:209-223.
Gustafson L. Clinical picture of frontal lobe degeneration of non-Alzheimer type. Dementia 1993;4:143-148.
Neary D, Snowden JS, Northen B, Goulding PJ. Dementia of frontal lobe type. J Neurol Neurosurg Psychiatry 1988;51:353-361.
Snowden JS, Neary D, Mann DMA. Fronto-temporal lobar degeneration: fronto-temporal dementia, progressive aphasia, semantic dementia. New York: Churchill Livingstone, 1996.
Mesulam M-M. Slowly progressive aphasia without generalized dementia. Ann Neurol 1982;11:592-598.
Delecluse F, Andersen AR, Waldemar G, et al. Cerebral blood flow in progressive aphasia without dementia. Brain 1990;113:1395-1404.
Weintraub S, Rubin NP, Mesulam M-M. Primary progressive aphasia: longitudinal course, neuropsychological profile and language features. Arch Neurol 1990;47:1329-1335.
Snowden JS, Neary D, Mann DMA, Goulding PJ, Testa HJ. Progressive language disorder due to lobar atrophy. Ann Neurol 1992;31:174-183.
Snowden JS, Goulding PJ, Neary D. Semantic dementia: a form of circumscribed atrophy. Behav Neurol 1989;2:167-182.
Hodges JR, Patterson K, Oxbury S, Funnell E. Semantic dementia. Progressive fluent aphasia with temporal lobe atrophy. Brain 1992;115:1783-1806.
Hutton M, Lendon CL, Rizzu P, et al. Association of missense and 5′-splice-site mutations in tau with the inherited dementia FTDP-17. Nature 1998;393:702-705.
Poorkaj P, Bird TD, Wijsman E, et al. Tau is a candidate gene for chromosome 17 frontotemporal dementia. Ann Neurol 1998;43:815-825.
Brown J, Ashworth A, Gydesen S, et al. Familial nonspecific dementia maps to chromosome 3. Hum Mol Genet 1995;4:1625-1628.
Gustafson L, Brun A, Ingvar DH. Presenile dementia: clinical symptoms, pathoanatomical findings and cerebral blood flow. In: Meyer JS, Lechner H, Reivich M, eds. Cerebral vascular disease. Amsterdam: Excerpta Medica, 1977:5-9.
Neary D, Snowden JS, Bowen DM, et al. Neuropsychological syndromes in presenile dementia due to cerebral atrophy. J Neurol Neurosurg Psychiatry 1986;49:163-174.
Neary D, Snowden JS, Shields RA, et al. Single photon emission tomography using 99mTc-HMPAO in the investigation of dementia. J Neurol Neurosurg Psychiatry 1987;50:1101-1109.
Jagust WJ, Reed BR, Seab JP, Kramer JH, Budinger TF. Clinical-physiologic correlates of Alzheimer's disease and frontal lobe dementia. Am J Physiol Imaging 1989;4:89-96.
Miller BL, Cummings JL, Villanueva-Meyer J, et al. Frontal lobe degeneration: clinical, neuropsychological and SPECT characteristics. Neurology 1991;41:1374-1382.
Gregory CA, Hodges JR. Dementia of frontal type and the focal lobar atrophies. Int Rev Psychiatry 1993;5:397-406.
Donoso A, Lillo R, Quiroz M, Rojas A. Demencias prefrontales: clinica y SPECT en seis casos. Rev Med Chil 1994;122:1408-1412.
Frisoni GB, Pizzolato G, Geroldi C, Rossato A, Bianchetti A, Trabucchi M. Dementia of the frontal type: neuropsychological and [99Tc]-HMPAO SPECT features. J Geriatr Psychiatry Neurol 1995;8:42-48.
Miller BL, Ikonte C, Ponton M, et al. A study of the Lund-Manchester research criteria for frontotemporal dementia: clinical and single-photon emission CT correlations. Neurology 1997;48:937-942.
Assal G, Favre C, Regli F. Aphasie dégénerative. Rev Neurol (Paris) 1985;141:245-247.
Chawluk JB, Mesulam M-M, Hurtig H, et al. Slowly progressive aphasia without generalized dementia: studies with positron emission tomography. Ann Neurol 1986;19:68-74.
Basso A, Capitani E, Laiacona M. Progressive language impairment without dementia: a case with isolated category specific semantic defect. J Neurol Neurosurg Psychiatry 1988;51:1201-1207.
Poeck K, Luzzatti C. Slowly progressive aphasia in three patients. The problem of accompanying neuropsychological deficit. Brain 1988;111:151-168.
Craenhals A, Raison-Van Ruymbeke AM, Rectem D, Seron X, Laterre EC. Is slowly progressive aphasia actually a new clinical entity? Aphasiology 1990;4:485-509.
Kempler D, Metter EJ, Riege WH, Jackson CA, Benson DF, Hanson WR. Slowly progressive aphasia: three cases with language, memory, CT and PET data. J Neurol Neurosurg Psychiatry 1990;53:987-993.
Tyrrell PJ, Warrington EK, Frackowiak RSJ, Rossor MN. Heterogeneity in progressive aphasia due to focal cortical atrophy. A clinical and PET study. Brain 1990;113:1321-1336.
Karbe H, Kertesz A, Polk M. Profiles of language impairment in primary progressive aphasia. Arch Neurol 1993;50:193-201.
Grossman M, Mickanin J, Onishi K, et al. Progressive nonfluent aphasia: language, cognitive and PET measures contrasted with probable Alzheimer's disease. J Cogn Neurosci 1996;8:135-154.
Brun A. Frontal lobe degeneration of non-Alzheimer type. I. Neuropathology. Arch Gerontol Geriatr 1987;6:193-208.
Brun A. Frontal lobe degeneration of non-Alzheimer type revisited. Dementia 1993;4:126-131.
Knopman DS, Mastri AR, Frey WH, Sung JH, Rustan T. Dementia lacking distinctive histologic features: a common non-Alzheimer degenerative dementia. Neurology 1990;40:251-256.
Mann DMA, South PW, Snowden JS, Neary D. Dementia of frontal lobe type; neuropathology and immunohistochemistry. J Neurol Neurosurg Psychiatry 1993;56:605-614.
Mann DMA, South PW. The topographic distribution of brain atrophy in frontal lobe dementia. Acta Neuropathol 1993;85:334-340.
Kirshner HS, Tanridag O, Thurman L, Whetsell WO. Progressive aphasia without dementia: two cases with focal spongiform degeneration. Ann Neurol 1987;22:527-532.
Graff-Radford NR, Damasio AR, Hyman BT, et al. Progressive aphasia in a patient with Pick's disease: a neuropsychological, radiologic and anatomic study. Neurology 1990;40:620-626.
Neary D, Snowden JS, Mann DMA. The clinical pathological correlates of lobar atrophy. A review. Dementia 1993;4:154-159.
Kertesz A, Hudson L, Mackenzie IRA, Munoz DG. The pathology and nosology of primary progressive aphasia. Neurology 1994;44:2065-2072.
Brion S, Psimaras A, Chevalier JF, Plas J, Masse G, Jatteau O. L'Association maladie de Pick et sclérose latérale amyotrophique. Etude d'un cas anatomo-clinique et revue de la littérature. L'Encephale 1980;6:259-286.
Constantinidis J. Syndrome familial: association de maladie Pick et sclérose latérale amyotrophique. L'Encephale 1987;13:285-293.
Neary D, Snowden JS, Mann DMA, Northern B, Goulding PJ, Mcdermott N. Frontal lobe dementia and motor neuron disease. J Neurol Neurosurg Psychiatry 1990;53:23-32.
Ferrer I, Roig C, Espino A, Peiro G, Matias Guiu X. Dementia of frontal lobe type and motor neuron disease. A Golgi study of the frontal cortex. J Neurol Neurosurg Psychiatry 1991;54:932-934.
Sam M, Gutmann L, Schochet SS, Doshi H. Pick's disease: a case clinically resembling amyotrophic lateral sclerosis. Neurology 1991;41:1831-1833.
Caselli RJ, Windebank AJ, Petersen RC, et al. Rapidly progressive aphasic dementia and motor neuron disease. Ann Neurol 1993;33:200-207.
Lhermitte F. 'Utilization behavior' and its relation to lesions of the frontal lobes. Brain 1983;106:237-255.
Genetic factors are important in the pathogenesis of osteoporosis, but little is known about the genetic determinants of treatment response. Previous studies have shown that polymorphisms of the LRP5 gene are associated with bone mineral density (BMD), but the relationship between LRP5 polymorphisms and response to bisphosphonate treatment in osteoporosis has not been studied. In this study we investigated LRP5 polymorphisms in relation to treatment response in a group of 249 osteoporotic or osteopenic men who participated in a 24-month randomized, double-blind, placebo-controlled trial of risedronate treatment. BMD and biochemical markers of bone turnover were measured at baseline and after 6, 12, and 24 months of follow-up. We analyzed two coding polymorphisms of LRP5, which have previously been associated with BMD, V667M (rs4988321) and A1330V (rs3736228), and found a significant association between the A1330V polymorphism and hip BMD at baseline. Subjects with the 1330 Val/Val genotype had 8.4% higher total-hip BMD compared with the other genotype groups (P = 0.009), and similar associations were observed at the femoral neck (P = 0.01) and trochanter (P = 0.002). There was no association between A1330V and spine BMD, however, or between the V667M polymorphism and BMD at any site. The difference in hip BMD between A1330V genotype groups remained significant throughout the study, but there was no evidence of a genotype-treatment interaction in either risedronate- or placebo-treated patients. In conclusion, the LRP5 A1330V polymorphism is associated with hip BMD in osteoporotic men, but allelic variations in LRP5 do not appear to be associated with response to bisphosphonate treatment.
9th Annual Edisto Beach Road Race
Starting location is in Ocean Ridge Resort near the Wyndham Recreation Center on Sea Cloud Circle. The course winds through Wyndham and side streets on Edisto Beach. The 5K Run/Walk will begin at 8:30 am and the 1 Mile Run/Walk will begin at 9:30 am.
AWARDS
5K Run: Male & Female 1st Place per age group, and 1st, 2nd and 3rd Overall
1 Mile Run/Walk: 1st, 2nd and 3rd Overall
AGE GROUPS MALE & FEMALE
14 and under, 15-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+
ENTRY FEE
Advance registration is open through March 15th; the fee is $25 (t-shirt included). We cannot accept online registrations after March 15th, but you can register in person between March 16th and 20th at the Edisto Chamber office, located at 42 Station Court, Edisto Island, SC 29438. Registration hours are 10 am to 12 noon and 2:00 – 4:00 pm. Late registration, available through Race Day, is $30 (while t-shirt supplies last) or $20 without a shirt. The office telephone number is 843-869-3867.
The 2018 Call for Participation for the 106th Annual Conference—taking place February 21–24 in Los Angeles—describes many of next year’s sessions. CAA and the session chairs invite your participation: please follow the instructions in the booklet to submit a proposal for a paper or presentation. This publication also includes a call for Poster Session proposals. In addition, bear in mind that some sessions have already been fully formed at the time of acceptance.
Below are some of the sessions that could include eighteenth-century submissions, including HECAA’s panel on ‘Imitation, Influence, and Invention in the Enlightenment’, chaired by Heidi Strobel and Amber Ludwig.
◊ ◊ ◊ ◊ ◊
Historians of Netherlandish Art (HNA)
All in the Family: Northern European Artistic Dynasties, 1350–1750
Chair: Catharine Ingersoll (Virginia Military Institute), [email protected]
In early modern northern Europe, many artists followed fathers, uncles, brothers, sisters, and spouses into the family business of art-making. From the Netherlandish brothers Herman, Pol, and Jean de Limbourg, to the Vischer family of sculptors in Nuremberg, to the Teniers dynasty of Flemish painters, artists all over the North learned from and collaborated with family members over the course of their careers. For a young artist, family associations helped ease entry into the profession and art market and provided a built-in network of contacts and commissions. However, these connections could also constrict innovation when artists were expected to conform to models set by preceding generations. This session welcomes papers that deal with questions of artists’ familial relationships, in all their rich variety of forms. Some issues that may be explored in the panel include: Did artists seek to differentiate themselves from their pasts, or integrate themselves into a dynastic narrative? What kinds of dynamics were at play when family members collaborated on projects or commissions? How did familial ateliers organize themselves? In what ways were family traditions valued in the marketplace? To what extent did working in a family ‘style’ (evident for example in the work of Pieter Brueghel the Younger) benefit or hinder artists? Where in specific artworks do we see artistic debts to previous generations or deliberate breaks with the past?
◊ ◊ ◊ ◊ ◊
Ariadne’s Thread: Understanding Eurasia through Textiles
Chair: Mariachiara Gasparini (Santa Clara University), [email protected]
Textile can be perceived as an indecipherable code included in the field of material and visual culture. It is not only a two-dimensional screen that reflects a known common imagery 'indigenized' in different geographic areas, but it also has a three-dimensional surface—created by the fibers interwoven in its structure—which follows an acquired technical grammar in the weaving process, and which could sometimes affect the 'two-dimensional' pattern register. Especially during the Middle Ages, the material and visual nature of textile enabled its transcultural circulation among Eurasian societies. Today, polychrome and monochrome fragments can disclose cultural and artistic similarities between centralized and provincial areas. A technical and stylistic analysis can indeed lead us through the comprehension of the universal aspect of this medium, which can easily be perceived as functional or aesthetic, but rarely as a medium of human interaction and sharing. The universal aspect of textile challenges the idea of stable and fixed cultural boundaries, an idea that arose especially with the concept of the modern nation-state. This panel aims to clarify similar or identical artistic developments among ancient societies of Asia and Europe. Ariadne's thread will lead us through the transcultural entanglements of a maze currently recognized in the academic world as an ancient form of 'globalization', which might rather be reconsidered as a universal form of kinship. Papers may investigate case studies in specific visual art and material culture topics and archeological sites or take a broader, comparative approach. Particularly welcome are papers from the digital humanities.
◊ ◊ ◊ ◊ ◊
Art History as Anti-Oppression Work
Chair: Christine Y. Hahn (Kalamazoo College), [email protected]
What would an anti-racist, anti-oppression art history curriculum in higher education look like and how might it be taught and implemented? Working from Iris Young’s five categories of oppression—exploitation, powerlessness, marginalization, cultural imperialism, and violence—how might art history be used as a liberatory methodology for dismantling these categories? More specifically, how can we use art history’s methodologies to address those “structural phenomena that immobilize or diminish a group”? This panel seeks papers from practitioners of art history who have used innovative approaches in the discipline as tools for addressing and dismantling structural oppression. Particularly of interest are examples of successful introductory survey courses in this regard, department-wide commitments to anti-oppression work that have driven curricular decisions, student activism through art history, and effective community collaborations.
◊ ◊ ◊ ◊ ◊
Art, Agency, and the Making of Identities at a Global Level, 1600–2000
Chairs: Noémie Etienne (Bern University), [email protected]; and Yaelle Biro (The Metropolitan Museum of Art), [email protected]
Circulation and imitation of cultural products are key factors in shaping the material world — as well as imagined identities. Many objects or techniques that came to be seen as local, authentic, and typical are in fact entangled in complex transnational narratives tied to a history of appropriation, imperialism, and the commercial phenomenon of supply and demand. In the seventeenth century, artists and craftspeople in Europe appropriated foreign techniques in the creation of porcelain, textiles, or lacquers that eventually shaped local European identities. During the nineteenth century, Western consumers looked for genuine goods produced outside of industry, and the demand of bourgeois tourism created a new market of authentic souvenirs and forgeries alike. Furthermore, the twentieth century saw the (re)emergence of local ‘schools’ of art and crafts as responses to political changes, anthropological research, and/or tourist demand. This panel will explore how technical knowledge, immaterial desires, and political agendas impacted the production and consumption of visual and material culture in different times and places. A new scrutiny of this back and forth between demanders and suppliers will allow us to map anew a multidirectional market for cultural goods in which the source countries could be positioned at the center. Papers could investigate transnational imitation and the definition of national identities; tourist art; the role of foreign investment in solidifying local identities; reproduction and authenticity in a commercial or institutional context; local responses to transnational demand; as well as the central role of the makers’ agency from the seventeenth to the twentieth century.
◊ ◊ ◊ ◊ ◊
Circumventing Censorship in Global Eighteenth-Century Visual Culture
Chairs: Lauren Kilroy-Ewbank (Pepperdine University), [email protected]; and Kristen Chiem (Pepperdine University), [email protected]
Today, we recognize many pervasive subjects and decorative motifs from the eighteenth century as lacking radical or subversive content. However, many of them emerged within inquisitorial atmospheres that accompanied political revolutions, colonial projects, the Enlightenment, and religious transformations. Censorship of artists and images occurred in many instances to maintain or advance dominant ideologies, yet there are also cases where it proved ineffectual. We seek papers that highlight these less successful or futile cases of censorship in global eighteenth-century visual culture, especially of Asia, Africa, and the Americas. Specifically, we are interested in how artists resisted or subverted authoritative ideologies by crafting images that were thoroughly interwoven into the visual and social fabric so as to seem commonplace and unobjectionable. How did artists use innocuous images to implicitly critique power structures or subvert authority? In what ways did censorship that targeted texts or social practices shape visual culture more broadly? How did inquisitorial attempts unintentionally draw attention to the very ideas they aimed to suppress? This panel encourages a rethinking of imagery perceived as decorative, trivial, or benign and the impact of censorship in the eighteenth century.
◊ ◊ ◊ ◊ ◊
Museum Committee
Decolonizing Art Museums?
Chairs: Risham Majeed (Ithaca College), [email protected]; Elizabeth Rodini (Johns Hopkins University), [email protected]; and Celka Straughn (Spencer Museum of Art), [email protected]
The colonial history of museums is by now familiar, and institutional critiques of and within ethnographic and anthropological collections are fairly widespread. Indeed, many of the objects in these collections have migrated to art museums as a result of postcolonial thinking. But what about art museums? How do these institutions, their collections, and their practices continue to extend colonial outlooks for Western and non-Western art, perhaps silently, and what tools are being used to disrupt these perceptions both in the United States and abroad? This panel explores what decolonization means for art museum practices and the ways decolonizing approaches can move the museum field toward greater inclusion, broader scholarly perspectives, and opportunities to redress structural inequities. Topics to address might include: detangling collection objects from colonial collecting practices; decentering the status quo across museum operations; reconsidering the relationship between contemporaneity and historicism; alternative modes of presentation (breaking received hierarchies and narratives); embracing varied understandings of objects, materials, catalogues, and archives; polyphony and pluralism in museum rhetoric; and an understanding of 'colonialism' that steps outside conventional definitions of this term. We invite papers that combine scholarship, practice, and activism, bringing together case studies with critical reflection on art museums to demonstrate what decolonized practices can and might look like and offer models for institutional change. Papers that explore diverse modes of practice within and outside the United States, that provide intersectional and interdisciplinary approaches, and/or that present alternative ways for people to use and reimagine art museums are especially welcome.
◊ ◊ ◊ ◊ ◊
Digital Surrogates: The Reproduction and (re)Presentation of Art and Cultural Heritage
Chairs: Sarah Victoria Turner (Paul Mellon Centre for Studies in British Art), [email protected]; and Thomas Scutt (Paul Mellon Centre for Studies in British Art), [email protected]
What new art historical perspectives and kinds of knowledge do three-dimensional visualizations of objects and spaces afford? What are the key possibilities or potential pitfalls to be aware of when generating new visualizations? How can visualizations extend and enhance the public function of museums by increasing accessibility and engagement? How do we connect these visualizations with new methodological insights about objects and their reproductions? Does the creation of digital surrogates result in a democratization of cultural history, or does it further distance researchers and the public from original objects? How does the production of these resources navigate the ‘threshold of originality,’ and to what extent can they be distinguished as original works? What are the most effective ways to share, publish, and circulate these visualizations? This panel seeks presentations and provocations exploring issues relating to the process of creating, collaborating on, publishing, and using 3D visualizations of art works, cultural heritage objects, and architectural spaces. It is chaired by members of the editorial team of British Art Studies (BAS), an online-only peer-reviewed journal that publishes new research on art and architecture. Approaching these issues from the perspective of art history, digital humanities, and cultural heritage, this panel will explore best practices in a growing area of digital art historical research from a range of perspectives.
◊ ◊ ◊ ◊ ◊
Eccentric Images in the Early Modern World
Chairs: Mark A. Meadow (University of California, Santa Barbara), [email protected]; and Marta Faust (University of California, Santa Barbara), [email protected]
Trompe l’oeil paintings, anamorphic portraits, anthropomorphic landscapes, pictorial stones, reversible heads, and composite figures are doubly eccentric. Often dismissed as curiosities and aberrations, they have been marginalized and de-centered within art history. Frequently, they demand that the viewer take unorthodox positions, looking at them from extreme angles from more than one physical location or shifting from one perceptual mode to another. Rather than trivializing such pictures as mere games, virtuosic trivia, and forms of entertainment, this session invites papers that explore how such eccentric images explore issues concerning perception, artifice, and both human and natural creativity. What different modes of artistic production and perception do they require? What questions do they pose about cognition, viewing experiences, and alternate subject positions? What questions do they raise about the role of viewers in constituting the work of art? How do images that seem to change before one’s eyes engage with period notions of paradox, volatility, and mutable forms? How do they establish conditions for a more self-aware beholder? We welcome submissions addressing any aspect of eccentric imagery, from any cultural perspective, in the long early modern period (ca. 1400–1800).
◊ ◊ ◊ ◊ ◊
Historicizing Loss in Early Modern Europe
Chair: Julia Vazquez (Columbia University), [email protected]
The history of art and architecture in Baroque Madrid is bookended by two major events: the fire that burned down the Pardo Palace in 1604 and the fire that burned down the Alcázar Palace in 1734. Resulting in the loss of dozens of paintings by Titian, Antonis Mor, and Velázquez, in addition to the buildings themselves, these events represented unprecedented moments of loss to the historical record of this period. Scholars who work in this field usually lament losses like these for their historiographic repercussions. This panel aims, instead, to resituate loss in its historical context. How can the loss of any one object transform the reception of others in their own historical period? How do patrons and artists respond to the destruction of objects? How are losses narrativized, and how do they transform existing narratives? When and under what circumstances does the destruction of existing artworks stimulate the production of new ones? Are objects ever recuperated or reconstituted, and if so, how? Although organized by a scholar of the Spanish Baroque, I invite scholars working in any period of early modern Europe to propose papers dealing with these or related questions.
◊ ◊ ◊ ◊ ◊
Hucksters or Connoisseurs?: The Role of Intermediary Agents in Art Economies
Chairs: Titia Hulst (Purchase College, The State University of New York), [email protected]; and Anne Helmreich (Texas Christian University), [email protected]
The roles of art dealers in the creation of art economies and the circulatory exchange of goods have attracted increasing attention of late. However, much work remains to be done to counter the long history of the hagiographic treatment of dealers, which owes a great deal to the fact that histories of dealers were largely authored by dealers themselves, eager to write themselves into the history of art. For this session, we seek to bring a critical and historical perspective to the role of intermediary agents in the primary and secondary markets. We seek papers that will examine dealers who mediated between the artist as producer and the consumer, whether conceived as an individual patron or broadly configured audiences. We also seek papers that identify strategies developed by these intermediary figures in response to changing social-historical as well as geographical conditions. Relatedly, what role did dealers play in the emergence of art history as a discipline and the construction of its narratives given the vested interest of these agents in knowledge formation and collection building? Since histories of art dealers have long been dominated by narratives drawn from the Western market, we are particularly interested in papers that examine the role of this figure in non-western art economies as well as topics that help us test and question standard models derived from the early modern and modern Western context. We encourage analysis of historically grounded strategies and practices, as opposed to anecdotal heroic narratives.
◊ ◊ ◊ ◊ ◊
Historians of Eighteenth-Century Art and Architecture (HECAA)
Imitation, Influence, and Invention in the Enlightenment
Chairs: Heidi A. Strobel (University of Evansville), [email protected]; and Amber Ludwig, (Independent Scholar), [email protected]
Much eighteenth-century artistic training and practice centered on the idea of copying. Sir Joshua Reynolds encouraged Royal Academy students to contemplate and quote the old masters to elevate their works; the Académie des Beaux-Arts sponsored the Prix de Rome to allow French painters and sculptors uninterrupted study of antiquity and Renaissance art and architecture. Exhibitions like John Boydell's Shakespeare Gallery relied, in part, on revenue from print sales to turn a profit, while artists like sculptor Anne Damer used prints to broaden the audience of her works. The purpose of this session is to interrogate the complicated relationship between imitation, influence, and invention and the ways in which value—educational, monetary, cultural, etc.—is assigned to artwork created after or influenced by another.
◊ ◊ ◊ ◊ ◊
Association of Research Institutes in Art History (ARIAH)
Material Culture and Art History: A State of the Field(s) Panel Discussion
Chair: Catharine Dann Roeber (Winterthur Museum), [email protected]
Over the past generation, art history has become increasingly more inclusive in the objects it takes as its focus of study. In tandem, some practitioners have turned to the term ‘material culture studies’ to describe their work. We are looking for short presentations (ten minutes) that can open out into a larger discussion among panelists, organizers, and attendees about conceptual frameworks and methodological approaches emerging from this ongoing nexus. Proposals are welcomed from educators, curators, designers, and artists. Rather than case studies, we would value more reflective perspectives.
◊ ◊ ◊ ◊ ◊
Materiality and Metaphor: The Uses of Gold in Asian Art
Chairs: Michelle C. Wang (Georgetown University), [email protected]; and Donna K. Strahan (Freer Gallery of Art and Arthur M. Sackler Gallery, Smithsonian Institution), [email protected]
Unique among Asian art materials, gold is both a color and an artistic medium. Embodying a host of contradictions, gold functioned as a marker of wealth and prestige and was minted into coins and cast into jewelry, yet it was also commonly used to embellish repairs made to utilitarian objects such as ceramics. Malleable and lustrous, gold furthermore was used as frequently on its own as it was in conjunction with other materials, including bronze, lacquer, and textile, and applied to paper as surface decoration. The conceptual associations of gold are equally varied. In Daoism, alchemists experimented with a range of substances in order to produce life-prolonging elixirs of gold. Within Buddhism, the body of the Buddha is believed to be golden in hue and emit light. Despite its omnipresence within a broad range of artistic and cultural traditions in Asia, however, the study of gold is still in its infancy. Only in the past twenty-five years have scholars of Asian art turned their attention to the serious study of gold artifacts. This panel seeks to bring together art historians and conservators from museums and universities in a conversation about gold as material and metaphor in Asian art. Creating a cross-cultural and comparative platform, we seek papers that simultaneously pay attention to the materiality of gold and place it into dialogue with larger theoretical and conceptual concerns in Asian art and culture.
◊ ◊ ◊ ◊ ◊
Association of Art Museum Curators (AAMC)
Mobilizing the Collection
Chair: Kristen Collins (The J. Paul Getty Museum), [email protected]
With the decentering of the discipline of art history, museums in this century are working as never before to transcend the paradigms that shaped their collections. The proposed panel explores how a primarily Western-centric collection can engage contemporary audiences in a multicultural society. The proposed panel discussion and conversation will include four ten-minute presentations by curators and directors who will outline projects that have attempted to address this issue through loans, exhibitions, and programming. Questions to be addressed include: How are we to mobilize our collections, using our works of art as a starting point for conversations that promote inclusiveness and connection to our audiences? What are the potential challenges that face museum professionals who move outside their areas of specialty in order to speak with, rather than at, intended audiences? Issues to be dealt with include how museums can work across boundaries established by institutions, established canons, and audiences. We will problematize periodization and traditional ideas regarding East-West exchange. We will also address the inherent challenges of decentering the history of art from collections that essentially work to affirm the Western European canon. Alternately, we welcome panelists who can speak from the perspective of specialist museums that seek to appropriate and transform the canon. The panel will also explore the negative tropes associated with race, gender, and class that are reflected in our collections and will discuss how museums can tell the truth about these difficult and ugly aspects of our shared history.
◊ ◊ ◊ ◊ ◊
Objects of Change? Art, Liberalism, and Reform across the Eighteenth and Nineteenth Centuries
Chairs: Caitlin Beach (Columbia University), [email protected]; and Emily Casey (St. Mary’s College of Maryland), [email protected]
This panel seeks to consider the dynamics of producing, mobilizing, and consuming images in the pursuit of social justice and reform. The eighteenth and nineteenth centuries saw a proliferation of such campaigns, with movements to abolish slavery, extend suffrage rights, and transform labor laws numbering amongst the many efforts to effect large-scale societal changes in Europe and the Americas. From Josiah Wedgwood's oft-reproduced antislavery medallion of 1787 to the imagery and highly visible pageantry of women's suffrage movements towards the turn of the twentieth century, visual and material culture has long been seen to play a vital role in shaping and articulating rhetorics of liberal political reform. However, recent scholarship on the entangled—and oftentimes parallel—historical trajectories of liberalism, capitalism, and empire complicates a straightforward understanding of the relationship between images and reform. As Lisa Lowe, Marcus Wood, and others have suggested, ideologies of liberal governance and reform often did as much to scaffold the status quo as to incite radical societal change. How did art objects—broadly defined—manifest, transform, obscure, or interrupt relationships between liberal reform campaigns and the forms of power they supported? How did markets for fine and decorative arts participate in or overlap with capitalist networks? How might our understanding of objects of reform shift if we see them operating with—rather than in opposition to—the imperial nation-state? Finally, what are the stakes of mobilizing such historical objects today, particularly in museums, scholarship, pedagogy, and contemporary activism?
◊ ◊ ◊ ◊ ◊
Association for Latin American Art (ALAA)
Open Session for Emerging Scholars of Latin American Art
Chairs: Lisa Trever (University of California, Berkeley), [email protected]; and Elena FitzPatrick Sifford (Louisiana State University), [email protected]
Each year increasing numbers of scholars are awarded doctoral degrees in Latin American art history. This session seeks to highlight the scholarship of advanced graduate and recent PhD scholars. Papers may address any geographic region, theme, or temporal period related to the study of Latin American art or art history, including Caribbean and Latinx topics. Please note, Association for Latin American Art (ALAA) membership is not required at the time of paper proposal, but all speakers will be required to be active members of CAA and ALAA at the time of the annual meeting. ALAA membership details are available through the session chairs.
◊ ◊ ◊ ◊ ◊
Provenance Research as a Method of Connoisseurship?
Chairs: Valentina Locatelli (Kunstmuseum Bern), [email protected]; Christian Huemer (The Getty Research Institute), [email protected]; and Valérie Kobi (Universität Bielefeld), [email protected]
This session will explore the intersections between provenance research and connoisseurship with regard to the early modern period. In order to go beyond today's dominant understanding of provenance research as a practice almost exclusively related to Nazi-looted art and questions of restitution, the panel will deliberately focus on topics from the late fifteenth to the eighteenth centuries. By setting this alternative chronological limit, we will delve into the historical role of provenance research, its tools and significations, and its relation to connoisseurship and collecting practices. What influence did the biography of an artwork exert on the opinion of some of the greatest connoisseurs of the past? How did the documented (or suspected) provenance of a work of art impact its attribution and authentication process? Which strategies were employed in the mentioning of provenance information in sale catalogues or, sometimes, directly on the artworks themselves? Did the development of art historical knowledge change the practice of provenance research over time? And finally, how can we call attention to these questions in contemporary museum practice and reassess provenance research as a tool of connoisseurship? In addition to addressing the history as well as the strategies of provenance research, this session will be an opportunity to question its relationship to other domains as well as to bring it closer to core problems of art history and museology. We invite contributions that introduce new historical and methodological approaches. Proposals that go beyond the case study are especially encouraged.
◊ ◊ ◊ ◊ ◊
Race, Ethnicity, and Cultural Appropriation in the History of Design
Chairs: Karen Carter (Kendall College of Art and Design of Ferris State University), [email protected]; and Victoria Rose Pass (Maryland Institute College of Art), [email protected]
Design history has often ignored the thorny issues of race and ethnicity, although design is deeply intertwined with global trade, slavery, colonial encounters, and ethnic and racial stereotypes. Examples of cultural appropriations might include blue and white porcelain export ware from China or paisley cashmere shawls from India that were manufactured for Western markets and subsequently copied by European designers in order to capitalize on the taste for global goods. Additional examples are the use of ‘blackamoor’ figures in interior design or American housewares with depictions of Mammies in which blackness is constructed in opposition to whiteness. This panel seeks to critically interrogate the practice of cultural appropriation by exploring the economic and cultural foundations of design in the past and present (in architecture, industrial design, craft, fashion, graphics, furniture, interiors, and systems). Papers should address some of the following questions: How does cultural appropriation move in multiple directions throughout a globalized history of design? How do designers and/or consumers use cultural appropriation to express their own identities? What role does the concept of ‘authenticity’ play in cultural appropriation? Does cultural appropriation, which often relies on racial and ethnic stereotypes and helps to reify them, also have the potential to undermine stereotypes? How do questions of gender, sexuality, and class intersect with those of race and ethnicity within cultural appropriations? Papers that employ methods from postcolonial and critical race studies and/or case studies of ordinary artifacts that have been eliminated from the traditional canon of design history are especially welcome.
◊ ◊ ◊ ◊ ◊
State of the Art (History): Re-Examining the Exam
Chairs: Karen D. Shelby (Baruch College, The City University of New York), [email protected]; and Virginia B. Spivey (Independent Scholar, Art History Teaching Resources), [email protected]
This session invites proposals for seven-minute lightning talks exploring the pedagogy and philosophy of formal assessments in art history. While we are interested in exam-related practices, we welcome submissions that substitute innovative and non-traditional models as a primary mode of formal assessment of specific skills and art historical content. What are critical and compelling components to formal assessment methods? How do you administer exams? How do you support students' exam preparation? What exam formats do you find most effective to measure student learning, to provide formative feedback, or to achieve other goals of assessment? What is the relationship between formal assessment and student grades? What strategies have you employed to ensure transparency in evaluation and grades? What types of assessments are pedagogically sound for art history majors? Non-art history majors? Students taking art history as a general education requirement? The session will be facilitated by ArtHistoryTeachingResources.org (AHTR), founded in 2011 as a collectively authored discussion around new ways of teaching and learning in the art history classroom. Modeled on the AHTR Weekly, a peer-populated blog where art historians from international institutions share assignments, reactions, and teaching tools, this session will offer a dynamic 'curriculum slam' in which speakers, respondents, and attendees will engage in dialogue and reflection on successes/failures regarding issues of undergraduate assessment in art history. The session is dedicated to scholarly discourse that articulates research and practice in art history pedagogy and seeks to raise the profile and value of those who identify as educators.
◊ ◊ ◊ ◊ ◊
American Society for Eighteenth-Century Studies (ASECS)
The 1790s
Chair: Julia Sienkewicz (Duquesne University), [email protected]
An eventful decade in the 'Age of Revolutions,' the 1790s were a time of 'commotion' (so characterized by Benjamin Henry Latrobe) that shifted national boundaries, transformed structures of power, and cast individuals of all ranks from one end of the globe to the other. Many travelers sought to escape misfortune, others voyaged in the service of their political ideals, and still others merely hoped to peacefully continue with routine trade and other activities. As a transitional decade, the culture of the 1790s is rich with both ideas that do not survive the eighteenth century and those that flourish in the nineteenth. In the production and consumption of art and architecture, these years brought pronounced changes. Neoclassicism flourished in a variety of forms and in the service of (sometimes subtly) differing ideologies or ideals. The medium of transparent watercolor rose to new heights, particularly in Britain, where it also began to take on a patriotic valence. In both France and the United States, artists and their publics struggled to give visual form to the idea of the 'Republic,' in light of the long tradition of art in the service of monarchy. This panel seeks to bring together new perspectives on the art and architecture of the 1790s. Scholarship that traces the chaos, innovation, and creative aspirations of this period, in lieu of pursuing long-established artistic canons or national schools, is particularly desirable. Papers may consider artists from, or working in, any geographic location, and in any medium.
◊ ◊ ◊ ◊ ◊
The French Fragment, 1789–1914
Chairs: Emily Eastgate Brink (University of Western Australia), [email protected]; and Marika Knowles (Harvard University), [email protected]
In 1979, Henri Zerner and Charles Rosen launched their influential analysis of Romantic aesthetics with a description of the Romantic fragment as “both metaphor and metonymy.” In France, post-Revolutionary artists gravitated towards visions of ruins, butchered bodies, papery sketches, and other manifestations of human transience. Evolving out of this love of pieces, fragments took on a variety of forms throughout the nineteenth century. Romantic artists responded to the spectacle of ‘bric-a-brac’ salvaged from aristocratic interiors, medieval sculptures loosed from cult settings, and collections of ethnographic curiosities comprised of objects from ‘elsewhere.’ Eventually, as artists turned to the spectacle of modern life, the fragment as an object, figure, or ‘other,’ ceded to forms of fragmentary vision. The late nineteenth-century artistic proclivity for cropped bodies, blurred outlines, and decorative vignettes trafficked in fragments, amplifying what Michael Fried has identified as the modern tension between the morceau and tableau. Nearly forty years after Zerner and Rosen’s publication, this panel seeks to reassess and reinvigorate approaches to the fragment in French art of the long nineteenth century. We welcome multiple approaches to the fragment, including critical definitions of the term. How did the fragment change, or remain the same, over the course of the long nineteenth century? What is the relationship between the fragment and its presumed ‘whole’? How did the fragment represent and articulate relationships within France’s ongoing colonial enterprise? How did new visual technologies, such as lithography, photography, and the cinema, affect the status of the fragment in France?
◊ ◊ ◊ ◊ ◊
Travel, Diplomacy, and Networks of Global Exchange in the Early Modern Period
Chair: Justina Spencer (Carleton University), [email protected]
Early modern artists were known to travel alongside ambassadors on diplomatic missions, to accompany explorers, or to set out as entrepreneurial merchants on solo expeditions. Works of art likewise toured en route with artists, were produced amid voyages, or at times illustrated the arrival of foreigners in new lands. This panel seeks to explore the role visual culture played vis-à-vis travel, trade, diplomacy, and transcultural encounters in the early modern period. In what ways did the movement of artists contribute to the construction of aesthetic hybridism and early cosmopolitanism? If art forms such as Japanese Namban screens and Ottoman costume albums divulge a cultural encounter, do they presuppose a burgeoning 'global public'? Taking into account that global art history is not, to use the words of Thomas DaCosta Kaufmann, "the reverse side of Western art history," but instead contrary to national art and its incumbent limitations, this panel seeks contributions from scholars interested in a horizontal approach to artistic exchange where emphasis is placed on the interconnectedness of visual cultures, styles, and techniques. Contributors to this panel may deal with any aspect of global travel and exchange in the early modern period (1450–1800). Papers might address the visual manifestations of political diplomacy, art as foreign reportage, the adaptation of foreign artistic techniques, or the role of the court as a contact zone for cross-cultural exchange. Topics may include a discussion of an individual work of art or artist, or can consist of more theoretical discussions of travel in the early modern world.
◊ ◊ ◊ ◊ ◊
Society for the Study of Early Modern Women (SSEMW)
Unruly Women in Early Modern Art and Material Culture
Chair: Maria F. Maurer (The University of Tulsa), [email protected]
From Caterina Sforza's defense of Forlì or Sor Juana Inés de la Cruz's questioning of the misogynist literary tradition to images of slovenly Dutch housewives and objects which facilitated active female participation in and enjoyment of sex, early modern art history abounds with images and stories of misbehaving women. Art and material culture produced during the early modern period allow us to consider ways in which women negotiated and even transgressed social strictures. What did it mean for an early modern woman to be unruly? How was gendered transgression pictured and performed through objects and artworks? Conversely, how might art have been used to normalize problematic female figures? Finally, how have modern art historians treated disruptive female agency? This panel aims to study examples of troublesome or disobedient women and their involvement in early modern art. We seek papers that explore artists, patrons, subjects, and beholders who do not fit into expected frameworks or who disrupt traditional narratives about women's roles in early modern art and society. Paper topics might include, but are not limited to: female artists or patrons who contravened established artistic practices; representations of unusual and/or misbehaving women; examples of female beholders who engaged in alternative interpretations of, or interactions with, art; and female artists, patrons, or subjects who have proved unmanageable for later art historians. We welcome papers from any area of the globe concerning the years ca. 1400–1800, and invite scholars of all ranks to apply.
◊ ◊ ◊ ◊ ◊
American Council for Southern Asian Art (ACSAA)
Viral Media and South Asia
Chairs: Holly Shaffer (Brown University), [email protected]; and Debra Diamond (Freer Gallery of Art and Arthur M. Sackler Gallery, Smithsonian Institution), [email protected]
From the sixteenth century, European publications about South Asia ranged from travelers' accounts, military memoirs, and missionary manuals to text and image compilations. The technology of print allowed for compositions to replicate and disperse over hundreds of years, which expanded knowledge—and established stereotypes—about South Asian culture. The role of the visual in establishing, justifying, and corroborating the parameters of European inquiries about South Asian subjects and peoples has urgent contemporary implications as the circulation of true or false images only increases the links between knowledge, politics, and aesthetics. This panel invites papers to address themes related to printed imagery produced about South Asia, or produced by South Asians about other locales, from 1500 to now. The first theme asks how the print medium accelerated the movement of information and stultified it through replication. We are interested in studies about images that 'go viral' or circulate 'fake news.' The second question concerns the use of artworks as a source for printed images about culture. What were the processes of translating artworks into print? How does the artwork as model alter how information was perceived by makers and received by audiences? The third theme is about theories of reproducibility. How might a study of the conveyance of information about South Asia—by witnessing, hearsay, or objects—disrupt and nuance scholarship on the print medium? Papers can focus on artists, publishers, or publications from anywhere; the only requirement is that they be about South Asia or produced by South Asians.
◊ ◊ ◊ ◊ ◊
Working Out of Medium
Chair: David Pullins (The Frick Collection), [email protected]
What happens when an artist steps outside of their preferred medium, or outside the medium that their public has come to expect from them? What leads to such a decision, at what stage in an artist's career might it occur, and with what results? How do such moments fit into an artist's historiography (and the concept of a singular, consistent artistic personality and œuvre), or the collecting and display of their work (even the literal market value of one object over another)? Inspired by early modern European examples (the pastelist Perronneau working in oil, Chardin in pastel, Oudry in watercolor, Prud'hon in ink), this call for papers is open to a wider geographic and chronological range with the aim of starting from a diversity of particulars in order to address larger, more conceptual questions. This said, ideal proposals will be those that look with nuance at the material properties of the objects produced by one or two makers in order to set them into dialogue with the themes of a panel that aims to speak across artistic practice and the construction of artistic identity as it relates to medium.
◊ ◊ ◊ ◊ ◊
Woven Spaces: Building with Textile in Islamic Architecture
Chair: Patricia Blessing (Pomona College), [email protected]
This session invites papers that examine the relationship between textiles and architecture within the Islamic world, prior to ca. 1850. Questions of textile as architecture (such as tents) but also textiles in architecture (such as textile furnishings or the use of textile motifs) are relevant to the panel. A larger discussion will develop surrounding the concept of a textile aesthetic in Islamic architecture, and the panel invites speakers to broadly engage theoretical perspectives in this regard. When considered in this framework, multiple relationships between fabric and monument emerge. Issues of materiality, sensory perception, and intermediality are at stake within the larger question of how fabrics are an integral part of the built environment in the medieval and early modern Islamic world. Textile structures such as tents or canopies were built of fabric; portable architecture that could be folded and stored for transportation, and then reconstructed. Textiles were also central parts of the ways in which spaces were furnished and transformed with changes in wall hangings, curtains, and floor coverings. Textile motifs were frequently integrated into architectural decoration, rendered in a range of materials such as stucco and tile. Overall, the understanding of space is thoroughly transformed once the presence of textiles in these often overlapping modes is acknowledged in considerations of textile spatiality. Contributions will engage with questions related to the multiple uses of textiles as they are integrated into Islamic architecture from late antiquity to the nineteenth century in the various ways outlined.
◊ ◊ ◊ ◊ ◊
Note (added 7 July 2017) — With the original posting, I inadvertently omitted the session on ‘Viral Media and South Asia’, chaired by Holly Shaffer and Debra Diamond. –CH
Note (added 13 July 2017) — The original posting did not include the ASECS-affiliated session on ‘The 1790s’.
New Book | Dans l’œil du connaisseur: Pierre-Jean Mariette
From PUR:
Valérie Kobi, Dans l'œil du connaisseur: Pierre-Jean Mariette (1694–1774) et la construction des savoirs en histoire de l'art (Rennes: Presses Universitaires de Rennes, 2017), 322 pages, ISBN: 978-2-7535-5314-9, 28€.
Starting from the singular case of the collector Pierre-Jean Mariette (1694–1774), this book aims to better define the stages that marked the formation of knowledge in art history in the eighteenth century and, more broadly, to question the role played by the figure of the connoisseur in this dynamic. In short, it seeks to answer the following questions: on what elements does the amateur's reputation rest, and how is the recognition of his authority by his peers organized? What instruments, material or intellectual, does the expert deploy? And, finally, what forms does his knowledge take when it materializes in writing?
To this end, Mariette's identity and activity are examined through six thematic chapters divided into two parts. The first, entitled 'La naissance d'un amateur' (The Birth of an Amateur), analyzes the emergence of the figure of the amateur through three key moments: the construction of an identity, the Italian journey, and the scholar's entry into the Republic of Letters. The second, devoted to 'Savoirs mis en œuvre' (Knowledge Put to Work), examines the modes of scholarly dissemination, from its theoretical models to its visual representations.
More than an exhaustive panorama of a scholar's thought, the study investigates how the book became, over the course of the eighteenth century, a true laboratory of knowledge: a privileged space in which the debate among amateurs that contributed to the development of empirical knowledge in the field of art history unfolded. In this respect, the present study not only casts new light on Pierre-Jean Mariette's contribution to the historical field but also reflects in an original way on the socio-cultural practices and aesthetic stakes that shaped the discipline in the Age of Enlightenment.
Valérie Kobi received her doctorate in art history from the University of Neuchâtel (Switzerland). She has held fellowships at the Swiss Institute in Rome, the Getty Research Institute, and the Ludwig-Maximilians-Universität München. Since May 2015, she has pursued her postdoctoral research between Bielefeld and Weimar within the project Parergonale Rahmungen. Zur Ästhetik wissenschaftlicher Dinge bei Goethe. | https://enfilade18thc.com/2017/07/06/
My final major project 'Textures of Jamaica' is inspired by photographs capturing everyday life in Jamaica. This includes looking at different elements such as nature, people and architecture. Through my visual development I specifically observed the textures, colours, and shapes found in these photos. These observations impacted the overall feel of my project. This is visible through my use of weave setups such as interchangeable double cloth, block setup, and extra-warps, enabling me to create shapes and to reveal colour. In addition to this, the use of yarns such as silk, viscose, polyester and cotton within my final sample collection allowed me to create textural light- to mid-weight fabrics intended for Pre-Fall Menswear.
The inspiration behind Textures of Jamaica conveyed through visual research, weave techniques, colour and yarn choice aims to pay homage to Jamaica, recognising and celebrating its natural beauty.
Design Interests:
Textiles and Fashion for Menswear
Culturally informed Textiles
Colour and Material Experimentation
Industry Experience: | https://artandmediagraduateshow.brighton.ac.uk/carter-leterece-textiles-design-with-business-studies-bahons/ |
Carl Peters (1856-1918) is generally known as a young doctor of philosophy who traveled as a simple civilian to Zanzibar, where he acquired the territories that later formed the colony of German East Africa. Less known is the fact that the public figure of Peters is made up of three quite distinct characters: firstly, Peters was an actual historical personality, engaged, as he was, in the colonial vicissitudes of his day. Secondly, Peters was an author, who tried to defend the enterprises of his first avatar. And thirdly, Peters became the literary character, or rather the various literary characters, that he himself and a plethora of other German authors depicted - from the time of Wilhelm II, through the Weimar Republic, all the way to the end of the Hitler era - in their attempts to use the 'conquistador of Hanover' for the propagation of their various political interests. The present study analyses the transformations that Carl Peters's image underwent in German public opinion, literature and political propaganda from 1884 (the beginning of the Berlin West Africa Conference) up to the year 1945, which marks both the end of Nazism and the end of the dream of a German colonial empire. | https://www.dart-europe.org/full.php?id=1568451
Introduce yourself in a few words : My name is Lisa Mee, an Irish-Korean-American woman artist who lives and works in New York City. I create mixed media paintings of the natural world, figures and abstraction. Captivated by rhythms and colors from a saturated sunset or the reflections of light from aesthetic waterscapes, I create a stained glass effect by using metallic gilding paints. Adding recycled material that most people find useless, I create beauty. My artwork illustrates my commitment to express environmental concerns about our fragile ecological state.
Tell us about how you started : I have always painted since I was a child. My parents were art collectors who took me to art museums and art fairs in New York City. I attended Fordham University and met my husband, artist Wayne Ensrud (a protégé of the Austrian artist Oskar Kokoschka). After working with Wayne for over 20 years as his Studio Director, I immersed myself in my own work, devoting myself full-time to my art.
Why did you start in this field ? It is my passion and bliss. It’s a way for me to tap into an inner creative energy and ‘voice.’ I integrate recycled materials and paint, creating scenes of peace and joy which conveys a harmonious energy.
What interests you / what are you working on in your art ? Landscapes and seascapes are a passion where I paint outdoors and then further develop canvases in my studio. Observing the natural world and conveying that emotional response as a luminescent painting is my goal. Colors convey energy and now more than ever people need positive radiant art.
« Following your inner intuition of how to make the world a better and more beautiful place. Whether it is through music, art, writing, photography. »
What are the highlights of your career ? A V.I.P. event featuring my work alongside esteemed international artists at the renowned Breakers Resort in West Palm Beach, Florida (March 2019). My solo exhibition at the International American Art Museum of San Francisco in March 2020. Presenting my art on live television through America's Value Channel Fine Art Auction (January 2021).
Any plans, desires for the future ? Develop a series of large scale abstract paintings consisting of floral elements and precious stones.
What is the worst experience of your career ? Feeling belittled by male art gallery owners.
What are the positive or negative aspects of your job ? The thrill of creating painting and feeling that energy while losing myself in a painting is the height of my joy.
Can you make a living from your art? Absolutely! I had the best year of sales as galleries moved online due to Covid.
What are you passionate about / what do you stand for in life ? Following your inner intuition of how to make the world a better and more beautiful place. Whether it is through music, art, writing, photography. We each have a responsibility to look after each other and the planet. Devoting oneself to making beautiful arts or crafts is needed by society to sustain us through crises and keep our optimism alive.
What are your other interests, hobbies ? Writing, flamenco dance, running and yoga.
What would be your motto ? There is just this moment and nothing else so do not be weighted by the past or concerned about the future to determine your happiness. | https://welovart.com/en/lisa-mee-irish-korean-american-painter-artist/ |
Thermometer movements are an interesting hybrid of the electric motors that turn clock hands according to time and weather-based values that are input from sensors. Hence, a thermometer movement turns a single hand to a point within a limited range that corresponds to a number (a temperature) reported by the sensor. There are several varieties of this arrangement, and we explore them in this article.
Using clock movements for thermometer purposes is an idea that might make one scratch her head. After all, for centuries people have been used to reading the position of a rotating hand to tell time. But with a little probing beneath the surface impression, one sees the logic of this approach.
A thermometer is basically a device that measures the level of heat, or temperature, of something. That something may be a pot of water on the stove, the body of an ill person, a remote star, or the ambient environment. The nature of the thing measured determines the best thermal technology to use.
Early thermometers used a liquid (or gas) contained in a sealed glass tube to indicate the level of ambient heat. For this to work, the substance must have a high coefficient of expansion and remain in its state over a wide range. Examples include mercury, ethanol, and brandy or similar alcohol-water mixtures.
Blackbody radiation is the technology to use if the object is remote, such as a star. The principle here is that the spectrum of the radiation an object emits is determined by its temperature.
More recent technologies make use of metals. A thermistor exploits the fact that a material's electrical resistance changes markedly and predictably when it is heated. This is a popular way to build thermometers today.
Bi-metallic technology is a mechanical approach based on the different coefficients of expansion of two different metals. A coil or spring is formed from a bonded strip of the two, and its degree of curl (or tension) varies with the ambient temperature.
It is this last technology that makes the most sense to combine with a motor for rotating hands. First, no battery is required for sensing because the bi-metallic system is strictly mechanical. Second, it is well understood, precise, and reliable, not easily affected by outside influences.
These features allow one to set the device up in a particular spot and then forget it. It requires neither calibration nor periodic adjustment.
Therefore, a clock components supplier does not have to go through an extensive retooling to produce a thermometer movement. He can possibly use or retrofit an existing hand layout and the externals of an existing activity. However, the dial has to be created from scratch.
The internals of the movement also have to change. Instead of a quartz crystal generating pulses that are accumulated in a digital counter, a bi-metallic system has to be coupled to the hand rotor. This requires appropriate scaling so that the range of temperatures sensed maps onto a bit less than a full circumference of the dial.
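The scaling described here is just a linear map from the sensed temperature range onto the dial's sweep. Below is a minimal sketch in Python; the temperature range, the 330-degree sweep, and the clamping behavior are illustrative assumptions, not specifications of any particular movement:

```python
def hand_angle(temp_f, t_min=-40.0, t_max=140.0, sweep_deg=330.0):
    """Map a temperature reading onto a dial angle in degrees.

    The sensed range (t_min..t_max) is scaled onto a sweep that is a
    bit less than a full circle. Readings are clamped so the hand can
    never rotate past either end of the printed scale.
    """
    temp_f = max(t_min, min(t_max, temp_f))        # clamp to the scale
    fraction = (temp_f - t_min) / (t_max - t_min)  # 0.0 .. 1.0
    return fraction * sweep_deg

# A 72 F reading on a -40..140 F dial:
print(hand_angle(72.0))  # ~205.3 degrees from the scale's low end
```

Note that the mapping involves only angle, never dial diameter, which is why the same calibration can serve dials of different sizes.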
The nice thing is that a given movement can fit more than one dial size. The range mapping is independent of dial diameter, so all one has to worry about is whether a longer, heavier hand requires more torque than the motor can deliver. | https://clockaccessories.ucraft.net/
The neck is made up of a number of muscles that allow it to move in flexion, extension, lateral flexion, and right and left rotation. That is why it is important to keep the neck muscles relaxed and supple.
Neck pain is a frequent reason for medical consultation, and it can appear at any age. There are several causes of neck pain: stress, poor posture, muscle strains, or trauma.
The lateral neck muscles are:
- Rectus capitis anterior
- Rectus capitis lateralis
- Longus capitis
- Longus colli
- The trapezius
These muscles are really important because we use them all the time to turn the head from side to side, tilt it, and control the basic movements of the head.
Neck Stretches
Circle Neck Stretch
- Execute small slow circles clockwise for 10 seconds.
- Repeat slow circles counterclockwise for 10 seconds.
Front Neck Stretch
- Roll up a towel.
- Place it at the base of your head.
- Let your head “drop” freely to the ground and relax.
- Stay in this position for about 10 minutes, unless you feel any kind of pain.
Side Neck Stretch
- Place your right hand on top of your head and gently pull to your right.
- Keep your back straight and your shoulders relaxed.
- Hold that position for 30-40 seconds, and then slowly bring your head back to its starting position.
- Repeat on the other side.
Back Neck Stretch
- Clasp your hands behind your head.
- Gently lower your head, bringing your chin closer to your chest.
- Hold that position for 30 to 40 seconds, and then slowly return your head to the starting position and then release your hands.
Upper Torso Twist and Neck Stretch
- Begin by getting on all fours, supporting yourself on your hands and knees.
- Then slide your left arm with the palm of your hand up, between your right arm and your leg, rotating your body until your head touches the ground.
- Hold this position for 30-40 seconds and then repeat on the other side.
Shoulder Rotation
- Begin sitting or standing, keeping your back and neck straight.
- Lift your shoulders and then roll them back and down.
- All movements should be smooth. Keep your chin tucked in, as if making a double chin.
Benefits of stretching exercises for the neck area
- Decreases the likelihood of neck and cervical injuries.
- It allows the muscles in these areas to be more flexible.
- Increase the range of motion in the neck.
- It helps to reduce the pain that exists in these areas.
- Helps correct posture.
- Relax the muscles that are tense.
- Provides a better mood.
- Improves elasticity.
- Prepare your muscles before exercising.
Recommendations
- Neck Stretches should be done smoothly and progressively.
- No bouncing should be done during the stretch.
- Maintain the stretches for the indicated time to be effective.
- Create your own routine to fulfill your needs.
Recommended Stretches for you:
Follow our Social Media! | https://sportsandmartialarts.com/neck-stretches-step-by-step/ |
People love stories. They engage audiences and drive them to take a desired action.
This workshop demonstrates the profound impact of storytelling on others as well as the uses and benefits of storytelling in a corporate environment.
You will discover how to plan and structure a business story and practice delivering it to a group. You will also explore ways in which storytelling can help to promote your brand, products or services.
This workshop is for you if…
You would like to communicate your messages more effectively and inspire and influence others.
Outcomes
After taking this workshop you should be able to:
- plan, structure and deliver an effective business story
- match a suitable storytelling technique to your purpose
- better engage your audience and inspire them to act
Workshop Outline
Everyone Loves a Good Story
- Your storytelling situations at work
- What’s your story? | https://www.britishcouncil.sg/courses-business/workshops/interpersonal-communication/strategic-business-storytelling |
Emergency department patients had lengthy waits to be admitted to acute care beds in January at Red Deer Regional Hospital Centre — including three patients who each waited more than 100 hours.
The emergency department has about 50 stretchers and lounge chairs for patients in serious condition waiting for beds.
Sylvia Barron, director of emergency and critical care at the hospital, said a review of the three protracted stays in emergency was underway.
“They stayed in emergency department for a good four days,” Barron said on Tuesday.
They were waiting for specialized testing and assessment, which took longer, so they weren't typical emergency patients.
Barron said excluding those patients, the wait-time for a bed was still higher than usual and an investigation into January wait-times has begun.
Overall, emergency patients waited a median of 18 hours before getting an acute care bed, meaning half the patients waited longer than 18 hours and half had shorter waits.
The median wait-time was 11.8 hours in December, 14.9 in November, 11.2 in October and 13 in September.
“It may be just a blip in January. We’re hoping February will be a little bit better,” Barron said.
The emergency room overcapacity plan to reduce wait times was triggered 23 times in January compared to seven times in December and 15 in November.
The overcapacity plan involves moving existing patients who can be discharged to dedicated lounge chairs or beds in the hospital, sending them to nearby hospitals or long-term care facilities, or home with home care support to make room for emergency patients.
Alberta Health Services implemented the overcapacity plan in December 2010 to reduce wait-times to under eight hours for emergency patients who need acute care beds, and those who haven’t been admitted to be treated and released within four hours.
The wait for a hospital bed in Red Deer has seen some declines since the fall of 2010.
This January, patients who were treated and sent home spent a median time of 2.9 hours at the emergency department, 2.8 hours in December, 3.0 in November, 3.0 in October and 3.2 in September.
Barron said several factors may have contributed to the longer wait-time for beds. The length of hospital stay was up in January. The number of discharges was down. Urgent operating room cases have been increasing in recent months, which would impact available beds. More people may have needed isolation beds.
She said the hospital will also be looking at staffing levels to ensure there is enough staff for patients as the demand changes throughout the day and night to get the emergency department back on track.
Red Deer North Liberal candidate Michael Dawe said emergency department wait times are a long-time problem the province has yet to solve through proper planning and management.
“This didn’t just happen. But (the province) keeps treating this like it’s a big surprise,” said Dawe, a former chair of Red Deer Regional Hospital Board and former trustee of David Thompson Health Region.
“We’ve had these jumps before. More than a year ago they said they had all these measures to make sure we wouldn’t have them.”
He said issues that contribute to wait-times have been well known like the lack of hospital beds for patients, the lack of continuing care beds to transfer seniors out of hospital beds, and not enough family doctors forcing people to go the emergency room.
“In my basement, I bet you I literally have boxes and boxes of reports with the same suggestions on how they are going to fix it.”
Red Deer North MLA Mary Anne Jablonski said work is underway to address wait-times; for example, Covenant Health is building a 100-bed aging-in-place facility in Red Deer.
“I know how difficult and frustrating that can be when you’re in the ER. We are doing something about it,” Jablonski said.
She said the Centre of Disease Control says 40 per cent of all visits to the ER could be managed in other care clinics or urgent care centres and the province is working to develop family care centres that will be open longer hours with nurse practitioners and doctors.
More could also be done to advertise the toll-free advice line Health Link to guide people to the proper health care facilities, she said.
On Tuesday, Minister of Health and Wellness Fred Horne directed Alberta Health Services to take steps immediately to reduce occupancy in acute care in the major hospitals in Edmonton and Calgary by Oct. 31. | https://www.reddeeradvocate.com/local-news/emergency-wait-times-took-big-jump-in-january/ |
The world of wound care is fascinating and complex, and so is the never-ending mission to find better ways to treat the millions of patients affected by chronic and acute wounds.
Traditionally, innovation in wound care has lagged behind other areas of medicine. However, that’s all changed in recent years due to the advent of new technologies and the novel application of existing ones. We’ve entered a new era where the way wound care is approached and delivered is being completely transformed. And one where the future is so bright . . . as the song says, “we’ll have to wear shades.”
On the Horizon
Numerous products in development reflect this new wave in wound care. For example, researchers in Paris are employing a simple, flexible pressure sensor for chronic wound monitoring. The sensor is embedded into a commercial dressing and monitors how firmly a dressing is glued on the skin surface, alerting caregivers to potential ischemia.
Breakthroughs are also being made in regenerative medicine. Placental and other tissues are being applied to non-healing wounds to promote cell proliferation and create allografts that provide significant healing benefits for chronic wounds, foot ulcers and burns. Though placental tissue has been part of wound care since the early twentieth century, researchers are exploring exciting new ways to utilize this ideal regenerative wound-healing therapy.1
The Power of Prediction
Equally exciting are advances being made in the application of familiar technologies. Electronic health records (EHR) have become the backbone for using machine learning and artificial intelligence (AI) to build new analytical tools that are revolutionizing wound care.
At Net Health, we see the future of innovation in the expansion and widespread adoption of these technologies. Predictive analytics, made possible by AI, is truly a game-changer for both researchers and clinicians and a primary focus for our business. For example, the Net Health Wound Care software platform now includes the Risk of Amputation Indicator, developed to reduce the risk of amputations, and the Wound Healing Velocity Indicator, which predicts wound healing rates. These tools provide insights needed to develop optimal patient therapies, implement effective interventions, and plan treatment paths to improve outcomes.
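Net Health's indicators are proprietary, so purely as an illustration of the general shape of such a predictive tool (and emphatically not the vendor's actual method), a risk score of this kind can be prototyped as a classifier over EHR-derived features. Every feature name and value below is hypothetical:

```python
# Illustrative only: a generic wound-risk classifier, not Net Health's
# proprietary Risk of Amputation Indicator.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical EHR-derived features per wound:
# [age, wound area cm^2, weeks without area reduction, diabetes flag]
X = np.array([
    [54, 2.1, 1, 0],
    [71, 6.8, 5, 1],
    [63, 4.0, 3, 1],
    [48, 1.2, 0, 0],
])
y = np.array([0, 1, 1, 0])  # 1 = adverse outcome in historical data

model = LogisticRegression().fit(X, y)

# Probability-style risk score for a new patient's wound.
new_patient = np.array([[68, 5.5, 4, 1]])
print(model.predict_proba(new_patient)[0, 1])
```

In practice such a model would be trained on thousands of documented wounds and validated clinically; the point here is only the pipeline's shape: structured EHR features in, a risk score out that clinicians can act on.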
Picture This
A picture may be worth a thousand words, but digital images as part of the EHR are invaluable for wound care providers. Flat Polaroids with a handwritten description and measurements taken with a ruler have been replaced with advanced digital imaging tools such as Net Health’s Tissue Analytics. As a result, wounds can be photographed with a smartphone or tablet, are automatically measured and classified, and become accessible to anyone on a patient’s treatment team. The precision of these sophisticated programs dramatically increases the accuracy of recorded data, thus reducing error rates, cutting down costs, and leading to more successful healing. Expect more advances in this area as the technology continues to be refined.
Wound Care with Confidence
The recent and nascent innovations in wound care are astounding. With new technologies, the dedicated professionals who treat the many types of non-healing wounds can do so with even greater confidence, support, and success.
For more insights into the latest wound care technology
eBook – Predictive Analytics: The Future of Wound Care
Clinical Analytics for Wound Care
Blog – Taking Aim at Amputation
And visit Net Health Wound Care-Tissue Analytics
References
“Benefits and Limitations of Placental Tissue for Wound Healing.” Web blog post. The Wound Pros. February 22, 2022. | https://www.nethealth.com/future-of-wound-care-so-bright-well-have-to-wear-shades/ |
Today - Sunny with a high of 63 °F (17.2 °C). Winds variable at 5 to 7 mph (8.0 to 11.3 kph).
Night - Clear, with winds variable at 6 to 9 mph (9.7 to 14.5 kph). The overnight low will be 52 °F (11.1 °C). | https://www.yahoo.com/news/weather/lebanon/south-lebanon/%D8%A8%D8%B1%D8%AC-%D8%A7%D9%84%D8%B4%D9%85%D8%A7%D9%84%D9%8A-1-56440835
The evaluation of body composition: a useful tool for clinical practice.
Undernutrition is insufficiently detected in in- and outpatients, and this is likely to worsen during the next decades. The increased prevalence of obesity together with chronic illnesses associated with fat-free mass (FFM) loss will result in an increased prevalence of sarcopenic obesity. In patients with sarcopenic obesity, weight loss and the body mass index lack accuracy to detect FFM loss. FFM loss is related to increasing mortality, worse clinical outcomes, and impaired quality of life. In sarcopenic obesity and chronic diseases, body composition measurement with dual-energy X-ray absorptiometry, bioelectrical impedance analysis, or computerized tomography quantifies the loss of FFM. It allows tailored nutritional support and disease-specific therapy and reduces the risk of drug toxicity. Body composition evaluation should be integrated into routine clinical practice for the initial assessment and sequential follow-up of nutritional status. It could allow objective, systematic, and early screening of undernutrition and promote the rational and early initiation of optimal nutritional support, thereby contributing to reducing malnutrition-induced morbidity, mortality, worsening of the quality of life, and global health care costs.
SAN DIEGO - Researchers at the UC San Diego School of Medicine today announced they have determined that a key transcription enzyme is stalled by DNA lesions caused by exposure to ultraviolet light.
RNA polymerases are enzymes that transcribe DNA into RNA; by stalling when they encounter damaged sections of DNA, they help the cell detect and repair that damage, maintaining genetic integrity. The researchers found that a specific polymerase, called Pol I, stalled when it encountered a lesion caused by UV light damage in a DNA strand.
Pol I is responsible for up to 60 percent of transcription activity in growing cells, and for identifying lesions and activating repairs at the site of the lesion, according to the study. If left unchecked, DNA lesions caused by UV light exposure can result in cancerous growths such as melanoma.
"[Pol I is] the most active RNA polymerase in growing cells and so its ability to identify lesions has significant influence on whether a cell can survive UV-caused genetic damage," said Dr. Dong Wang, the study's co-corresponding author and an associate professor at UCSD. "However, little is known about how this enzyme actually processes UV-induced lesions."
The upshot of the study is that the findings could lead to the development of novel anti-cancer drugs that harness Pol I's transcription ability, according to Wang.
The study, published this week in the journal PNAS, was a collaborative effort with researchers in Spain and Finland. Funding came from the National Institutes of Health, the Spanish Ministry of Science and the Ramon Areces Foundation. | https://www.villagenews.com/story/2018/08/16/regional/ucsd-researchers-record-activity-of-dna-repairing-enzyme-in-new-study/53714.html?m=true |
Culture has a profound impact on the creation and reception of artwork. Regardless of the medium or subject, art reflects its cultural context, providing an important backdrop against which to measure the artist's work. All artworks are products of their culture, reflecting prevailing beliefs and assumptions. Hence, arts culture is a crucial component of a just and equitable society. Here are some ways that art influences society. Let's explore some of them.
Art is a physical manifestation of a culture
In its simplest form, art is the creation of an artistic product, usually in the form of a painting, sculpture, or other physical manifestation. Its creation and expression express fundamental human needs and urges, such as a sense of harmony, balance, and rhythm. Furthermore, art can also provide a unique means for expressing the individual’s imagination. Because art is not bound by the formalities of language, it can produce different meanings and forms.
While defining the concept of art is relatively easy, determining the nature of an individual work of art is not. In some cases, art can be a form of communication that evokes a sense of wonder or cynicism, or it can evoke an emotional response. It can also represent a cultural context, or be purely trivial. Whatever its meaning, art is a way to grasp a culture and its environment, human or otherwise.
Art is a strategy to achieve
An effective arts strategy balances internal resources, industry norms, and community perceptions. To make the most of their potential to engage the community, institutions must involve it from the beginning. Inclusion leads to new relationships with subscribers, donors, educators, and other kinds of organizations. The next step is strategic planning, which consists of building a public dialogue with all stakeholder groups to determine the needs and desires of the community.
A key element of an arts-enhanced curriculum is the inclusion of art as a device to support other curriculum areas. The arts aren’t explicitly outlined in the curriculum, but rather serve as “hooks” to get students to engage in the learning of content. These curriculums often lack the training and expertise necessary to ensure that they meet the highest standards. However, these approaches are often mistaken for arts integration.
Art is a core component of an equitable society
Art can inspire us to act, and art has the unique ability to transport cultures from one place to another. It can educate us and inspire others to accept our culture, which has been helpful in fighting intolerance, racism, and other forms of unjust societal segregation. Art has long been used to promote human rights, and its images can stir the heartstrings of the affluent as well as the poor.
Art can educate people about anything, and has the power to break down barriers of race, class, and economic status. It can promote cultural appreciation, especially among our technology-obsessed generation. It can also help preserve cultural traditions. All these factors contribute to making art an important component of a truly equitable society. Therefore, the importance of art cannot be overstated. Art and culture are a vital part of our society.
Art can change mindsets
Abigail Tucker, the author of "Art Can Change Mindsets," has conducted extensive research on how art can change people's mindsets. People respond to works of art differently depending on what they see and experience. In one study, art influenced a volunteer's reaction to Michelangelo's Expulsion from Paradise: the researchers found that viewing the work engaged motor areas of the volunteers' brains.
The Situationists, a group of artists active in the 1960s, pioneered the idea of suspending established culture and order through art. Their works jolted viewers and demonstrated art's power to unsettle established norms. Extinction Rebellion adapted this idea, using art to transform central London, to great effect. In addition to the Situationists' work, other influential groups have used art to affect public opinion and behavior. | https://www.junewayne.com/2022/08/13/how-the-arts-culture-influences-society/
Machine learning algorithms may change computing, but they're a bit of a black box. Still, there are ways to tame them with flexible data governance, according to tech startup exec Andrew Burt.
As a lawyer on the staff of the FBI Cyber Division, Andrew Burt spent a good deal of time looking at the intersection of national security and technology. That meant looking at policy in an organization charged to look at massive amounts of sensitive data. Now, as chief privacy officer and legal engineer at startup Immuta Inc., he is one among a new cadre working to bring more data governance to machine learning, the artificial intelligence-style technology that is moving from laboratories into mainstream computing.
Machine learning algorithms are something of a black box for governance, as the technology does not necessarily disclose how it reached its decisions. To cast some light on this black box and what it means to data governance, we recently connected with Burt to discuss sensitive data processing at scale.
How will data governance change when decisions made based on machine learning algorithms are more widely employed?
Andrew Burt: It's the multibillion dollar question. What is challenging is that machine learning for the first time at scale is starting to occupy a significant place in the decision-making process. Organizations are using technology to make decisions in ways that at least have the potential to remove the human entirely from that decision-making process. That has excited some people and scared some, too.
It is different because the old types of governance were about process -- about who saw what data when. That's been the bread and butter of data governance. That model assumes there still is an audit trail and you can ask someone what happened.
As machine learning comes to occupy part of this decision-making sphere, we're losing that ability. Governance, now, is actually beginning to impact what types of decisions can be made, and what types of rights the subjects of the decisions have.
We are starting to hear people wonder if machine learning algorithms are too much of a black box. How do we begin to govern artificial intelligence?
Burt: Actually, there is a host of ways we can govern and actively control and monitor the process of creating machine learning models. It's not a binary choice between letting machine learning models run amok or strengthening governance so much that there is no machine learning.
There is no better way of having visibility into black box models than having a very good understanding of what type of data is actually going into them.
There are really three buckets here. You have the data, the model and the decisions. There are ways to govern using each of those buckets. Each has a role to give visibility into the way that machine learning models are actually being deployed.
The most important bucket is understanding the data that is used to train the model. If you don't understand the data to start with, there can be huge risks embedded within the models. There is no better way of having visibility into black box models than having a very good understanding of what type of data is actually going into them. That includes everything from the time the data is collected, -- gauging for the possibility of biases in the data itself, observing the activity when it is [extracted, transformed and loaded] -- to the time it's used in a model.
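As a concrete sketch of the kind of pre-model data profiling Burt describes, the checks below look for missingness, outcome gaps, and under-representation in training data before any model is fit. The tiny inline dataset and column names are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "demographic_group": ["a", "a", "b", "b", "b", "a"],
    "income": [52_000, None, 48_000, None, None, 61_000],
    "outcome": [1, 0, 0, 0, 1, 1],
})

# Missingness per column: heavily missing fields can quietly bias
# whatever model is trained downstream.
print(df.isna().mean())

# Outcome rates by a sensitive attribute: large gaps here deserve
# investigation before training, not after deployment.
print(df.groupby("demographic_group")["outcome"].mean())

# Representation: rare groups tend to be poorly served by the model.
print(df["demographic_group"].value_counts(normalize=True))
```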
And what about with the machine learning model itself?
Burt: There you find a spectrum where there is a tradeoff between the traceability and the actual accuracy of the model. There are some circumstances where governance concerns are going to have to hold some weight on the scale. There may be circumstances where there is a level of interpretability we just can't sacrifice.
Historically, in fields like finance, interpretability has really been prioritized. In fact, data scientists in that field have leaned very heavily on models like linear regression where you have the ability to play it back. So, the second bucket is about the model choice itself.
But there are going to be circumstances when the models we use literally are black boxes. So, finally, the third bucket is the actual output of the decision. There are some technical ways, in fact, to reduce the level of opacity in these models. One is LIME, which stands for Local Interpretable Model-agnostic Explanations. What that is able to do, basically, after each decision, is to model the reason why that decision was made. It isolates the exact features that are driving the decision being made. So even in the face of black box algorithms, there is a level of 'post-hoc,' or backward-looking, review available for some of these models.
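For readers who want to see the mechanics, here is a minimal sketch using the open-source `lime` package (`pip install lime`); the iris data and random forest are stand-ins for any tabular black-box classifier, not anything specific to Immuta:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single decision: LIME fits a simple surrogate model in the
# neighborhood of this one row and reports which features drove the
# black-box output there.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=3
)
print(explanation.as_list())  # [(feature condition, local weight), ...]
```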
It seems what Immuta is pursuing could be a platform for differential privacy within an organization. Does that reflect the fact that 'one size does not fit all' for data these days?
Burt: Differential privacy up until now has lived within academic research and within the tech giants. What we have done is try to make it easy to implement and easy to use. What that means is that data can be shared while also having mathematical protection for the personally identifiable information within the data. That concept is what we call personalizing data.
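The mathematical heart of differential privacy can be sketched with the classic Laplace mechanism: noise scaled to a query's sensitivity divided by a privacy budget epsilon. This shows only the underlying idea, not Immuta's implementation:

```python
import numpy as np

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when any one person is added
    or removed, so its sensitivity is 1. Smaller epsilon means more
    noise and therefore stronger privacy.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# The same query released under two privacy budgets.
print(dp_count(1042, epsilon=1.0))  # modest noise
print(dp_count(1042, epsilon=0.1))  # much noisier, more private
```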
Organizations are finding that what they need in order to speed their data science programs is the ability to have each user seeing only the data they are allowed to see in the right form for each corresponding purpose. So, within any organization, permissions and rights, and the ability to use data for different purposes, that is going to vary across the spectrum of users.
Data access patterns are going to change depending on a variety of contexts. That relates to both the underlying storage technology and governance concerns. Different data will have different restrictions attached to it, and that will change.
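A toy illustration of that idea, serving different views of one table depending on the purpose of the request, follows; the policies, purposes, and column names are all hypothetical:

```python
import pandas as pd

POLICIES = {
    # purpose -> columns that must be masked for that purpose
    "marketing_analytics": ["name", "ssn"],
    "fraud_investigation": [],  # full visibility for this purpose
}

def view_for(df: pd.DataFrame, purpose: str) -> pd.DataFrame:
    """Return a copy of df with columns masked per the stated purpose.

    Unknown purposes fail closed: every column is masked.
    """
    masked = df.copy()
    for col in POLICIES.get(purpose, list(df.columns)):
        if col in masked:
            masked[col] = "***"
    return masked

customers = pd.DataFrame({
    "name": ["A. Chen", "B. Osei"],
    "ssn": ["123-45-6789", "987-65-4321"],
    "region": ["NE", "SW"],
})
print(view_for(customers, "marketing_analytics"))  # PII masked
print(view_for(customers, "fraud_investigation"))  # PII visible
```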
What issues do you expect when implementing machine learning in your organization?
How do we ensure governance around machine learning algorithms? Do we need a review mechanism in place to ensure machine learning models develop the patterns we intend?
What other areas do we need to take care of, from a risk and control point of view, when using machine learning tools? | https://searchdatamanagement.techtarget.com/feature/Machine-learning-algorithms-meet-data-governance
Organizational change management is one of the most misunderstood – and undervalued – aspects of complex ERP implementations and digital transformations. It seems that executives and project teams are constantly trying to figure out how to define the best change management process for their organizations.
Each organizational change management engagement that we help our clients with looks a bit different. Each one needs to be tailored for their specific culture, internal resources, and other unique variables. Prosci certification and other toolsets are good starting points to be personalized to your cultural and political dynamics.
With this in mind, there are a number of common ways to define the best change management process for your digital transformation. Here are five steps to get started:
The first step is to conduct an organizational readiness assessment to determine where the change management pitfalls are likely to be. We typically conduct this via a series of anonymous online surveys and focus groups with key employees and stakeholders. The key is not simply to ask if employees are ready for change – because most will say they are – but instead to look for underlying sources of resistance.
Quantitative and qualitative results from the surveys and focus groups should then be analyzed for root causes of resistance. For example, you will want to look for perceived lack of communication, poor cross-business coordination, fears surrounding other changes within the organization, and other root causes that will eventually manifest as resistance to your digital transformation project. Keep in mind that most resistance is unintentional and below the surface, so it takes experience to identify these sources.
Your digital transformation is likely going to impact people's jobs in ways that you may not foresee at the moment. This includes new roles and responsibilities, new processes, and other changes that transcend simple changes to their day-to-day ERP system. These change impacts should be identified during the blueprint or design stage of your project. It is important to recognize how different workgroups will be affected by new processes and technologies.
Once you have defined change impacts, you will want to define how future-state roles and responsibilities will look. This organizational design work should include any consolidation of roles and migration to shared service models. Unfortunately, this is one of the most overlooked components of organizational change management strategies. It is also one of the change management activities with the potential to deliver the most value to your organization.
Once the above items have been completed, it is important to define a change management process and communications plan tailored to the unique nuances of your organization. Some of the questions that should be addressed in your plan include:
These and other questions should be answered as part of your change management process.
Change management is a critical success factor for any digital transformation. It is also largely misunderstood and undervalued. The above provides a simple framework for getting started on the change management process best suited to your organization. These are areas that are typically overlooked by Deloitte, Accenture, Capgemini, and other large system integrators.
A low dose of the alpha2 agonist clonidine ameliorates the visual attention and spatial working memory deficits produced by phencyclidine administration to rats.
Psychotomimetic N-methyl-D-aspartate/glutamate receptor antagonists, such as phencyclidine (PCP), have been shown to produce a spectrum of behavioral, neurochemical and anatomical changes in rats that are relevant to aspects of schizophrenia, including impairments of working memory and visual attention. The alpha(2) noradrenergic receptor agonist clonidine prevents some of the behavioral effects of NMDA antagonists, suggesting that monoaminergic systems mediate some aspects of these deficits. We sought to determine the ability of clonidine to modify the PCP-induced deficits of visual attention and spatial working memory in rats. In a lateralized reaction time task, a lower dose of clonidine (10 microg/kg) ameliorated the impairment of choice accuracy produced by PCP (2.5 mg/kg, IP), while the higher dose of clonidine (50 microg/kg) slowed response times and induced a deficit of choice accuracy on its own. The high dose of clonidine effectively prevented the motor impulsivity produced by PCP. In addition, clonidine (10 microg/kg) prevented PCP-induced performance deficits in a delayed non-match to sample task. These data indicate that clonidine may attenuate deficits of attention and working memory produced by PCP, perhaps in part by preventing some of the downstream neurochemical and anatomical effects of this psychotomimetic drug.
The greenback outperformed its peers on Tuesday, thanks to positive ISM Services PMI data. It sustained its momentum early Wednesday, with the U.S. dollar Index reaching a new multi-decade high. Investors remain cautious in the middle of the week as attention switches to central bank events. The ISM Services PMI increased to 56.9 in August from 56.7 in July, exceeding market expectations of 55.1 and indicating the greatest expansion in services activity since April. As a result, markets are now pricing in a 74% chance of a 75 basis point Fed rate hike in September, up from 57% early Tuesday. Traders look forward to the Bank of England's (BoE) Monetary Policy Hearings and the Bank of Canada's (BoC) interest rate decision later in the day.
EUR
The Euro is holding onto slight gains this morning after succumbing to further bearish pressure yesterday and reaching its lowest level in 20 years. Traders are anticipated to remain on the sidelines ahead of the European Central Bank’s (ECB) monetary policy announcement on Thursday. The renewed pressure was brought on by the optimistic service activity figures in the U.S. that increased the likelihood of a 75-bps rate hike. Meanwhile, on Tuesday, several ECB members sounded warier about aggressive policy normalization and reminded of the Fed-ECB policy divergence. Eurostat will now publish figures on the second-quarter employment change and GDP growth.
GBP
Reports that incoming UK Prime Minister Liz Truss planned to freeze home energy bills for 18 months aided the British pound's resilience against its major rivals yesterday. Truss said in a speech on Tuesday that she would lower taxes to reward hard work, take action on the energy problem, and publish her plan as soon as this week. The Pound remains reasonably quiet early Wednesday. Traders are now paying attention to the Bank of England's monetary policy hearings. Meanwhile, BoE policymaker Mann stated that interest rates should be hiked more aggressively since the central bank's gradualist policy has failed to contain the rise in borrowing costs so far, warning of rising inflation in the UK economy for an extended period of time.
JPY
The uninterrupted JPY depreciation resumed early Wednesday as the Yen hit a new multi-decade low. So far this week, the Yen has lost more than 400 pips. When asked about currency intervention, Japanese Finance Minister Shunichi Suzuki said on Wednesday, "we will take necessary steps." According to traders, Yen's selloff's fundamentals have remained unchanged since March. The market is pricing in a widening gap between tightening monetary policy in the United States and the Bank of Japan's tightly cemented ultra-loose stance. Expectations of further downside were bolstered by robust U.S. economic statistics posted on Tuesday, raising the likelihood of further Fed rate hikes.
CAD
The Canadian dollar maintained its bearish momentum from yesterday's session, losing 0.08% against the U.S. dollar. The risk-off mood ahead of the Bank of Canada's interest rate decision puts Loonie traders on the defensive. Aside from that, falling crude oil prices could damage the commodity-linked Loonie and add to further weakness. Moving forward, markets anticipate another massive rate hike as the central bank attempts to rein in sky-high inflation. Money markets are already pricing in a 75 basis point rate hike, raising borrowing costs to 3.25%.
MXN
The Mexican Peso declined for the third time in four days on Tuesday as the U.S. dollar and Treasury yields rose; the currency remained within its current trading range even as a measure of the greenback touched a record high. Meanwhile, three Banxico meetings are set for September, November, and December, with the curve pricing a 75bps increase this month and a 50bps increase in November. The big catalyst this week is August inflation, which is due on Thursday. In other news, Banxico Deputy Governor Jonathan Heath stated that the market consensus is correct: the reference rate might rise to 10% and that the central bank should maintain its present 600bps rate gap with the Fed.
CNY
As disappointing Chinese trade data depressed sentiment and underlined economic risks from declining global demand and multiple domestic disruptions, the Yuan plunged to levels last seen in July 2020. China's exports increased 7.1% in August, falling short of market expectations of 12.8% and dropping considerably from an 18% increase in July, while imports remained sluggish. Meanwhile, the People's Bank of China stated that the forex reserve requirement ratio would be reduced by 200 basis points to 6% commencing September 15. The Yuan is down nearly 10% this year as China’s Covid-battered economy and divergent monetary policy make the currency less attractive to investors.
BRL
Yesterday, the Brazilian currency further weakened against the dollar, declining 1.62% on hawkish Fed remarks about rate hike prospects. Today, markets will remain closed. However, investors will monitor the pro-Bolsonaro demonstrations scheduled for Brazil's Independence Day as an indication of the incumbent's popularity and a proxy for how political noise can increase amid the presidential race.
As contemporaries of each other, Edgar Allan Poe and Nathaniel Hawthorne endeavored to write about man’s dark side, the supernatural influence, and moral truths. Each writer saw man as the center-point in his stories; Poe sees man’s internal struggle as madness, while Hawthorne sees man as having a “secret sin.” Each had his reasons for writing in the Gothic format. Poe was not a religious man; he was well educated and favored reading German Gothic literature, which would become the basis for his own writing. Hawthorne, on the other hand, called on his Puritan-Calvinistic background to influence his writing style, along with his formal education and the self-imposed solitary time he spent reading and observing nature. Poe’s writing allows the reader to observe man’s thoughts and behaviors from within his mind and demonstrates how his behavior influences his surroundings. Hawthorne’s writing, by contrast, shows man’s behavior affected by outside influences, placing him in settings that will manipulate his emotional and mental behavior in an effort to deliver a moral theme. Each author would write his own version of a Gothic tale that would spin the reader’s imagination into places it might not otherwise go.
The mechanics of Gothic fiction contain two key aspects, the first is allegory, and the second is the use of symbol. Poe and Hawthorne each utilized these two distinct styles of Gothic writing. Poe would favor the use of symbols in his writing while Hawthorne depended strongly on the use of allegory to create his tales. James K. Folsom describes Hawthorne’s use of allegory as “not as a statement of artistic means, in some sense roughly equitable with ‘symbolism,’ but rather as a statement of artistic ends, in some moralistic sense. An allegory for Hawthorne is a moral tale […]” (77). Hawthorne saw his writing in allegorical terms to bring to the reader’s attention concrete realities by way of abstract ideas; he was able to imagine the natural world into an imaginary--supernatural one.
A coal fire diffuses a “scarcely visible” but “mild, heart-warm influence” throughout the room, while moonlight from the window “produces a very beautiful effect.” […] all the familiar objects of the room “are invested with something like strangeness and remoteness,” as if one were viewing them after the passage of years. […] “such a medium is created that the room seems just fit for the ghosts of persons very dear, who have lived in the room with us […] It would be like a matter of course, to look round, and find some familiar form in one of the chairs” (156).
Candlewick, 9780763636791, 384pp.
Publication Date: January 22, 2008
Other Editions of This Title:
Paperback (10/13/2009)
Hardcover (9/12/2006)
Paperback (1/25/2011)
Prebound (1/25/2011)
Compact Disc (10/14/2008)
Compact Disc (1/9/2007)
Hardcover (10/14/2008)
Prebound (1/25/2011)
Prebound (1/1/2008)
Hardcover, Large Print, Large Print (6/1/2007)
Winter 2009 Kids' List
— Mark David Bradshaw, Watermark Books, Wichita, KS
Description
Young Octavian is being raised by a group of rational philosophers known only by numbers — but it is only after he opens a forbidden door that he learns the hideous nature of their experiments, and his own chilling role in them. Set in Revolutionary Boston, M. T. Anderson’s mesmerizing novel takes place at a time when Patriots battled to win liberty while African slaves were entreated to risk their lives for a freedom they would never claim. The first of two parts, this deeply provocative novel reimagines the past as an eerie place that has startling resonance for readers today.
About the Author
Praise For The Astonishing Life of Octavian Nothing, Traitor to the Nation, Volume I: The Pox Party…
—The Wall Street Journal
Anderson’s imaginative and highly intelligent exploration of the horrors of human experimentation and the ambiguous history of America’s origins will leave readers impatient for the promised sequel.
—The New York Times Book Review
A historical novel of prodigious scope, power and insight...This is the Revolutionary War seen at its intersection with slavery through a disturbingly original lens.
—Kirkus Reviews (starred review)
Fascinating and eye-opening… this powerful novel will resonate with contemporary readers.
—School Library Journal (starred review)
Octavian's narration...quickly draws readers into its almost musical flow, and the relentless action and plot turns are powerful motivators.
—Bulletin of the Center for Children's Books (starred review)
A serious look at Boston, pre-Revolution. It's layered, it's full of historic reference, and it's about slavery and equal rights.
—The Boston Globe
The story’s scope is immense, in both its technical challenges and underlying intellectual and moral questions. . . . Readers will marvel at Anderson’s ability to maintain this high-wire act of elegant, archaic language and shifting voices.
—Booklist (starred review)
With an eye trained to the hypocrisies and conflicted loyalties of the American Revolution, Anderson resoundingly concludes the finely nuanced bildungsroman begun in his National Book Award–winning novel.
Biography
Emily Bianchi joined the Goizueta Business School in 2011. She holds a PhD in Management from Columbia University and a BA in Psychology from Harvard University. Bianchi's research examines how the state of the economy shapes attitudes and behaviors ranging from individualism to ethics. Her work also looks at how economic conditions in early adulthood influence later job attitudes, self-concepts, and moral behavior. Her work has been covered by The New York Times, The Atlantic, NPR's Marketplace, USA Today, The Financial Times, Businessweek and others. Prior to graduate school, Bianchi was a Senior Consultant at Booz Allen Hamilton.
Areas of Expertise (6)
Economic Conditions and Early Adulthood
Job Attitudes
Moral Behavior
Economic Conditions and Psychology
Organizational Behavior
Social Psychology
Education (3)
Columbia University: PhD, Management 2012
Columbia University: MPhil, Management 2009
Harvard University: BA, Psychology & Afro-American Studies 2001
Media Appearances (9)
CEOs Who Began Their Careers During Booms Tend to Be Less Ethical
Harvard Business Review online
2017-05-12
For CEOs who began their careers when jobs were plentiful and ethical shortcuts were more prevalent, bending rules may become the template for how things are done and what it takes to succeed and survive
How Money Affects Social Ties
TEDx online
2017-03-07
Emily Bianchi’s talk discusses economic conditions and its role in shaping attitudes and behaviors in our personal and professional lives.
Higher-Earning Households Tend To Spend More Time Alone
NPR radio
2016-05-15
"Does access to money predict social behavior? That's the question posed by researchers Emily Bianchi and Kathleen Vohs in a new study. They dug into data for nearly 30,000 respondents of the General Social Survey - that's a long-running sociological survey of American attitude and behavior - to find out how what we earn affects how we spend our time.'
Narcissists are everywhere — but they may not be the people you think they are
The Washington Post print
2016-10-07
Most (but not all) putative narcissists today are innocent victims of an overused label. Millennials? Nah. People are always more narcissistic when they’re young.
Why Richer People Spend More Time With Their Friends
The Atlantic online
2016-05-09
"A new study suggests that with money comes the luxury of choosing not to socialize mostly with neighbors and family members."
The Fall of Narcissism
The New York Times online
2014-06-04
Emily Bianchi, a professor of organization and management at Emory University and the study’s first author, told MinnPost that the relatively flush ’80s and ’90s might have helped touch off a rise in narcissism, but “the Great Recession may knock this upward trajectory off course.”...
How the recession shaped a more humble generation
Marketplace online
2014-05-30
But, they're also young people who came of age during a recession. According to a study done by Dr. Emily Bianchi of Emory University’s Goizueta Business School, recession is an event that could mitigate characteristics of narcissism. "We don’t know a whole lot about where narcissism comes from, but what we do know seems to suggest that narcissism is tempered by adversity and to some extent by failure,” she says. The word narcissist is one that is often misused to describe people who are vain, rude, or plain old self-centered. In psychology, narcissism has distinguishing characteristics other than self-admiration. “Hallmarks of narcissism are lack of empathy, a sense that one is better than other people around them, a heightened sense of self-importance. Even a willingness to exploit other people to achieve one’s own gains,” Bianchi says.
Millennials might not be as narcissistic as everyone thought
The Washington Post online
2014-05-14
That’s according to new research published in the journal Psychological Science. Emily Bianchi of Emory University notes that “people who enter adulthood during recessions are less likely to be narcissistic later in life” than people who start working during more financially comfortable times. (Thanks to Melissa Dahl, writing for the new site Science of Us, for flagging this.)...
Study: Opportunities in Young Adulthood Linked to Later Narcissism
The Atlantic online
2014-05-13
Emily Bianchi of Emory University notes in the study that “economic recessions tend to be particularly devastating for young adults,” who are more likely to be unemployed, underemployed, and underpaid during a down economy than older adults with more experience. It stands to reason that such an experience could have a lasting effect, that what you get (or don’t get) when you’re just starting out as a working adult could shape your views of what you think you deserve...
Articles (7)
How the Economy Shapes the Way We Think about Ourselves and Others
Current Opinion in Psychology
Emily C. Bianchi
2020-02-06
While recessions are a regular feature of modern economic life, researchers have only recently begun to explore their psychological implications. This review examines evidence that recessions are linked to changes in how people regard themselves and others. Specifically, it reviews work suggesting that recessions are associated with declines in individualism and increases in interdependence. It also reviews evidence indicating that economic turmoil is associated with greater racial animosity. Finally, it considers some psychological processes underlying these effects.
Reexamining the Link Between Economic Downturns and Racial Antipathy: Evidence that Prejudice Against Blacks Rises in Recessions
Psychological Science
Emily C. Bianchi, Erika V. Hall, & Sarah Lee
2018-03-02
Scholars have long argued that economic downturns intensify racial discord. However, empirical support for this relationship has been mixed, with most recent studies finding no evidence that downturns provoke greater racial animosity. Yet most past research has focused on hate crimes, a particularly violent and relatively infrequent manifestation of racial antipathy. In this article, we reexamine the relationship between economic downturns and racial acrimony using more subtle indicators of racial animosity. We found that during economic downturns, Whites felt less warmly about Blacks (Studies 1 and 2), held more negative explicit and implicit attitudes about Blacks, were more likely to condone the use of stereotypes, and were more willing to regard inequality between groups as natural and acceptable (Study 2). Moreover, during downturns, Black musicians (Study 3) and Black politicians (Study 4) were less likely to secure a musical hit or win a congressional election.
American Individualism Rises and Falls with the Economy: Cross-temporal Evidence that Individualism Declines when the Economy Falters
Journal of Personality and Social Psychology
Emily C. Bianchi
2016-07-16
Past work has shown that economic growth often engenders greater individualism. Yet much of this work charts changes in wealth and individualism over long periods of time, making it unclear whether rising individualism is primarily driven by wealth or by the social and generational changes that often accompany large-scale economic transformations. This article explores whether individualism is sensitive to more transient macroeconomic fluctuations, even in the absence of transformative social changes or generational turnover. Six studies found that individualism swelled during prosperous times and fell during recessionary times. In good economic times, Americans were more likely to give newborns uncommon names (Study 1), champion autonomy in children (Study 2), aspire to look different from others (Study 3), and favor music with self-focused language (Study 4). Conversely, when the economy was floundering, Americans were more likely to socialize children to attend to the needs of others (Study 2) and favor music with other-oriented language (Study 4). Subsequent studies found that recessions engendered uncertainty (Study 5) which in turn tempered individualism and fostered interdependence (Study 6).
Do Good Times Breed Cheats? Prosperous Times Have Immediate and Lasting Implications for CEO Misconduct
Organization Science
Emily C. Bianchi & Aharon Cohen Mohliver
2016-09-22
We examine whether prosperous economic times have both immediate and lasting implications for corporate misconduct among chief executive officers (CEOs). Drawing on research suggesting that prosperous times are associated with excessive risk-taking, overconfidence, and more opportunities to cheat, we first propose that CEOs will be more likely to engage in corporate misconduct during good economic times. Next, we propose that CEOs who begin their careers in prosperous times will be more likely to engage in self-serving corporate misconduct later in their careers. We tested these hypotheses by assembling a large data set of American CEOs and following their stock option reporting patterns between 1996 and 2005. We found that in good economic times, CEOs were more likely to backdate their stock options grants. Moreover, CEOs who began their careers in prosperous times were more likely to backdate stock option grants later in their careers. These findings suggest that the state of the economy can influence current ethical behavior and leave a lasting imprint on the moral proclivities of new workforce entrants.
Social Class and Social Worlds: Income Predicts the Frequency and Nature of Social Contact
Social Psychological and Personality Science
Emily C. Bianchi & Kathleen D. Vohs
2016-05-17
Does access to money predict social behavior? Past work has shown that money fosters self-sufficiency and reduces interest in others. Building on this work, we tested whether income predicts the frequency and type of social interactions. Two studies using large, nationally representative samples of Americans (N = 118,026) and different measures of social contact showed that higher household income was associated with less time spent socializing with others (Studies 1 and 2) and more time spent alone (Study 2). Income also predicted the nature of social contact. People with higher incomes spent less time with their families and neighbors and spent more time with their friends. These findings suggest that income is associated with how and with whom people spend their time.
Entering Adulthood in a Recession Tempers Later Narcissism
Psychological Science
Emily C. Bianchi
2014-07-24
Despite widespread interest in narcissism, relatively little is known about the conditions that encourage or dampen it. Drawing on research showing that macroenvironmental conditions in emerging adulthood can leave a lasting imprint on attitudes and behaviors, I argue that people who enter adulthood during recessions are less likely to be narcissistic later in life than those who come of age in more prosperous times. Using large samples of American adults, Studies 1 and 2 showed that people who entered adulthood during worse economic times endorsed fewer narcissistic items as older adults. Study 3 extended these findings to a behavioral manifestation of narcissism: the relative pay of CEOs. CEOs who came of age in worse economic times paid themselves less relative to other top executives in their firms. These findings suggest that macroenvironmental experiences at a critical life stage can have lasting implications for how unique, special, and deserving people believe themselves to be.
The Bright Side of Bad Times: The Affective Advantages of Entering the Workforce in a Recession
Administrative Science Quarterly
Emily C. Bianchi
2013-12-06
This paper examines whether earning a college or graduate degree in a recession or an economic boom has lasting effects on job satisfaction. Across three studies, well-educated graduates who entered the workforce during economic downturns were more satisfied with their current jobs than those who entered during more prosperous economic times. Study 1 showed that economic conditions at college graduation predicted later job satisfaction even after accounting for different industry and occupational choices. Study 2 replicated these results and found that recession-era graduates were more satisfied with their jobs both early and later in their careers and even when they earned less money. A third cross-sectional study showed that people who entered the workforce in bad economies were less likely to entertain upward counterfactuals, or thoughts about how they might have done better, and more likely to feel grateful for their jobs, both of which mediated the relationship between economic conditions at workforce entry and job satisfaction. While past research on job satisfaction has focused largely on situational and dispositional antecedents, these results suggest that early workforce conditions also can have lasting implications for how people affectively evaluate their jobs.
Our family has had ample opportunity this past month to discuss the importance of decisions. There have been the normal school decisions, friend issues, but added to that our family circle and community have been impacted by infidelity, pregnancy outside of marriage, loss of life, crime, etc… Those “big” items that cause you to stop. As parents, my husband and I have found ourselves pausing to think long and hard about how to explain. How to teach grace, justice and accountability. It is not easy. It often doesn’t make sense. We want our girls to be forgiving and gracious, yet we want them to understand that no place in God’s word does it say that forgiveness means no consequences, no judgement, no accountability. They have questions. We have questions. We admit all we do not understand. We share what we know to be truth.
One thing became abundantly clear as we had these discussions over the course of the past few months. God’s word is not void of instruction. His word is clear we have choices, which means we have decisions to make. Every decision matters, for THIS moment and for eternity. What seems like a seemingly singular event can carry heart ache or patterns of behavior through generations. One lapse in judgement can create life. In some of the situations we discussed, our girls could quickly see a pattern of decisions/ a series of decisions, that led to the hardship. In others it is not so clear. For some, they are innocent, yet the choices/the decisions of those around them have had impact upon them. It is much like dropping the stone in the still water. Circles form, spreading out through the entire lake. Even the smallest of stones can create a small wave, movement of still waters.
While there were many “Why” questions left unanswered, one lesson was learned by us all. We were reminded our decisions matter. Our decisions impact others. Our decisions are for THIS moment and eternity. Whether our decisions be good or bad, they have lasting impact.
I am not glad the situations we discussed exist. They are difficult. They carry pain and sorrow. They bring disappointment. However, I am glad the situations led to discussions which led to realizations. In THIS moment we are conscious of our every choice. I know we will not always be on high alert, although we should be. But for today we are. I pray the consciousness lingers. I pray we remember. I pray we choose His truths, His ways, and we decide obedience.
The present invention relates to a decoding device and a decoding method that decode coded data by using a low density parity check (which will hereinafter be abbreviated to LDPC) code.
An error correcting code is applied to a communication system that transmits the data without any error, or a computer system etc that reads, without any error, the data saved on a magnetic disc, a compact disc, etc. This type of error correcting code is classified roughly into a block code and a convolutional code. The block code is defined as a method by which information data are divided into data blocks and a codeword is created from the data blocks. On the other hand, the convolutional code is defined as a method of coding the data in relation to other data blocks (e.g., the data blocks in the past).
LDPC code is a type of block code having a very high error correcting capability and exhibiting a characteristic close to the Shannon limit. LDPC code has already been utilized in the field of magnetic discs etc, and is expected to be applied to next generation mobile communication systems.
The LDPC code is generally defined by a parity check matrix in which the matrix elements consist of "0" and "1", and the matrix elements "1" are sparsely allocated. The parity check matrix of the LDPC code is an M x N matrix, where M is the number of check symbols (parity bit length) and N is the code length (code bit length) (see FIG. 11).
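As a toy illustration of this definition, the parity check H·c = 0 (mod 2) can be written out numerically; the matrix and codeword below are arbitrary examples chosen for the sketch, not taken from this document.

```python
# Toy M x N parity check matrix and the parity check H . c = 0 (mod 2).
# The matrix and codeword are arbitrary examples, not from this document.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],    # M = 3 check equations (rows)
              [0, 1, 1, 0, 1, 0],    # N = 6 code bits (columns)
              [1, 0, 0, 0, 1, 1]])

c = np.array([1, 0, 1, 1, 1, 0])     # candidate codeword

syndrome = H @ c % 2                 # all-zero syndrome: c satisfies every check
print(syndrome)                      # [0 0 0]
```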
LDPC codes include regular LDPC code and irregular LDPC code. Regular LDPC code employs a parity check matrix in which the weight w_r of each row (the number of "1"s in one row of the parity check matrix) and the weight w_c of each column are each fixed, and the relationship w_c << N is established. Irregular LDPC code uses a parity check matrix in which the weight of each row and the weight of each column are not fixed. Irregular LDPC code is exemplified by the IRA (Irregular Repeat Accumulate) code, defined by a parity check matrix in which the weight w_c of each column is not fixed while the weight w_r of each row is fixed.
A decoding method for this type of LDPC code is the SPA (Sum-Product Algorithm). The SPA is an algorithm that keeps the calculation quantity small while improving the error rate characteristic, owing to the feature of the LDPC code that the matrix elements of the parity check matrix contain only a small number of "1"s. The SPA is an algorithm for outputting an estimated word on the basis of likelihood information (Log Likelihood Ratio (LLR)) of the codeword obtained by a check node process (row process) and a variable node process (column process). The SPA performs high-accuracy decoding by repeating the check node process and the variable node process a predetermined number of times (the number of rounds).
An SPA-based decoding procedure will hereinafter be explained. It is to be noted that the following description sometimes expresses the parity check condition given by the parity check matrix in the form of a Tanner graph (bipartite graph). Specifically, the matrix elements "1" of the parity check matrix are expressed as [edges], the code bits corresponding to the respective columns of the parity check matrix are represented by [variable nodes], and the check bits corresponding to the respective rows are represented by [check nodes].
At first, with respect to such check nodes s_j as to gain the elements h_ji = 1 of the parity check matrix about all of the variable nodes x_i, the relative likelihood q_ij^(0)(0) of the conditional anterior probability is initialized by the following Formula (1) (which will hereinafter be termed the initializing process). In Formula (1), q_i^(0)(0) designates the relative likelihood of the posterior probability that establishes x_i = 0 in round 0, and becomes the relative likelihood of the anterior probability for the reception signal.
$$q_{ij}^{(0)}(0) = q_i^{(0)}(0) \qquad (1)$$

[Mathematical Expression 1]
Next, the check node process is executed. In connection with such variable nodes x_i as to establish h_ji = 1 about each check node s_j, the relative likelihood r_ji^(u)(b) of the posterior probability is obtained by the following Formula (4). The relative likelihood r_ji^(u)(b) represents the relative likelihood of the posterior probability that the check node gains s_j = 0 under the condition that x_i = b in round u. S_j represents the aggregation of the indices i of the variable nodes x_i connected to the check node s_j, and S_j\i designates the aggregation obtained by subtracting i from S_j.
$$\alpha_{ij}^{(u)} = \mathrm{sign}\left(q_{ij}^{(u)}(0)\right), \qquad \beta_{ij}^{(u)} = \left|q_{ij}^{(u)}(0)\right|$$

$$r_{ji}^{(u)}(0) = \left(\prod_{k \in S_j \backslash i} \alpha_{kj}^{(u)}\right) \cdot \phi\left(\sum_{m \in S_j \backslash i} \phi\left(\beta_{mj}^{(u)}\right)\right) \qquad (4)$$

where

$$\mathrm{sign}(x) \equiv \begin{cases} 1, & x \ge 0 \\ -1, & x < 0 \end{cases}, \qquad \phi(x) \equiv \log\frac{e^x + 1}{e^x - 1}$$

[Mathematical Expression 2]
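For illustration, the check node update of Formula (4) can be transcribed directly into software. The following is a generic SPA sketch; the function and variable names are ours, nonzero input magnitudes are assumed (phi diverges at zero), and it describes no particular decoder circuit.

```python
# Generic software transcription of the check node update of Formula (4) for a
# single check node j. q_in holds q_kj(0) for the variable nodes in S_j; the
# names are ours, and nonzero input magnitudes are assumed (phi(0) diverges).
import math

def phi(x: float) -> float:
    # phi(x) = log((e^x + 1) / (e^x - 1)); note phi is its own inverse.
    return math.log((math.exp(x) + 1.0) / (math.exp(x) - 1.0))

def check_node_update(q_in: list) -> list:
    r_out = []
    for i in range(len(q_in)):
        others = [q for k, q in enumerate(q_in) if k != i]       # S_j \ i
        sign = math.prod(1.0 if q >= 0 else -1.0 for q in others)
        magnitude = phi(sum(phi(abs(q)) for q in others))
        r_out.append(sign * magnitude)                           # r_ji(0)
    return r_out

print(check_node_update([1.2, -0.8, 2.5]))
```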
Next, the variable node process is executed. In connection with such check nodes s_j as to establish the elements h_ji = 1 of the parity check matrix about all of the variable nodes x_i, the relative likelihood q_i^(u+1)(b) of the posterior probability is obtained by Formula (5). In Formula (5), X_i represents the aggregation of the indices j of the check nodes s_j connected to the variable node x_i, and X_i\j designates the aggregation obtained by subtracting j from X_i.
$$q_i^{(u+1)}(0) = q_i^{(0)}(0) + \sum_{k \in X_i} r_{ki}^{(u)}(0) \qquad (5)$$

[Mathematical Expression 3]
As described above, when the relative likelihood q_i^(u+1)(0) of the posterior probability is obtained for each variable node, a temporary estimated word (estimated bit sequence) is generated based on these pieces of likelihood information, as shown in Formula (6).
$$\hat{x}_i \equiv \begin{cases} 0, & q_i^{(u+1)}(0) \ge 0 \\ 1, & q_i^{(u+1)}(0) < 0 \end{cases} \qquad (6)$$

[Mathematical Expression 4]
Then, the thus-obtained estimated bit sequence is subjected to a parity check. If Formula (7) is satisfied, the estimated bit sequence is output. In Formula (7), the suffix "T" represents transposition.
$$\hat{x} \cdot H^{T} = 0 \qquad (7)$$

[Mathematical Expression 5]
If Formula (7) is not satisfied, the arithmetic operation shown in Formula (8) is performed to obtain the relative likelihood q_ij^(u+1)(b) of the conditional anterior probability for the next round (u + 1) (this process will hereinafter be referred to as the anterior variable node process).
$$q_{ij}^{(u+1)}(0) = q_i^{(0)}(0) + \sum_{k \in X_i \backslash j} r_{ki}^{(u)}(0) = q_i^{(u+1)}(0) - r_{ji}^{(u)}(0) \qquad (8)$$

[Mathematical Expression 6]
Hereafter, based on the relative likelihood q_ij^(u+1)(b) of the conditional anterior probability acquired by this arithmetic operation, the check node process, the variable node process, the temporary estimation and the parity check of round (u + 1) are carried out. The series of processes is terminated after the parity check has been satisfied or the processes have been executed a number of times corresponding to a predetermined maximum round count.
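Putting Formulas (1) and (4) to (8) together, the whole loop can be modeled compactly in software. The sketch below is a plain reference rendering of the SPA over a small numpy parity check matrix, offered only to clarify the flow of the algorithm; it reflects none of the hardware parallelism discussed hereafter, and the data values are stand-ins.

```python
# Plain software reference sketch of the SPA loop of Formulas (1), (4)-(8).
# H is a small numpy parity check matrix and llr holds the channel values
# q_i(0); this clarifies the flow only and models no particular hardware.
import numpy as np

def phi(x):
    return np.log((np.exp(x) + 1.0) / (np.exp(x) - 1.0))

def spa_decode(H, llr, max_rounds=50):
    M, N = H.shape
    q = np.where(H, llr[None, :], 0.0)            # Formula (1): q_ij = q_i(0)
    for _ in range(max_rounds):
        # Check node process, Formula (4).
        r = np.zeros_like(q)
        for j in range(M):
            idx = np.flatnonzero(H[j])
            for i in idx:
                others = idx[idx != i]            # S_j \ i
                sign = np.prod(np.sign(q[j, others]))
                r[j, i] = sign * phi(np.sum(phi(np.abs(q[j, others]))))
        # Variable node process, Formula (5): total posterior LLR per bit.
        q_total = llr + r.sum(axis=0)
        # Temporary estimate and parity check, Formulas (6) and (7).
        x_hat = (q_total < 0).astype(int)
        if not np.any(H @ x_hat % 2):
            return x_hat, True
        # Anterior variable node process, Formula (8).
        q = np.where(H, q_total[None, :] - r, 0.0)
    return x_hat, False

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
llr = np.array([2.1, -0.4, 1.3, 0.9, 1.7, -1.5])  # stand-in channel LLRs
print(spa_decode(H, llr))
```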
Next, a configuration of the conventional decoder using the SPA will be explained. In the case of dealing with a code whose row weight is fixed, as with the IRA etc, the decoder is configured to include arithmetic units corresponding to the row weight and to process the arithmetic operations in parallel on a row-by-row basis. A mobile terminal performing mobile communications in particular imposes limitations on circuit scale and therefore needs to decrease the number of arithmetic units used in the decoder to the greatest possible degree; this configuration increases the availability efficiency of the arithmetic unit resources and speeds up the decoding process. FIG. 12 is a diagram showing the configuration of a preferred conventional decoder corresponding to a predetermined parity check matrix, wherein the predetermined parity check matrix is illustrated on the left side, and the configuration of the conventional decoder corresponding to the parity check matrix is shown on the right side.
The parity check matrix shown in FIG. 12 is segmented into blocks each having one edge in every row within each of the blocks (blocks 1-3 in FIG. 12). In the decoder corresponding to this parity check matrix, arithmetic units (edge-by-edge arithmetic units 21, 22 and 23) are allocated one by one to the respective blocks in the parity check matrix in order to actualize parallel processing. Also provided is a row-by-row arithmetic unit 11 that executes batchwise the arithmetic operation on a row-by-row basis by using values calculated by the edge-by-edge arithmetic units 21, 22 and 23.
In the decoder having such a configuration, the edge-by-edge arithmetic units 21, 22 and 23 execute a check node process, a variable node process, an anterior variable node process, etc with respect to the edge existing in the processing target block on a check-node-by-check-node basis (on the row-by-row basis in the parity check matrix), and the row-by-row arithmetic unit 11 performs a necessary arithmetic operation (e.g., the Formula (4) etc) on the basis of the values calculated by the edge-by-edge arithmetic units 21, 22 and 23. This type of arithmetic operation on the check-node-by-check-node basis is carried out for all of the check nodes, whereby the temporary estimation and the parity check are, it follows, executed based on the finally-acquired likelihood information (posterior probability relative likelihood) of each variable node.
In the decoder adopting this configuration, further, it is required that a memory area storing the likelihood information of each variable node is connected to each arithmetic unit using the likelihood information of the variable node. This is because the likelihood information calculated in the previous round is used in each round. In the case of considering such a connection between the memory area and the arithmetic unit, normally a configuration is taken, wherein memory blocks 31, 32 and 33 in the parity check matrix are set corresponding to the respective blocks 1, 2 and 3, and pieces of likelihood information of the respective code bits belonging to the respective block are stored in the memory blocks 31, 32, and 33. The reason for this configuration is that it makes it possible to establish a one-to-one connection between the arithmetic unit and the memory and, accordingly, to simplify the circuit configuration.
It should be noted that a technology disclosed in the following document is given as the conventional art related to the present invention of the present application: William E. Ryan, "An Introduction to LDPC Codes", Department of Electrical and Computer Engineering, The University of Arizona, U.S.A., August 19, 2003.
In the case of using the LDPC code in the field of mobile communications, however, an assumption is that a coding rate is changed corresponding to a propagation environment, and, in this case, it is required that the decoding process be executed using a different parity check matrix depending on every coding rate. In such a case, though it is conceivable to adopt a configuration providing a different decoder for every coding rate (each parity check matrix), it is not realistic in terms of the circuit scale to do this in the context of a mobile terminal for mobile communications.
Accordingly, there is a need for a decoder that can handle a plurality of coding rates. There is the following relationship between the coding rate and a size of the row weight and the parity check matrix.
High coding rate: Row weight is large, The number of rows in parity check matrix is small
Low coding rate: Row weight is small, The number of rows in parity check matrix is large
From this relationship, in a decoder handling the plurality of coding rates, the processing time increases at a low coding rate if the arithmetic operation is executed on the row-by-row basis. Therefore, in the case of processing at the low coding rate, the processing time needs to be decreased by processing multiple rows simultaneously. Namely, the configuration of the decoder must take account of the difference in the number of arithmetic unit resources used at each coding rate, because the row weight differs according to the coding rate. This entails abandoning the one-to-one connecting relationship between the arithmetic unit updating the likelihood information and the memory saving the likelihood information when a plurality of rows is processed simultaneously, because the respective rows of the parity check matrix bear no relation to one another.
FIGS. 13 through 16 are diagrams each illustrating the configuration of the decoder corresponding to the plurality of coding rates, wherein specifically FIG. 13 illustrates a correspondence coding specification, FIG. 14 shows the parity check matrix and the configuration of the decoder in the case of corresponding to a low coding rate (R = 1/5), FIG. 15 shows the parity check matrix and the configuration of the decoder in the case of corresponding to a high coding rate (R = 4/5), and FIG. 16 shows the configuration of the decoder corresponding to a plurality of coding rates. The decoder illustrated in FIGS. 13 through 16 is configured so that the parity check matrix is segmented into 12 blocks (B1 - B12), and the respective arithmetic units (E1 - E12) process the arithmetic operations about the respective blocks.
In this configuration, when processing one row in the parity check matrix at one cycle while corresponding to the two coding rates shown in FIG. 13, all of the processes take 4000 cycles in the case of the low coding rate, and, by contrast, 1000 cycles suffice for all of the processes in the case of the high coding rate. Accordingly, the low coding rate needs an improved processing speed, and hence the configuration is attained so that, for instance, 4 rows are processed at one cycle (see FIG. 14). On the other hand, in the case of the high coding rate, the decoder is configured so that 1 row is processed at one cycle (see FIG. 15).
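The cycle arithmetic behind FIGS. 14 and 15 can be made explicit; in the sketch below, the row counts are inferred from the cycle figures quoted above (4000 rows at R = 1/5, 1000 rows at R = 4/5), which is an assumption for illustration.

```python
# Cycle arithmetic behind FIGS. 14 and 15; the row counts are inferred from
# the cycle figures quoted above (4000 rows at R = 1/5, 1000 rows at R = 4/5).
def cycles(num_rows: int, rows_per_cycle: int) -> int:
    return num_rows // rows_per_cycle

print(cycles(4000, 1))   # low rate, 1 row per cycle  -> 4000 cycles
print(cycles(4000, 4))   # low rate, 4 rows per cycle -> 1000 cycles
print(cycles(1000, 1))   # high rate, 1 row per cycle -> 1000 cycles
```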
As a result, for decoder configurations in the case of the two coding rates in combination, there is a need to predefine a multi-to-multi connecting relationship between the arithmetic units (E1 - E12) and the memories (M1 - M12), and further the connection needs to be switched over corresponding to the coding rate and the processing cycle.
Thus, although the decoder corresponding to a single coding rate is simply constructed of arithmetic units and memories, the decoder corresponding to a plurality of coding rates additionally requires at least a circuit for switching over the connections between the memories and the arithmetic units, and therefore such a problem arises that the circuit scale increases. Moreover, this connection switching circuit takes a complicated circuit configuration due to the necessity of dynamically switching over the multi-to-multi connection between the memories and the arithmetic units.
It is therefore desirable to provide a decoding device that actualizes a high-speed decoding process while restraining the circuit scale.
The present invention adopts the following configurations. Namely, one aspect of the present invention provides a decoding device decoding coded data coded by a low density parity check code in a way that uses a plurality of parity check matrixes, corresponding to a plurality of coding rates, in which a row weight gets fixed, comprising a pattern storing means storing information about a parity check matrix, in the plurality of parity check matrixes, corresponding to the coding rate of the coded data and the segmentation pattern of the parity check matrix, which is formed by segmenting each of the plurality of parity check matrixes into a plurality of row groups and into a plurality of column groups of which the number is the same throughout the plurality of parity check matrixes, and by allocating each of the plurality of parity check matrixes so that there is one edge allocation area, which is a unit area of a plurality of segmented unit areas and has an edge, in each of the plurality of column groups within each of the plurality of row groups, a likelihood information storing means storing likelihood information of the respective code bits of the coded data in a way that divides the likelihood information of the respective code bits of the coded data into each memory cell with respect to each of the plurality of column groups based on the stored segmentation pattern, and a plurality of edge-by-edge arithmetic means each connected, corresponding to any one of the edge allocation areas, to the memory cell storing the likelihood information about the column group to which the corresponding edge allocation area belongs, and updating the likelihood information of the code bit corresponding to an edge within the corresponding edge allocation area based on the likelihood information stored in the connected memory cell.
In embodiments of the present invention, the predetermined segmentation pattern is defined with respect to each of the plurality of parity check matrixes corresponding to the plurality of coding rates. In this segmentation pattern, the respective parity check matrixes are segmented so that there exists the plurality of row groups each having the predetermined number of rows and further there exists the plurality of column groups each having the predetermined number of columns. In the plurality of unit areas each serving as a minimum unit after segmentation, the segmentation is done so that one edge allocable area where an edge is allocated exists in each column group within each row group. The thus-determined segmentation pattern and the thus-determined parity check matrix are generated for every coding rate, and the parity check matrix corresponding to the coded data defined as a decoding target data in these pieces of information and the segmentation pattern thereof, are stored. The decoding device according to the present invention is configured corresponding to the thus-determined parity check matrix and the thus-determined segmentation pattern.
Each of the memory cells is set corresponding to each of the column groups according to this segmentation pattern, and stores the likelihood information of the code bit corresponding to the column (variable node) contained in the corresponding column group.
Each of the edge-by-edge arithmetic units is set corresponding to any one edge allocation area, and is connected to the memory cell storing the likelihood information of the code bit corresponding to the column group to which this edge allocation area belongs. Each edge-by-edge arithmetic unit updates, with this configuration taken, the likelihood information of the code bit corresponding to the position of the edge within the corresponding edge allocation area based on the likelihood information in the connected memory cell.
Thus, each memory cell is set corresponding to each column group (variable node group), and the edge-by-edge arithmetic unit is set corresponding to the edge allocation area as an only-one existence in each column group within each row group, and therefore, the connection between each memory cell and each edge-by-edge arithmetic unit is resultantly actualized in a one-to-one relationship.
Accordingly, whereas the decoding device corresponding to the plurality of coding rates has hitherto required a switch for switching over the complicated connection between the memory cells and the arithmetic units, the present invention makes it possible to omit this type of switch and, more essentially, to scale down the circuit of the decoding device.
Further, the plurality of edge-by-edge arithmetic units is provided and is set corresponding to the respective edge allocation areas, and hence it is feasible to execute the parallel processing of the edge-by-edge arithmetic operations and also to execute the high-speed decoding process.
Moreover, in the decoding device according to the present invention, the pattern storing means may store a position of the edge allocation area and a position of the edge within the edge allocation area, as information about each parity check matrix, and each of the edge-by-edge arithmetic means may determine an address of the should-be-updated likelihood information within the connected memory cell based on the stored edge position with respect to a processing target edge allocation area.
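For illustration only, the address determination just described might be modeled as follows; the data layout (the dataclass fields and per-row offsets) is an assumed encoding chosen for the example, as the present invention does not fix a concrete format.

```python
# Sketch of how an edge-by-edge arithmetic unit might resolve the address of
# the likelihood value to update. The field layout below is an assumed
# encoding for illustration; no concrete data format is fixed by the patent.
from dataclasses import dataclass

@dataclass
class EdgeAllocationArea:
    column_group: int        # column group = index of the connected memory cell
    edge_columns: list       # per-row column offset of the edge inside the area

def likelihood_address(area: EdgeAllocationArea, row_in_area: int):
    """Return (memory_cell_id, address_within_cell) for the edge in this row."""
    return area.column_group, area.edge_columns[row_in_area]

# Example: an area in column group 2 whose edges sit at offsets 3, 0, 1, 2.
area = EdgeAllocationArea(column_group=2, edge_columns=[3, 0, 1, 2])
print(likelihood_address(area, row_in_area=1))   # -> (2, 0)
```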
By using the present invention, it may be sufficient to retain the bare minimum of information (about the position of the edge allocation area and the position of the edge within the edge allocation area) based on the segmentation pattern without retaining all the information with respect to each of the parity check matrixes, thereby making it possible to restrain a memory capacity etc and to reduce the scale of the circuit.
Furthermore, each row group may be an aggregation of the rows subjected to a decoding process at one processing cycle, and the plurality of edge-by-edge arithmetic means may be provided, of which the number corresponds to at least a numerical value obtained by multiplying a row count of the rows undergoing the decoding process at one processing cycle by the row weight of the parity check matrix, and may update the likelihood information for the single row group at one processing cycle.
The use of a concept of the row group subjected to simultaneous processing at one processing cycle, makes it feasible to uniquely determine the connection between every edge-by-edge arithmetic unit and every memory cell at any processing cycle and at any coding rate with respect to the plurality of parity check matrixes corresponding to the plurality of coding rates.
Further, each parity check matrix and the segmentation pattern may have such allocation that each unit area is formed of at least one matrix, any one of the matrixes contained in the edge allocation areas becomes an edge allocation matrix where the edge is allocated, and other matrixes become zero matrixes, the pattern storing means may store, as the information on each parity check matrix, a position of the edge allocation area, a position of the edge allocation matrix in the edge allocation area and a position of the edge within the edge allocation matrix, and each of the edge-by-edge arithmetic means may determine the address of the should-be-updated likelihood information within the connected memory cell based on the position of the edge allocation matrix related to the processing target edge allocation area and the position of the edge within the edge allocation matrix.
Preferably, each square matrix is segmented to have the plurality of rows and the plurality of columns, and further this segmentation uses the matrix taking a predetermined shape.
In this way, the number of the row groups themselves can be reduced simply by having, as the information about the parity check matrix, the position of the edge allocation area, the position of the edge allocation matrix within this edge allocation area and the position of the edge within the edge allocation matrix, and hence the should-be-stored information can be decreased owing to the information about the parity check matrix on the whole. It is therefore possible to further restrain the memory capacity.
Moreover, the matrix contained in each unit area may be a square matrix.
With this contrivance, the information needed in terms of recognizing the shape of the matrix contained in the unit area can be reduced, and hence the amount of information to be stored can be also decreased owing to the information about the parity check matrix.
Furthermore, each unit area may be one matrix, the edge may be allocated in the respective rows in the edge allocation area, the pattern storing means may store, as information about each parity check matrix, a position of the edge allocation area, a shape of the matrix in the edge allocation area, and a position of the edge within the edge allocation area, and each of the edge-by-edge arithmetic means may determine the address of the should-be-updated likelihood information within the connected memory cell based on the shape of the matrix in the edge allocation area related to the processing target edge allocation area and the position of the edge in the edge allocation area.
In embodiments of the present invention, the unit area can be formed in an arbitrary shape, and therefore a degree of freedom in terms of determining the parity check matrix can be increased, whereby the matrix having high error correcting capability as the LDPC code can be selected.
Further, each parity check matrix may be a parity check matrix in which to rearrange the columns and/or the rows of the real parity check matrix corresponding to the coded data, and the likelihood information storing means may rearrange, based on rearrangement information from the real parity check matrix into the stored parity check matrix, the likelihood information of the code bits of the coded data in accordance with the stored parity check matrix, may thereafter divide the likelihood information into the respective memory cells per column group based on the stored segmentation pattern, and may store the divided likelihood information.
As mentioned above, the parity check matrix in which to rearrange the columns and/or the rows of the real parity check matrix that should be actually used for decoding, is used. Then, for corresponding to the rearrangement, the process is executed after rearranging pieces of likelihood information of the respective code bits of the coded data in accordance with the parity check matrix.
Hence, it is possible to deal with a case where the real parity check matrix does not match with the segmentation pattern of the present invention, and it is therefore possible to determine the parity check matrix having the high degree of freedom taking into consideration the error correcting capability of the LDPC code.
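For illustration, such a rearrangement might be modeled as below, under the assumption (made only for the example) that the rearrangement information is given as a simple column-permutation index array.

```python
# Sketch of the likelihood rearrangement for a column-permuted parity check
# matrix. 'perm' is an assumed representation of the rearrangement
# information: perm[k] gives the real-matrix column placed at position k of
# the stored matrix.
import numpy as np

perm = np.array([2, 0, 3, 1, 5, 4])          # example permutation
llr_real = np.array([0.9, -1.2, 0.3, 2.0, -0.7, 1.1])

llr_stored = llr_real[perm]                  # order LLRs to match stored matrix
# ... decode with the stored matrix ...
decoded_stored = (llr_stored < 0).astype(int)

inverse = np.argsort(perm)                   # undo the permutation on output
decoded_real = decoded_stored[inverse]
print(decoded_real)
```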
It should be noted that the present invention may also be embodied in a program that allows a computer to perform any one of the functions actualized. Moreover, the present invention may also be provided in the form of a readable-by-computer storage medium storing such a program.
FIG. 1 is a diagram showing an example of a circuit configuration of a decoding device in a first embodiment;
FIG. 2 is a diagram showing a coding specification in the first embodiment;
FIG. 3 is a diagram illustrating a parity check matrix in the first embodiment;
FIG. 4 is a diagram showing the connecting relationship between the arithmetic units and the memories at a coding rate 1/5 in the first embodiment;
FIG. 5 is a diagram showing the connecting relationship between the arithmetic units and the memories at a coding rate 4/5 in the first embodiment;
FIG. 6 is a flowchart showing an operational example of the decoding device in the first embodiment;
FIG. 7 is a diagram showing the parity check matrix in a second embodiment;
FIG. 8 is a diagram showing the parity check matrix in a third embodiment;
FIG. 9 is a diagram showing the parity check matrix in a fourth embodiment;
FIG. 10 is a diagram showing an example of a circuit configuration of the decoding device in a fifth embodiment;
FIG. 11 is a diagram showing the parity check matrix;
FIG. 12 is a diagram showing a configuration of a conventional decoder;
FIG. 13 is a diagram showing a correspondence coding specification;
FIG. 14 is a diagram showing a configuration of the decoder corresponding to a plurality of coding rates when operating at the coding rate 1/5;
FIG. 15 is a diagram showing a configuration of the decoder corresponding to the plurality of coding rates when operating at the coding rate 4/5; and
FIG. 16 is a diagram showing a configuration of the decoder corresponding to the plurality of coding rates.
Embodiments of the present invention make it feasible to provide a decoding device that actualizes the high-speed decoding process while restraining the circuit scale. Reference is made, by way of example only, to the accompanying drawings in which:
A decoding device in each of embodiments of the present invention will hereinafter be described with reference to the drawings. It should be noted that configurations in the embodiments, which will hereinafter be discussed, are exemplifications, and the present invention is not limited to the configurations in the following embodiments.
The decoding device in a first embodiment of the present invention will hereinafter be explained.
FIG. 1 is a block diagram showing the example of the circuit configuration of the decoding device in the first embodiment. The decoding device in the first embodiment includes an input likelihood memory 101, a q_i^(u+1) memory 102 (which will hereinafter simply be termed the memory 102), a q_i^(u) memory 103 (below, the memory 103), row-by-row check node processing units 105 and edge-by-edge arithmetic units 110-1 to 110-Nc (corresponding to an edge-by-edge arithmetic means of the present invention). The reference symbols and numerals used in the following discussion are the same as those used in the introduction part of this specification.
The input likelihood memory 101 stores the posterior probability relative likelihood (which will hereinafter simply be termed likelihood information) q_i^(0) calculated by another circuit with respect to each code bit of a received code bit sequence.
The memory 102 and the memory 103 store the likelihood information q_i^(u+1) and the likelihood information q_i^(u) about each of the code bits. To be specific, the q_i^(u) memory 103 stores the likelihood information that is referred to in a round u, i.e., the likelihood information calculated in the previous round, and the q_i^(u+1) memory 102 stores the likelihood information calculated in the round u. Further, the memory 102 and the memory 103 are each constructed of a plurality of memory cells and each has a predetermined capacity. The likelihood information stored in each memory cell is determined in accordance with a parity check matrix. Moreover, a method of connecting each memory cell to each edge-by-edge arithmetic unit will be explained later on.
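As a rough illustration of this double-buffered arrangement, the following Python sketch models the two q memories as banks of per-column-group cells; the class name MemoryBank and the cell widths are invented for illustration and do not appear in the specification.

```python
# Minimal sketch of the double-buffered q memories (102/103), assuming
# each bank is split into memory cells, one per column group of the
# parity check matrix, with cell boundaries fixed by the segmentation.

class MemoryBank:
    def __init__(self, column_group_sizes):
        # one list (memory cell) of likelihood values per column group
        self.cells = [[0.0] * size for size in column_group_sizes]

    def read(self, cell_idx, offset):
        return self.cells[cell_idx][offset]

    def write(self, cell_idx, offset, value):
        self.cells[cell_idx][offset] = value

# Memory 103 holds q of the previous round (read side) and memory 102
# accumulates q of the current round (write side); after each round the
# contents of 102 are copied into 103, as described below.
sizes = [5, 3, 4]                 # illustrative column-group widths
mem_102, mem_103 = MemoryBank(sizes), MemoryBank(sizes)
```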
The row-by-row check node processing unit 105 receives a value given in the following Formula (9), which is outputted from each of the edge-by-edge arithmetic units 110-1 to 110-Nc, and calculates a value given in the following Formula (10) about all of such variable nodes x_i as establish h_ji = 1 with respect to the check node s_j. "α" and "β" shown in the Formula (9) are based on the Formulae (2), (3) and (4).
Formula (9):  α_ij^(u) · φ(β_ij^(u))

Formula (10): ( ∏_{k ∈ S_j} α_kj^(u) ) · φ( ∑_{m ∈ S_j} φ(β_mj^(u)) )

[Mathematical Expression 7]
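As a concrete reading of Formulae (9) and (10), the following Python sketch assumes the conventional sum-product definitions that Formulae (2) to (4) are understood to follow: α_ij = sign(q_ij), β_ij = |q_ij| and φ(x) = −ln tanh(x/2). The function names are invented for illustration.

```python
import math

def phi(x):
    # phi(x) = -ln(tanh(x/2)); phi is its own inverse for x > 0
    x = max(x, 1e-12)             # guard: phi(0) would be infinite
    return -math.log(math.tanh(x / 2.0))

def edge_value(q_ij):
    # per-edge value of Formula (9): the sign alpha and phi(beta)
    alpha = 1.0 if q_ij >= 0 else -1.0
    return alpha, phi(abs(q_ij))

def row_total(edge_values):
    # row totals behind Formula (10): the product of all signs and the
    # sum of phi(beta) over every edge of the check node s_j; the raw
    # totals are kept so each edge can remove its own share later
    sign_prod, phi_sum = 1.0, 0.0
    for alpha, phi_beta in edge_values:
        sign_prod *= alpha
        phi_sum += phi_beta
    return sign_prod, phi_sum
```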
The edge-by-edge arithmetic units 110-1 to 110-Nc are provided on an edge-by-edge basis (block-by-block basis). The edge-by-edge arithmetic units 110-1 to 110-Nc execute parallel processing for every check node. The edge-by-edge arithmetic units 110-1 to 110-Nc each have the same configuration, and hence the edge-by-edge arithmetic unit 110-1 will be exemplified in the following description. The edge-by-edge arithmetic unit 110-1 includes an anterior variable node processing unit 111-1, a check node first processing unit 112-1, a check node second processing unit 115-1, an r memory 116-1 and a variable node processing unit 117-1.
The anterior variable node processing unit 111-1 executes the anterior variable node process described in the introduction with respect to a target edge (a first edge) of the check node s_j. Specifically, the anterior variable node processing unit 111-1 reads a piece of likelihood information q_i^(u) about the target edge of the check node s_j from the q memory 103, further reads a piece of likelihood information r_ji^(u-1) of the previous round of the check node s_j from the r memory 116-1, and executes an arithmetic operation in the following Formula (11). The following Formula (11) corresponds to the above-mentioned Formula (8). The anterior variable node processing unit 111-1 thereby calculates the conditional anterior probability relative likelihood (which will hereinafter simply be termed anterior likelihood information) q_ij^(u) utilized in the round u. Note that when the round u = 0, the likelihood information r_ji^(u-1) of the previous round is not read out.
Formula (11): q_ij^(u) = q_i^(u) − r_ji^(u-1)

[Mathematical Expression 8]
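A one-line sketch of this step, reading Formula (11) directly (the function name is invented for illustration):

```python
def anterior_likelihood(q_i, r_ji_prev, round_u):
    # Formula (11): q_ij^(u) = q_i^(u) - r_ji^(u-1); in round 0 there
    # is no previous-round r, so nothing is subtracted
    return q_i if round_u == 0 else q_i - r_ji_prev
```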
The check node first processing unit 112-1, when receiving the anterior likelihood information from the anterior variable node processing unit 111-1, performs the arithmetic operations based on the Formulae (2), (3) and (4), thereby calculating the value shown in the Formula (9). The thus-calculated value is transferred to the row-by-row check node processing unit 105 and to the check node second processing unit 115-1.
The check node second processing unit 115-1, when receiving the value shown in the Formula (10) from the row-by-row check node processing unit 105 and further receiving the value shown in the Formula (9) from the check node first processing unit 112-1, calculates the likelihood information r_ji^(u) shown in the Formula (4) with respect to the target edge of the check node s_j. The check node second processing unit 115-1, on the occasion of calculating this likelihood information, executes the arithmetic operation in the following Formula (12). The calculated likelihood information is transferred to the r memory 116-1 and to the variable node processing unit 117-1.
Formula (12): r_ji^(u) = α_ij^(u) · ( ∏_{k ∈ S_j} α_kj^(u) ) · φ( ∑_{m ∈ S_j} φ(β_mj^(u)) − φ(β_ij^(u)) )

[Mathematical Expression 9]

The r memory 116-1 retains the likelihood information r_ji^(u) calculated in each round in order to be used in the anterior variable node process by the anterior variable node processing unit 111-1.
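Reusing the phi and row-total helpers sketched above, the check node second process of Formula (12) can be read as removing the target edge's own contribution from the row totals of Formula (10); a minimal sketch, with an invented function name:

```python
def check_node_update(alpha_ij, phi_beta_ij, sign_prod, phi_sum):
    # Formula (12): r_ji^(u) = alpha_ij * (product of all alphas) *
    # phi(sum of all phi(beta) minus this edge's own phi(beta));
    # multiplying by alpha_ij (in {+1, -1}) removes its share of the
    # sign product, since alpha_ij * alpha_ij = 1
    return alpha_ij * sign_prod * phi(phi_sum - phi_beta_ij)
```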
The variable node processing unit 117-1 performs the arithmetic operation of adding r_ji^(u) about the check node s_j in the variable node process shown in the Formula (5), as described in the introduction. Namely, the variable node processing unit 117-1 reads the pieces of likelihood information added up through the check node s_(j-1) from the memory 102 and adds, to this readout likelihood information, the likelihood information r_ji^(u) calculated this time by the check node second processing unit 115-1. The added likelihood information is written back to the memory 102.
Parallel processing is executed for every check node s_j by each of the functional units described above, and the process of each functional unit described above is repeated the number of times corresponding to the number of check nodes (M). Upon completion of the processes executed for the check nodes counted from "0" through "(M-1)", the memory 102 contains the likelihood information of the respective variable nodes. Then, the contents in the memory 102 are copied to the memory 103, while the contents in the input likelihood memory 101 are copied to the memory 102, and the process described above is repeated as a process of the next round. Note that a temporary estimation process and a parity check process (not shown) are executed upon completing the processes for all the check nodes, and, if a result of the check is correct, an estimated bit sequence at this time is outputted as a decoded result without executing the process of the next round.
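Tying the helper sketches above together, one round of this flow might be drafted as below; the Edge structure and the per-check-node grouping of edges are invented for illustration, and the hardware performs the inner loops in parallel rather than sequentially.

```python
from dataclasses import dataclass

@dataclass
class Edge:
    cell: int            # memory cell (column group) wired to the unit
    offset: int          # position of the code bit inside the cell
    r_prev: float = 0.0  # r_ji of the previous round (the r memory)

def decode_round(mem_102, mem_103, rows, round_u):
    # rows: for each check node s_j, the list of its edges; Mg such
    # rows are handled per processing cycle in the device
    for edges in rows:
        vals = []
        for e in edges:                          # units 110-1..110-Nc
            q_i = mem_103.read(e.cell, e.offset)
            q_ij = anterior_likelihood(q_i, e.r_prev, round_u)
            vals.append(edge_value(q_ij))
        sign_prod, phi_sum = row_total(vals)     # row-by-row unit 105
        for e, (alpha, phi_beta) in zip(edges, vals):
            r_ji = check_node_update(alpha, phi_beta, sign_prod, phi_sum)
            e.r_prev = r_ji                      # kept in the r memory
            acc = mem_102.read(e.cell, e.offset) # variable node unit
            mem_102.write(e.cell, e.offset, acc + r_ji)
```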
Next, a parity check matrix used in the first embodiment will be explained with reference to FIGS. 2 and 3. FIG. 2 is a diagram showing a coding specification in the first embodiment. FIG. 3 is a diagram illustrating the parity check matrix used in the first embodiment.
To start with, before determining the parity check matrix used in the first embodiment, the coding specification carried out in the present decoding device is determined. The present decoding device at first determines corresponding coding rates in terms of determining the coding specification. In the first embodiment, as illustrated in FIG. 2, items of data correspond to a coding rate 1/5 and a coding rate 4/5, respectively. The present invention is not, however, limited to these coding rates.
Subsequently, the mounting count (Nc) of the arithmetic units is determined. The mounting count (Nc) of the arithmetic units means the number of the edge-by-edge arithmetic units (110-1 to 110-Nc). Then, a row weight (Wr) and the number of simultaneous processing rows (Mg) are determined based on this mounting count (Nc) of the arithmetic units with respect to each coding rate. At this time, the mounting count (Nc) of the arithmetic units, the row weight (Wr) and the number of the simultaneous processing rows (Mg) are respectively determined to establish a relationship such as (Nc) = (Wr) x (Mg). For instance, in the case of taking a higher coding rate in the plurality of corresponding coding rates, the row weight may be determined so that the number of the simultaneous processing rows becomes "1". The first embodiment exemplifies a case in which the coding specification as shown in FIG. 2 is determined.
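As a worked example of the relationship (Nc) = (Wr) x (Mg), and consistent with the mounting count of twelve arithmetic units and the row-group sizes described later for the first embodiment, the row weights below are illustrative values:

```python
Nc = 12                                      # mounted arithmetic units
for rate, Wr in (("1/5", 3), ("4/5", 12)):   # illustrative row weights
    Mg = Nc // Wr                            # simultaneous processing rows
    print(rate, "Wr =", Wr, "Mg =", Mg)      # 1/5 -> Mg = 4, 4/5 -> Mg = 1
```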
When the coding specification is determined, the parity check matrix is determined corresponding to each coding rate on the basis of this coding specification. A size of each parity check matrix is determined from a parity bit length (M) and a code length (N) in the coding specification described above.
In the first embodiment, the parity check matrixes are segmented (into groups) according to a predetermined rule, and a minimum unit of the segmented parity check matrixes is referred to as a unit area. This segmentation is done such that the parity check matrixes are segmented into row groups 151 each having a predetermined row count and into column groups 152 each having a predetermined column count, and is specifically done as below. To begin with, the parity check matrix is segmented into the row groups 151 according to every Mg-rows as illustrated in FIG. 3. Each of the segmented row groups is defined as an aggregation of the check nodes that are simultaneously processed at each processing cycle in the decoding device. According to the coding specification in the first embodiment, the row group 151 is a group having 4 rows in a case where the coding rate is 1/5 and having 1 row in a case where the coding rate is 4/5.
Further, the columns within the row group 151 are segmented into column groups 152 of which the number is equivalent to the mounting count of the arithmetic units (Nc = (Wr x Mg)). The number of columns of each of the segmented column groups may be arbitrarily set, and it may be sufficient that a total sum of the column counts of the respective column groups becomes a total column count (N) of the parity check matrix. Each of the rows contained in each column group 152 is defined as the unit area. Each row group 151 contains (Mg x (Wr x Mg)) unit areas.
Then, an edge allocable area, in which one bit "1" is allocated in an arbitrary position within a unit area among the plurality of unit areas in each row group 151, is determined. All other matrix elements excluding "1" in the edge allocable area are set to "0", and "0" bits are allocated in the unit areas other than the edge allocable area. An allocation layout is determined so that there exist edge allocable areas of a number equivalent to the mounting count of the arithmetic units (Nc = (Wr x Mg)) within each row group 151, wherein Wr edge allocable areas exist in each row within each row group 151, and one edge allocable area exists in each column group 152.
A thus-determined segmentation pattern within the row group 151, in other words an organizing mode of the column groups 152 and an allocation mode of the edge allocable areas, is set as the same pattern throughout all the row groups. The position of the edge in the edge allocable area is, however, set arbitrarily in any row group, and a superior effect of the LDPC code is obtained by determining the pattern so that the edges (matrix elements "1") are allocated sparsely. It is to be noted that the thus-determined organizing mode of the respective column groups 152 varies in the number of rows among the parity check matrixes but is the same in the number of columns within each parity check matrix. The decoding device in the first embodiment retains, in a memory or the like (not shown, corresponding to a pattern storing means according to the present invention), the coding specification shown in FIG. 2, the above-determined segmentation pattern of the respective row groups 151 and the information about the parity check matrix. The segmentation pattern is retained in such a way that, for example, a first column group is organized by a first column through a fifth column of the parity check matrix, a second column group is organized by a sixth column through an eighth column of the parity check matrix, a first edge allocable area is allocated in a first row of the first column group, a second edge allocable area is allocated in the first row of the fifth column group, and so on. The information about the parity check matrix needs only to hold the positions of the edge allocable areas and the edge positions therein with respect to each row group 151.
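One conceivable encoding of this stored information, following the example just given (a first column group of columns one to five, a second of columns six to eight), is sketched below; the dictionary layout and the tiny sizes are invented for illustration.

```python
# Segmentation pattern shared by all row groups 151
segmentation = {
    # (start column, column count) of each column group 152
    "column_groups": [(0, 5), (5, 3)],
    # per row of a row group: the column groups holding its Wr
    # edge allocable areas
    "edge_areas": {0: [0, 1]},
}
# Per row group, the parity-check-matrix information then needs only
# the edge position (column offset) inside each edge allocable area
edge_positions = {"row_group_0": {0: [2, 1]}}
```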
A connecting relationship between the respective edge-by-edge arithmetic units 110 and the memories 102, 103 will be explained with reference to FIGS. 4 and 5. FIG. 4 is a diagram showing a connecting relationship between each edge-by-edge arithmetic unit 110 and the memories 102, 103 in the case of operating at the coding rate 1/5. FIG. 5 is a diagram showing the connecting relationship between each edge-by-edge arithmetic unit 110 and the memories 102, 103 in the case of operating at the coding rate 4/5. FIGS. 4 and 5 show the parity check matrix on the left side, and show the connecting relationship between the arithmetic unit and the memories of the decoding device on the right side. Further, portions designated by letters [X], [Y], [Z], [W] in the FIGS. 4 and 5 represent the edge allocable areas in the same rows in the parity check matrix, which are specified by the same types of letters, and the edge-by-edge arithmetic units 110 corresponding to the respective edge allocable areas are likewise specified by the same types of letters.
The memories 102 and 103 store, as described above, the likelihood information of each code bit updated in the previous round and the likelihood information of each code bit updated in the round of this time. Namely, the memories 102 and 103 have only a difference in the updated round but are the same memories with respect to the arithmetic target code bit. Hence, the following description will deal with the memories 102 and 103 as a pair of memories that is referred to simply as the [memory]. The memory is constructed of at least the same number of memory cells as the number of edge-by-edge arithmetic units, i.e., the mounting count (Nc) of the arithmetic units described above. In the first embodiment, the mounting count of the arithmetic units is "12", and therefore the memory is, as illustrated in FIGS. 4 and 5, constructed of memory cells M1 through M12.
In the decoder in the first embodiment, each memory cell and each column group 152 in the parity check matrix are arranged in one-to-one correspondence. Thus, in the decoder, when starting the decoding process, a control unit (not shown in FIG. 1) etc stores, in the predetermined memory cell corresponding to each column group 152 in the parity check matrix, the likelihood information stored in the input likelihood memory 101. To be specific, the decoder, in the case of organizing the column groups in the sequence from the first column group to the twelfth column group from the left to the right in the parity check matrix, stores, in the memory cell M1, the likelihood information of a variable node (code bit) contained in the first column group, and stores, in the memory cell M2, the likelihood information of the code bit contained in the second column group. The decoder stores, in the memory cell corresponding to each column group, the likelihood information of the variable node (code bit) contained in each column group 152 in the parity check matrix. It should be noted that each parity check matrix corresponding to each coding rate is stored in a predetermined memory or the like, and the control unit (corresponding to a likelihood information storing means according to the present invention) refers to the memory and may also store the likelihood information in the memory cell. The corresponding relationship between these memory cells and pieces of likelihood information that should be stored therein, remains unchanged regardless of the coding rate because of there being no change in the organizing mode of the column groups according to every parity check matrix.
Further, in the decoder in the first embodiment, each edge-by-edge arithmetic unit 110 and each edge allocable area in the parity check matrix are arranged in one-to-one correspondence according to every processing cycle. Each edge-by-edge arithmetic unit 110 performs the arithmetic operation about the edge in the edge allocable area corresponding to this edge-by-edge arithmetic unit 110. Note that only one edge allocable area exists in each column group 152 within each row group 151, then all the row groups 151 are organized with the same pattern, and hence it follows that each edge-by-edge arithmetic unit 110 is, in other words, associated with each column group 152.
Then, the memory cell and the edge-by-edge arithmetic unit 110 are connected to each other via a physical signal line 160. This connection is, because of getting both of the memory cell and the edge-by-edge arithmetic unit corresponding to the parity check matrix described above, inevitably determined by this correspondence. Namely, each edge-by-edge arithmetic unit 110 is, with respect to the edge allocable area corresponding to this edge-by-edge arithmetic unit 110, connected via the signal line 160 to the memory cell corresponding to the column group 152 to which the edge allocable area belongs.
The decoder in the first embodiment uses the parity check matrix described above, whereby the connection between each memory cell and each edge-by-edge arithmetic unit 110 resultantly becomes, as shown in FIGS. 4 and 5, the same one-to-one connection even at any processing cycle and at any coding rate. Accordingly, the configuration has no necessity of switching over the signal line 160 each time about every processing cycle and every coding rate, and the decoding device in the first embodiment does not require a switching device between the memory and the arithmetic unit, which has hitherto been needed for the conventional decoder.
Row-by-row arithmetic units (the row-by-row check node processing units 105) are configured in a number the same as the number of the simultaneous processing rows (Mg). Each row-by-row arithmetic unit is connected to the edge-by-edge arithmetic unit corresponding to the edge allocable area allocated in the same row in the parity check matrix. The row-by-row arithmetic unit receives an arithmetic result of each edge-by-edge arithmetic unit 110, then performs the arithmetic operation of this arithmetic result on the row-by-row basis, and returns an arithmetic result of this operation again to each edge-by-edge arithmetic unit 110.
The row-by-row arithmetic unit has a different connecting destination depending on the coding rate as shown in FIGS. 4 and 5. This is because the parity check matrix differs depending on the coding rate. As a result, the connection between the row-by-row arithmetic unit and the edge-by-edge arithmetic unit is a fixed connection during the decoding process after a certain coding process but needs changing if the coding rate is changed on the occasion of executing the next post-coding process.
As described above, each edge-by-edge arithmetic unit 110 is connected to the predetermined memory cell via the signal line 160. Each edge-by-edge arithmetic unit 110 updates the likelihood information of the code bit in which the edge is allocated, among the likelihood information of the respective code bits stored in the connected memory cell. The control unit etc may give an instruction to each edge-by-edge arithmetic unit 110 about the position of the should-be-updated likelihood information in the memory cell. In this case, the control unit (corresponding to also an edge-by-edge arithmetic means according to the present invention), each time the processing cycle is finished, may refer to the row group 151 corresponding to the next cycle in the parity check matrix and to the segmentation pattern of the row group, and may notify each edge-by-edge arithmetic unit 110 of the position of the edge in its edge allocable area.
An operational example of the decoding device in the first embodiment will hereinafter be described with reference to FIG. 6. FIG. 6 is a flowchart showing the operational example of the decoding device in the first embodiment. The decoding device in the first embodiment retains the parity check matrixes corresponding to the plurality of coding rates. Then, in accordance with the segmentation pattern of the parity check matrix, as described above, the memory cells configuring the memories 102 and 103 of the decoding device are connected to the edge-by-edge arithmetic units 110, and the edge-by-edge arithmetic units 110 are connected to the row-by-row arithmetic units 105.
When the input likelihood memory 101 stores the likelihood information calculated in other circuits with respect to the respective code bits of the received code bit sequence, the control unit (not illustrated in FIG. 1) of the decoding device reads the parity check matrix and the segmentation pattern corresponding to the coding rate in use (should-operate coding rate) (S601). The control unit switches over the connection between the edge-by-edge arithmetic unit and the row-by-row arithmetic unit in accordance with the readout parity check matrix and the readout segmentation pattern (S602). Subsequently, the control unit, according to the readout parity check matrix and segmentation pattern, stores, in the memory cell corresponding to each column group, the likelihood information of the code bit contained in each column group (S603).
Next, the control unit initializes or updates the round count u (S604), and starts the decoding process of the round u. At this time, the control unit notifies each edge-by-edge arithmetic unit 110 of the edge position in accordance with the readout parity check matrix (S605). The notified information is recognized from the readout parity check matrix so as to specify in which column the edge in the predetermined edge allocable area is positioned, and may also be given as an offset address (= (column position - 1) x the size of the area storing the likelihood information of one code bit) corresponding to this column position.
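A minimal sketch of that offset-address computation (the names and the per-entry size are invented for illustration):

```python
def edge_offset_address(column_position, entry_size=1):
    # offset = (column position - 1) x size of the area storing the
    # likelihood information of one code bit (columns are 1-indexed)
    return (column_position - 1) * entry_size
```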
Hereafter, the edge-by-edge arithmetic unit 110 executes the edge-by-edge arithmetic operation by using its internal functional units, wherein the likelihood information corresponding to each code bit is updated (S606). At this time, the arithmetic operation corresponding to each row in the parity check matrix is executed by the row-by-row arithmetic unit (the row-by-row check node processing unit 105). The edge-by-edge arithmetic unit and the row-by-row arithmetic unit operate respectively as follows.
The anterior variable node processing unit 111-1, based on the edge position information of which the control unit notifies, reads the likelihood information q_i^(u) about the target edge from the connected memory cell of the memory 103, further reads the likelihood information r_ji^(u-1) of the previous round related to the target edge from the r memory 116-1, and calculates the anterior likelihood information q_ij^(u) used in the processes from now onward (refer to the Formula (11)). When in the first round, the likelihood information r_ji^(u-1) of the previous round does not exist and is therefore not read out.
The check node first processing unit 112-1, when receiving the anterior likelihood information q_ij^(u) from the anterior variable node processing unit 111-1, performs the arithmetic operation based on the Formulae (2), (3) and (4), thereby calculating the value shown in the Formula (9). The calculated value is transferred to the row-by-row check node processing unit 105 and to the check node second processing unit 115-1.
The row-by-row check node processing unit 105 receives the value drawn from the Formula (9) that is outputted from each of the connected edge-by-edge arithmetic units, and calculates the value shown in the Formula (10).
The check node second processing unit 115-1, when receiving the value shown in the Formula (10) from the row-by-row check node processing unit 105 and further receiving the value shown in the Formula (9) from the check node first processing unit 112-1, calculates the likelihood information r_ji^(u) shown in the Formula (4) with respect to the target edge. The calculated likelihood information is transferred to the r memory 116-1 and to the variable node processing unit 117-1. The likelihood information transferred to the r memory 116-1 is stored as it is in this memory.
The variable node processing unit 117-1 reads the likelihood information added up to the previous row in the parity check matrix from the connected memory cell of the memory 102, and adds, to this readout likelihood information, the likelihood information r_ji^(u) calculated this time by the check node second processing unit 115-1. The thus-added likelihood information is written to a predetermined location, based on the edge position information of which the control unit notifies, of the connected memory cell.
The respective processing cycles (the processes corresponding to the number of the simultaneous processing rows (Mg)) are executed for all the rows of the parity check matrix (S607; NO, looped back to S606). When completing the processes for all the rows of the parity check matrix (S607; YES), the control unit makes, based on the updated likelihood information stored in the memory 102, another circuit unit (not illustrated in FIG. 1) generate the temporary estimated bit sequence (S608). Subsequently, the control unit makes another circuit unit (not shown in FIG. 1) execute the parity check of the generated temporary estimated bit sequence (S609).
The control unit, if a result of this parity check is judged valid or if the present round count is a maximum round count (S610; YES), finishes the decoding process, and outputs the temporary estimated bit sequence. If the result of the parity check is judged invalid and if the present round count is not the maximum round count (S610; NO), the control unit updates the round count (S604), and starts the next round.
Herein, an operation and an effect of the decoding device in the first embodiment discussed above will be explained.
In the decoding device in the first embodiment, a plurality of parity check matrixes corresponding to a plurality of coding rates is defined.
To begin with, the row weight (Wr) and the number of the simultaneous processing rows (Mg) are determined with respect to each coding rate, corresponding to the number of the edge-by-edge arithmetic units (Nc) in the decoding device. In addition, a size of each parity check matrix is determined from the parity bit length (M) and the code length (N).
Next, each of the parity check matrixes is segmented into row groups 151 each having a previously determined number of simultaneous processing rows and into column groups 152 each having a predetermined number of columns. The number of columns included in each segmented column group is arbitrarily determined so that a total sum of the column counts of the respective column groups becomes a total column count (N) of the parity check matrix. The determination is, however, made so that there exist, in each row group 151, edge allocable areas of a number equivalent to a result of multiplying the row weight (Wr) by the number of the simultaneous processing rows (Mg), and so that one edge allocable area exists in each column group 152 within each row group 151. The edge allocable area has one bit "1" allocated in an arbitrary position within the unit area in the plurality of segmented unit areas.
The memory etc retains the thus-determined plural parity check matrixes and the segmentation pattern (the organizing mode of the column groups 152 and the allocation mode of the edge allocable areas).
In accordance with the thus-determined parity check matrixes and the segmentation pattern, each edge-by-edge arithmetic unit is set corresponding to each edge allocable area in the single row group 151 and performs the arithmetic operation of the edge in the corresponding edge allocable area. Similarly, the respective memory cells are set corresponding to the respective column groups 152, and, on the occasion of starting the decoding process, the likelihood information of the variable node (code bit) contained in each column group 152 is stored in the respective memory cells corresponding to the respective column groups 152. As a result, the connections between the plurality of edge-by-edge arithmetic units and the plurality of memory cells are determined based on this correspondence.
In this way, the connection between each memory cell and each edge-by-edge arithmetic unit resultantly becomes the same one-to-one connection regardless of the processing cycle or coding rate by using the parity check matrix described above.
Hence, according to the decoding device in the first embodiment, the signal line 160 actualizing the connection is not required to be switched over each time at every processing cycle and at every coding rate, and it is possible to omit the switching device between the memory and the arithmetic unit, which has hitherto been needed in the conventional decoder. This enables a scale-down of the circuit of the decoding device in the first embodiment.
Further, even in the case of executing the decoding process corresponding to any coding rate on the basis of the above-determined number of simultaneous processing rows, the arithmetic units in the decoding device can be operated at high efficiency, and the edge-by-edge processing can be done simultaneously for the plurality of rows. This makes it possible to actualize the high-speed decoding process at any coding rate, as well as making it feasible to handle various coding rates.
The decoding device in a second embodiment of the present invention will hereinafter be explained. In the decoding device in the first embodiment discussed earlier, the device configuration and the decoding processing method are determined based on a segmentation pattern with which the unit area in the parity check matrix becomes a single-row but arbitrary-number-of-columns area. The decoding device in the second embodiment uses the segmentation pattern with which the unit area is an area containing an arbitrary number of n-row/n-column square matrixes. Configurations other than the parity check matrix are basically the same as those in the first embodiment.
An example of a circuit configuration of the decoding device in the second embodiment is the same as in the first embodiment, and hence its explanation is herein omitted (see FIG. 1).
The parity check matrix used in the second embodiment will hereinafter be described with reference to FIGS. 2 and 7. The decoding device in the second embodiment shall determine, as shown in FIG. 2, the coding specification in the same way as in the first embodiment. FIG. 7 is a diagram illustrating the parity check matrix used in the second embodiment.
The decoding device in the second embodiment includes edge-by-edge arithmetic units 110 of a number equivalent to at least the mounting count (Nc) of the arithmetic units, corresponding to the coding rate 1/5 and the coding rate 4/5 as shown in FIG. 2, wherein the row weight (Wr) and the number of the simultaneous processing rows (Mg) are determined with respect to each coding rate. Then, a size of each of the parity check matrixes is determined based on the parity bit length (M) and the code length (N).
In the second embodiment, the segmentation pattern of the parity check matrix is determined so that the unit area becomes the area containing the arbitrary number of n-row/n-column square matrixes. Namely, a natural number n by which to divide the row count (M) and the column count (N) in each parity check matrix is defined, whereby the segmentation is done so that each row group 151 comprises (the number of simultaneous processing rows (Mg) x n) rows, and each column group 152 comprises (n x an arbitrary number in each column group) columns.
The edge allocable area in the unit area determined by this segmentation is determined in the same way as in the first embodiment. With this contrivance, each unit area other than the edge allocable area contains an arbitrary number of n-row/n-column zero matrixes per column group in a column-increasing direction. The edge allocable area is an area where any one of the square matrixes in the unit area becomes an edge allocation square matrix 181. The edge allocation square matrix 181 is an n-row/n-column matrix, wherein only one edge is allocated per row in an arbitrary position in each row.
The thus-determined segmentation pattern in the row group 151, i.e., the organizing mode of the column group 152 and the allocation mode of the edge allocable area, shall be the same throughout all the row groups. The position of the edge allocation square matrix 181 within the edge allocable area and the edge position in each row within the edge allocation square matrix 181 are set arbitrarily in any row group and are determined so that the edges are allocated sparsely, thereby utilizing the superior effect of the LDPC code.
In the decoding device in the second embodiment also, the memory etc retains, in the same way as in the first embodiment, the above-determined segmentation pattern and the parity check matrix in addition to the coding specification shown in FIG. 2. The decoding device in the second embodiment can keep the information quantity about the parity check matrixes that should be retained in the memory smaller than that of the decoding device in the first embodiment. The decoding device in the first embodiment needs to hold, for the parity check matrix, the position of the edge allocable area with respect to each row group 151 and the edge position in this area. On the other hand, the decoding device in the second embodiment needs to hold the position of the edge allocable area with respect to each row group 151, the position of the edge allocation square matrix within each edge allocable area and the edge position in each row of the edge allocation square matrix. Accordingly, the decoding device in the second embodiment holds a larger information quantity per row group due to the information about the positions of the edge allocation square matrixes, but is smaller in total information quantity because each row group 151 in the second embodiment contains a larger number of rows and the number of row groups is therefore smaller. This is because the unit area is configured by the square matrix.
The connecting relationship between each edge-by-edge arithmetic unit 110 and the memories 102, 103 is the same as in the first embodiment (see FIGS. 4 and 5). Namely, each column group 152 and the predetermined memory cell are arranged in one-to-one correspondence, and each edge-by-edge arithmetic unit 110 and each edge allocable area are arranged in one-to-one correspondence per processing cycle. Then, with this one-to-one correspondence, the one-to-one connection between each memory cell and each edge-by-edge arithmetic unit 110 is established via the physical signal line 160. Accordingly, the decoding device in the second embodiment also takes a configuration having no necessity for performing the switchover at every processing cycle and at every coding rate, and it is unnecessary for the decoding device in the second embodiment to include a switching device between the memory and the arithmetic unit, which has hitherto been needed in the conventional decoder.
The configuration and the processing about the row-by-row arithmetic unit (the row-by-row check node processing unit 105) are the same as those in the first embodiment. The second embodiment is, however, different from the first embodiment in terms of a processing sequence of the rows (check nodes) in the parity check matrix that undergo the likelihood information arithmetic operation in the respective edge-by-edge arithmetic units 110. Namely, in the first embodiment, each row group 151 contains rows corresponding to the number of the simultaneous processing rows (Mg), with the result that the arithmetic operation of every row group 151 is completed at one processing cycle; in the second embodiment, however, the arithmetic operation of every row group 151 becomes a unit finished after n processing cycles. The next row group 151 is set as the arithmetic operation target at a point of time (a point of time when finishing n processing cycles) when finishing the arithmetic operations of the first row through the n-th row in the edge allocation square matrix in the edge allocable area to which each edge-by-edge arithmetic unit is allocated. Thus, the control unit, each time the processing cycle is finished, notifies of the position of the should-be-next-updated likelihood information in the memory cell to which each edge-by-edge arithmetic unit 110 is connected, on the basis of the pieces of information about the position of the edge allocation square matrix in the target edge allocable area and about the edge position in every row within this edge allocation square matrix, which are stored in the memory.
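A sketch of how such a per-cycle notification might be derived in the second embodiment; the argument names are invented, and edge_col_per_row stands for the stored per-row edge positions inside the n-row/n-column edge allocation square matrix:

```python
def next_update_offset(square_col0, n, edge_col_per_row, cycle_t):
    # square_col0: start column of the edge allocation square matrix
    # inside its column group; cycle_t selects the sub-row of the
    # current row group processed at this processing cycle
    return square_col0 + edge_col_per_row[cycle_t % n]
```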
The decoding device in a third embodiment of the present invention will hereinafter be explained. In the decoding device according to the second embodiment discussed earlier, the device configuration and the decoding processing method are determined based on a segmentation pattern in which the unit area in the parity check matrix contains an arbitrary number of n-row/n-column square matrixes. The decoding device in the third embodiment uses such a segmentation pattern that the unit area becomes an area containing an arbitrary number of m-row/n-column matrixes. The configurations other than the parity check matrix are the same as those in the first embodiment and the second embodiment, and hence their explanations are omitted, wherein the discussion herein will be focused on the configuration of the parity check matrix and on the configurations of the memory and the edge-by-edge arithmetic unit that are related to the parity check matrix.
The parity check matrix used in the third embodiment will be explained with reference to FIGS. 2 and 8. In the decoding device in the third embodiment also, the coding specification shall be, as shown in FIG. 2, determined in the same way as in the first embodiment. FIG. 8 is a diagram illustrating the parity check matrix used in the third embodiment.
The decoding device in the third embodiment includes the edge-by-edge arithmetic units 110 of which the number is equivalent to at least the mounting count (Nc) of the arithmetic units, corresponding to the coding rate 1/5 and the coding rate 4/5 as shown in FIG. 2, wherein the row weight (Wr) and the number of the simultaneous processing rows (Mg) are determined with respect to each coding rate. Then, a size of each of the parity check matrixes is determined based on the parity bit length (M) and the code length (N).
In the third embodiment, the segmentation pattern of the parity check matrix is determined so that the unit area becomes an area containing an arbitrary number of m-row/n-column matrixes. Namely, a natural number m by which to divide the row count (M) and a natural number n by which to divide the column count (N) in each parity check matrix are respectively defined, whereby the segmentation is done so that each row group 151 is organized by (the number of simultaneous processing rows (Mg) x m) rows, and each column group 152 is organized by (n x an arbitrary number in each column group) columns.
The allocating method of the edge allocable area is the same as in the first embodiment. The unit areas other than the edge allocable area contain an arbitrary number of m-row/n-column zero matrixes per column group in a column-increasing direction. The edge allocable area is an area where any one of the matrixes in the unit area becomes an edge allocation matrix 191. The edge allocation matrix 191 is an m-row/n-column matrix, wherein only one edge is allocated per row in an arbitrary position in each row.
The thus-determined segmentation pattern in the row group 151, representing the organizing mode of the column group 152 and the allocation mode of the edge allocable area, shall be the same throughout all the row groups. The position of the edge allocation matrix 191 within the edge allocable area and the edge position in each row within the edge allocation matrix 191 are set arbitrarily in any row group and are determined so that the edges are allocated sparsely, thereby effectively using the LDPC code.
In the decoding device in the third embodiment also, the memory etc retains, in the same way as in the first and second embodiments, the above-determined segmentation pattern and the parity check matrix in addition to the coding specification shown in FIG. 2. The decoding device in the third embodiment is, for the same reason as elucidated in the second embodiment, capable of reducing the information quantity about the parity check matrixes that should be retained in the memory compared to the decoding device in the first embodiment. Compared with the decoding device in the second embodiment, the information quantity increases by a size of information about the row count m because the number of rows included in the unit area is set to the row count m; however, there is an advantage of an increased degree of freedom in terms of determining the shape of the parity check matrix, by removing the restriction that the unit area be a square matrix.
The connecting relationship between each edge-by-edge arithmetic unit 110 and the memories 102, 103 is the same as in the first and second embodiments (see FIGS. 4 and 5), and the decoding device in the third embodiment also has neither the necessity for switching over the signal line 160 at every processing cycle and at every coding rate nor the need for a switching device between the memory and the arithmetic unit, which has hitherto been needed in the conventional decoder.
The processing sequence of the rows (check nodes) in the parity check matrix that undergo the likelihood information arithmetic operation in the respective edge-by-edge arithmetic units 110, is the same as in the second embodiment. In the third embodiment, however, the row count in the unit area is set to m rows, and hence the arithmetic operation of every row group 151 is in units finished at m processing cycles. With this contrivance, the control unit, each time the processing cycle is finished, notifies of the position of the should-be-next-updated likelihood information in the memory cell to which each edge-by-edge arithmetic unit 110 is connected on the basis of pieces of information about the position of the edge allocation matrix 191 in the target edge allocable area and about the edge position in every row within this edge allocation matrix 191, which are stored in the memory.
The decoding device in a fourth embodiment of the present invention will hereinafter be described. The decoding device in the first embodiment explained earlier determines the device configuration and the decoding processing method on the basis of the segmentation pattern with which the unit area in the parity check matrix becomes the 1-row/arbitrary-number-of-column area. The decoding device in the fourth embodiment uses such a segmentation pattern that the unit area becomes an area containing an m-row/arbitrary-number-of-column matrix. Configurations excluding the parity check matrix are basically the same as in the other embodiments described above, so that their explanations are omitted, and the description shall herein be focused on the configuration of the parity check matrix and the configurations of the memory and of the edge-by-edge arithmetic unit 110 that are related to the parity check matrix.
The parity check matrix used in the fourth embodiment will be explained with reference to FIGS. 2 and 9. In the decoding device in the fourth embodiment also, the coding specification shall, as shown in FIG. 2, be determined in the same way as in the other embodiments. FIG. 9 is a diagram showing the parity check matrix used in the fourth embodiment.
In the fourth embodiment, the segmentation pattern of the parity check matrix is determined so that the unit area comes to have an m-row/arbitrary-number-of-column matrix. Namely, a natural number m by which to divide the row count (M) in each parity check matrix is defined, whereby the segmentation is done so that each row group 151 is organized by the (the number of simultaneous processing rows (Mg) x m) rows, and each column group 152 is organized by the arbitrary number of columns.
The allocating method of the edge allocable area is the same as in the other embodiments. The unit areas other than the edge allocable area become an m-row/arbitrary-number-of-columns zero matrix. The edge allocable area is the m-row/arbitrary-number-of-columns matrix and is also the matrix in which only one edge is allocated per row in an arbitrary position in each row.
The thus-determined segmentation pattern in the row group 151, i.e., the organizing mode of the column group 152 and the allocation mode of the edge allocable area, shall be the same throughout all the row groups. The position of the edge allocable area and the edge position in each row within the edge allocable area are set arbitrarily in any row group and are determined so that the edges are allocated sparsely to make best use of the LDPC code.
In the decoding device in the fourth embodiment also, the memory etc retains, in the same way as in the other embodiments, the above-determined segmentation pattern and the parity check matrix in addition to the coding specification shown in FIG. 2. The decoding device in the fourth embodiment is, for the same reason as in the second embodiment, capable of reducing the information quantity about the parity check matrixes that should be retained in the memory in comparison with the first embodiment. Compared to the second embodiment and the third embodiment, the information quantity increases by a size of information about the row count m and by a size of information about the column count of each column group 152 because of the unit area size being set arbitrarily; however, there is an advantage of a further increased degree of freedom in terms of determining the shape of the parity check matrix.
The connecting relationship between each edge-by-edge arithmetic unit 110 and the memories 102, 103 is the same as in the other embodiments (see FIGS. 4 and 5), and the decoding device in the fourth embodiment also has neither the necessity for switching over the signal line 160 at every processing cycle and at every coding rate nor any switching device between the memory and the arithmetic unit, which has hitherto been needed in the conventional decoder.
The processing sequence of the rows (check nodes) in the parity check matrix that undergo the likelihood information arithmetic operation in the respective edge-by-edge arithmetic units 110 is the same as in the second embodiment and the third embodiment, and hence its explanation is omitted. In the fourth embodiment, however, the row count in the unit area is set to m rows, and hence the arithmetic operation of every row group 151 is finished in units of m processing cycles. With this contrivance, the control unit, each time the processing cycle is finished, notifies of the position of the should-be-next-updated likelihood information in the memory cell to which each edge-by-edge arithmetic unit 110 is connected, on the basis of the pieces of information about the shape of the target edge allocable area and about the edge position in every row within this edge allocable area, which are stored in the memory.
The decoding device in a fifth embodiment of the present invention will hereinafter be described. In the decoding device in the fifth embodiment, the configuration of the decoding device and the processing procedure are determined by use of a virtual parity check matrix different from the actual parity check matrix that should be used for decoding.
An example of a circuit configuration of the decoding device in the fifth embodiment will be explained with reference to FIG. 10. FIG. 10 is a block diagram illustrating the example of the circuit configuration of the decoding device in the fifth embodiment. The decoding device in the fifth embodiment includes, in addition to the configuration in the first embodiment, a memory control unit 201.
Configurations other than the memory control unit 201 are the same as those in the other embodiments, and hence herein only the memory control unit 201 will be explained.
The memory control unit 201 rearranges, based on the virtual parity check matrix used in the fifth embodiment, pieces of likelihood information of the respective code bits stored in the input likelihood memory 101 according to a predetermined rule, and stores the rearranged likelihood information in the memory 102. Further, when the control unit reads the updated likelihood information stored in the memory 102 for the temporary estimation, the memory control unit 201 rearranges the likelihood information in the original sequence of the code bits from the rearranged status.
The virtual parity check matrix may take a form of any parity check matrix in the other embodiments, and the device configuration and the processing procedure are determined corresponding to the virtual parity check matrix in the same way as in the embodiments described above. The memory control unit 201 has conversion information for conversion into the virtual parity check matrix from the parity check matrix that should be actually used for decoding, and makes the rearrangement based on this conversion information. The conversion information represents information about column replacement. Actually, the conversion involves row replacement, however, the rows (check nodes) in the parity check matrix correspond to the simple processing sequence in the decoding device according to the fifth embodiment, and therefore the information about the row replacement is unrelated to the memory control unit 201. Only the information about the column replacement may suffice for the conversion information retained in the memory control unit 201.
The columns (variable nodes) in the parity check matrix correspond to the respective code bits, and hence the memory control unit 201 derives a should-be-stored position in the memory 102 from the information on the column replacement. For instance, in the virtual parity check matrix, if the first column in the parity check matrix used for the actual decoding is replaced by a fifth column, the memory control unit 201 stores the likelihood information about the first code bit stored in the input likelihood memory 101 in an area that should store the likelihood information about the fifth code bit within the memory 102. Conversely, if the likelihood information is updated and undergoes the temporary estimation, the memory control unit 201 outputs the likelihood information extracted from the area storing the likelihood information about the fifth code bit within the memory 102, as the likelihood information about the first code bit, to the circuit unit performing the temporary estimation.
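A minimal sketch of this rearrangement, assuming a permutation table perm that maps each real-matrix column to its column in the virtual parity check matrix (0-indexed, so the example above of the first column moving to the fifth becomes perm[0] = 4):

```python
perm = [4, 0, 1, 2, 3]        # illustrative column replacement table

def store_rearranged(likelihoods):
    # applied before the likelihoods are divided into memory cells
    virtual = [0.0] * len(likelihoods)
    for real_col, v_col in enumerate(perm):
        virtual[v_col] = likelihoods[real_col]
    return virtual

def restore_order(virtual):
    # applied when updated likelihoods are read for temporary estimation
    return [virtual[v_col] for v_col in perm]
```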
Thus, the decoding device in the fifth embodiment can handle a case in which a parity check matrix for decoding the data coded by LDPC code with sufficient error correcting capability is not realizable using the first to fourth embodiments discussed above.
A clinical research associate (CRA) is a professional who oversees clinical trials and research studies.
Benefits of Certified Clinical Research Associate Training & Certification:
Sponsors, CROs and other agencies involved in the implementation of clinical trials and other forms of medical research increasingly look for qualified individuals who have completed formal, approved training and certification, so that they can rely on them to perform the tasks assigned to them.
CRA Training & Certification Program Structure:
Upon registering for the CRA Training and Certification program, members are granted access to the NBScience learning and certification platform. The online CRA Training & Qualification Program is a 24-hour standardized program that provides core clinical study learning. The following are some of the important areas in which training is received under the qualification curriculum.
Introduction:
This study module, which consists of several lectures and presentations, introduces the participant to the pharmaceutical and clinical research industry.
It also allows for a thorough overview of the clinical research field and of the developments that have led to the current clinical research environment. The Introductory Module also teaches individuals about clinical research stakeholders and the principles of Good Clinical Practice (GCP). Technologies in the management of clinical trials are discussed in detail, and comprehensive knowledge of the major clinical research regulatory bodies that exist globally is also provided.
Drug Development:
This program consists of multiple lectures and includes instruction on the pre-clinical development of drugs and biologics, different stages of clinical drug development, design of clinical trials and endpoints in clinical trials.
Ethics in Drug Development:
This series, consisting of multiple lectures, discusses the concept of ethics in clinical research, the Informed Consent Process, privacy and HIPAA, and also offers instruction on the International Conference on Harmonisation (ICH) guidelines.
Regulations in Clinical Research:
This module provides training in FDA regulations such as 21 CFR Parts 11, 50, 54, 56, 312, 812, and 814.
Roles & Responsibilities:
It is important to identify the roles of all stakeholders in the management of clinical trials so that realistic standards can be established. This section provides a comprehensive overview of the roles of clinical investigators, sponsors, suppliers and Institutional Review Boards (IRBs).
Essential Documents:
Essential records are documents which, individually and collectively, make it possible to assess the conduct of the trial and the quality of the data generated. These records provide proof of the compliance of the investigator, sponsor and monitor with the Good Clinical Practice guidelines and all relevant regulatory requirements.
One of the most important and frequent findings during investigator site inspections is the inadequacy of reliable and accurate source reporting. This is also the most common pitfall found during sponsor audits. In order to ensure that the results of the study rest on reliable and relevant data, the value of good documentation practice needs to be stressed at investigator sites. This curriculum focuses on the core principles of good data practice and offers intensive training in key areas such as source documentation, essential documents, IND & NDA requirements and the Clinical Study Report (CSR).
Study Start-Up:
This section provides an overview of clinical procedures in the start-up phase of a clinical trial. From protocol finalization to the first patient visit, a study start-up group must be formed, vendors and sites identified and activated, procedures established for data collection and reporting, and regulatory approvals obtained. The program offers focused instruction in areas such as feasibility evaluation, site selection, the pre-study visit, site initiation, recruitment and retention of participants, the TMF (Trial Master File) and the budgeting of clinical trials.
Study Monitoring & Close Out:
The monitor is responsible for overseeing the conduct of a research project. Research monitors must have a thorough understanding of the Code of Federal Regulations, local laws, guidelines and their assigned research protocols. A major part of their reporting duties is to inform and assist sites in complying with FDA and other local and international regulations and/or recommendations, while also helping them meet the requirements of specific research studies. Monitors act both as communication channels between sites and sponsors and as supervisors of individual research projects. This program offers extensive training in areas such as regular site monitoring, CRF review and source data verification, product accountability and compliance, site closure, writing accurate monitoring reports and follow-up visit letters, and record archiving and maintenance.
Safety Reporting:
One of the CRA's most important priorities is to ensure that clinical investigators are fully aware of and comply with their responsibility for reporting adverse events. To do this, the CRA must often notify investigators of the criteria for adverse event reporting. As a result, the CRA must be aware of both the regulatory and sponsor-specific criteria for reporting serious and non-serious adverse events in clinical trials. This requires the proper use and completion of adverse event forms, as well as knowledge of criteria and conditions for reporting adverse events that may go beyond the regulatory requirements. This program offers instruction in the identification and monitoring of adverse and serious adverse events in clinical trials.
Role of Quality Assurance & Data Management:
This program offers relevant training in quality assurance (QA) audits and testing, electronic data and signatures, information management and biostatistics.
2) GCP course for investigators and CRAs
3) GCP course for auditors
(see below, or click here for detailed information)
GCP training curriculum
(1) GCP training
ICH-GCP international guidelines
1: Introduction
1.1 Background
1.2 What is GCP?
1.3 The new GCP guideline
1.4 The principles of ICH GCP
1.5 Some general points
1.6 Documentation and version control
1.7 Quality assurance
2: Competent authorities (CA) and independent ethics committee (IEC)
2.1 Responsibilities of the CA
2.2 Responsibilities of the IEC
2.3 Subject informed consent forms (ICF)
2.4 Composition, functions, operations, procedures and records
3: Investigator
3.1 Responsibilities of the investigator
3.2 Investigator qualifications and agreements
3.3 Adequate resources
3.4 Medical care of trial subjects
3.5 Communication with IRB/IEC
3.6 Compliance with the protocol
3.7 Investigational medicinal product
3.8 Randomization and unblinding procedures
3.9 Informed consent of trial subjects
3.10 Records and reports
3.11 Premature termination or suspension of a trial
3.12 Progress reports and final report(s) by investigators
3.13 Archiving
3.14 Considerations for the use of electronic systems in clinical trial management
3.15 Updated information on electronic records and use of EMRs in clinical research. | https://nbscience.com/es/cra-ccra-certification-gcp-auditor-certification/ |
Committed to best practice
Australia’s Clean Energy Council (CEC) released guidelines for the sustainable roll-out of renewable energy projects in 2018.
We support this important initiative and we commit to honouring the Clean Energy Council’s Best Practice Charter in our renewable energy developments and associated transmission infrastructure:
1. We will engage respectfully with the local community, including Traditional Owners of the land, to seek their views and input before finalising the design of the project and submitting a development application.
2. We will provide timely information, and be accessible and responsive in addressing the local community’s feedback and concerns throughout the lifetime of the development.
3. We will be sensitive to areas of high biodiversity, cultural and landscape value in the design and operation of projects.
4. We will minimise the impacts on highly productive agricultural land where feasible, and explore opportunities to integrate continued agricultural production into the project.
5. We will consult the community on the potential visual, noise, traffic and other impacts of the development, and on the mitigation options where relevant.
6. We will support the local economy by providing local employment and procurement opportunities wherever possible.
7. We will offer communities the opportunity to share in the benefits of the development, and consult them on the options available, including the relevant governance arrangements.
8. We commit to using the development to support educational and tourism opportunities where appropriate.
9. We will demonstrate responsible land stewardship over the life of the development and welcome opportunities to enhance the ecological and cultural value of the land.
10. At the end of the project’s design or permitted life we will engage with the community on plans for the responsible decommissioning, or refurbishment/re-powering of the site. | https://www.geni.energy/post/committed-to-best-practice |
Comparison of two mechanical intraosseous infusion devices: a pilot, randomized crossover trial.
Administration of medications via the intraosseous (IO) route has proven to be a lifesaving procedure in critically ill or injured children. Two mechanical IO infusion devices have been approved for use in children, the spring-loaded IO infusion device (Bone Injection Gun, BIG) and the battery-powered IO infusion drill (EZ-IO). The objective of this pilot study was to compare the success rates for insertion and the ease-of-use of the two devices. A randomized crossover study was conducted in a local paramedic training course with 29 paramedic students participating. Participants watched two videos describing the use of the two devices, followed by a demonstration on how to use each device on a turkey bone model. Then subjects were divided into two study groups: BIG-first or EZ-IO-first. Each participant performed one insertion attempt with each device independently. All attempts were filmed by a video camera. Successful placement was defined as the visualization of fluid flow from the marrow cavity. Following the study procedure, participants completed a two-item questionnaire recording their ranking of the ease-of-use of each device and their "first choice device". Participants had a significantly higher one-attempt success rate with the EZ-IO than with the BIG (28/29 vs 19/29, p=0.016), and selected the EZ-IO as their first choice (20/29). Participants of the EZ-IO-first group assessed the EZ-IO as easier to use than the BIG (p=0.0039). The subjects of the BIG-first group found no difference in the ease-of-use between the two devices (p=0.32). As tested by paramedic students on a turkey bone model, the EZ-IO demonstrated higher success rates than the BIG and was the preferred device. Future studies are planned to determine which of the two devices is more appropriate for obtaining IO access in the setting of paediatric emergencies.
| |
In early 2017, we were contracted by Worksafe NSW to evaluate their investigation of serious incidents process. This work included a literature review; interviews with all stakeholders, including inspectors, businesses, and next of kin and/or injured parties; and benchmarking against other jurisdictions. The work was completed in mid 2017.
Additional Survey Analysis
In June 2018 we were contracted to provide additional analysis of the data collected as part of a community survey looking at options for the possible redevelopment of the Gold Creek Golf Estate and Village. We provided additional demographic information and drilled deeper into the results to give the developers a better understanding of the community's concerns about the potential redevelopment of existing facilities.
Project Update
Simone Annis is currently working with our Associate, Julian Webb from Creeda Projects on an evaluation of the Canberra Innovation Network. This project is due for completion by the end of November.
WorkCover NSW 2010-2012
Three waves of stakeholder evaluation telephone surveys (at around 4,000 contacts per wave) and over 20 focus groups for WorkCover NSW over 2010-12, with Jetty Research as our quantitative survey partner. Survey design took place in collaboration with WorkCover, and data collation, analysis and reporting were completed to a high standard.
WorkCover NSW 2006-2009
Longitudinal Study on small businesses and workers compensation / OHS issues for WorkCover NSW. This three year study (in partnership with CREEDA Projects) used random telesurveys, telephone interviews and focus groups to better understand the factors motivating small businesses to engage and comply with workers compensation and OHS obligations.
WorkCover NSW evaluation 2009
In 2009, we prepared a comprehensive evaluation framework for the WorkCover NSW Three Year Small Business Plan activities. This work involved close collaboration with WorkCover staff to determine the use of existing data, the potential for better use of WorkCover data, and the requirements for evaluating activities. The Action Plan drew on these discussions and a literature review to set out a carefully structured approach to assessing the effectiveness of the 3 Year Small Business Plan, using a mix of information from existing sources, targeted surveys, interviews and case studies. The Action Plan commented on the strengths and weaknesses of different evaluation tools and presented an ‘analysis map’ to guide WorkCover staff in running program-specific and group-wide evaluation activities.
In 2009 we also undertook an evaluation of WorkCover’s Small Business Forums program – using interviews with a mix of stakeholders including participants, WorkCover staff and Forum facilitators. The evaluation was undertaken 12 months into the Forums program and assessed the extent to which the Forums were meeting their stated aims.
WorkCover 2011 evaluation
In 2011 we evaluated the Health and Safety Representatives program within WorkCover NSW using a carefully structured mix of stakeholder feedback (combining interviews, surveys and focus groups). | https://economicsolutions.com.au/category/evaluating-projects/ |
Digital Education Research @ Monash
Remote schooling and the rise of alternate ‘teachers’
The perils of algorithmic assessment
Colour-blind learning analytics improve the success of marginalised groups. What’s not to like?
TECHLASH #1 is out – digital education after COVID-19
Post-pandemic priorities … learning EdTech lessons from COVID-19
News and Opinion
DER on Radio National
Neil Selwyn is featured on the ABC Radio National show 'Future Tense' talking about his recent study of Australian public attitudes towards artificial intelligence (AI).
Remote schooling and the rise of alternate ‘teachers’
Is the Future of Higher Education Inevitably Going to be Digital First?
WSJ ‘Future of Everything’ podcast
Launch event: Teaching with technologies: Is TPACK still relevant?
Latest Publications
New article: “It’s a Black Hole . . .”: Exploring Teachers’ Narratives and Practices...
Katrina Tour, Ed Creely and Peter Waterhouse have a new article in Adult Education Quarterly. The article draws on a 6-month...
new article: Exploring the use of attendance data within the datafied school
Another new paper from our ARC-funded research into the datafication of schools ... exploring ‘anticipatory’, ‘analytical’ and ‘administrative’ aspects of how digitally-mediated attendance data is produced, used and imagined by schools.
New article: Automation, APIs and the distributed labour of platform pedagogies in Google Classroom
The human labour of school data
| http://der.monash.edu/ |
Pinnacle Renewable Energy has announced its financial results for the 13-week (Q4 2019) and 52-week (Fiscal 2019) periods ended Dec. 27, 2019.
The company reported a net loss of $9.9 million in Fiscal 2019, compared to a net profit of $2.7 million in Fiscal 2018. The change in net profit reflects higher distribution costs, higher amortization costs reflecting the company’s new production facilities, and higher production costs due to higher fibre costs, cash conversion costs and costs incurred for third-party wood pellet purchases, partially offset by reduced selling, general and administrative (SG&A) expenses. Excluding the impact of the Entwistle Incident, net loss in Fiscal 2019 was $12.8 million.
Revenue for Fiscal 2019 totalled $377.8 million, an increase of 8.7 per cent compared to $347.4 million for Fiscal 2018. The increase was primarily attributable to higher sales volumes mostly due to a full year of revenue contribution from the production and sale of pellets from the Smithers and Aliceville facilities, each of which contributed no production volume in Q1 – Q3 2018, offset by lower production volumes at the company’s B.C. facilities due to sawmill curtailments and reductions in sawmill residual deliveries.
Adjusted EBITDA totaled $47.2 million in Fiscal 2019, compared to $55.1 million in Fiscal 2018. Increased revenue was offset by higher distribution costs, higher production costs, including higher cash conversion costs (due primarily to fibre mix constraints which increased repair and maintenance costs), higher fibre costs due to extended sawmill curtailments in the B.C. region, and costs associated with the Entwistle Incident, partially offset by the impact of IFRS 16 and business interruption amounts recoverable.
Q4 2019 financial results
Revenue for Q4 2019 totaled $91.5 million, a decrease of 11.8 per cent compared to $103.7 million for Q4 2018. The decrease was primarily attributable to reduced production at the B.C. facilities, which processed a greater concentration of harvest residuals, to the Burns Lake maintenance capital shutdown, and to shipping delays due to the CN rail strike in Q4 2019.
The company reported a net loss of $3.2 million in Q4 2019, compared to a net profit of $7.4 million in Q4 2018. The swing reflects higher SG&A expenses and amortization costs reflecting the company's new production facilities, as well as increased finance costs, partially offset by reduced production costs. Excluding the impact of the Entwistle Incident, the net loss in Q4 2019 was $5.5 million.
Adjusted EBITDA totaled $11.3 million in Q4 2019, compared to $13.8 million in Q4 2018. The decrease is attributable to lower revenues in Q4 2019 compared to Q4 2018, increased distribution and SG&A costs and an increase in other expenses. Excluding the impact of $3.2 million associated with the Entwistle Incident, as well as $2.0 million related to the adoption of IFRS 16, Q4 2019 Adjusted EBITDA was $6.1 million.
Q1 2020 CN Rail disruptions
CN rail service has impacted the ability to get product to port effectively and has caused production disruption in Q1 2020. The January derailment in B.C. damaged Pinnacle-leased railcars and resulted in some lost pellets. Full recovery of costs from CN is expected.
The derailment caused service disruptions which impacted production output for a period of clean up for which Pinnacle will not be compensated.
Ten straight days of cold weather in January caused CN rail disruptions resulting in some facility downtime.
In February CN rail lines and B.C. ports have been disrupted by blockades resulting in downtime at Pinnacle’s northern facilities.
New off-take agreements
During the quarter, Pinnacle entered into a long-term, take-or-pay contract with Mitsui for 100,000 MTPA commencing in 2023.
This is the ninth contract signed with customers in Japan since the beginning of Fiscal 2018 demonstrating the company’s successful advancement of the strategy for sales growth into Japan. New contracts improve the company’s customer diversification across Japan, the U.K., South Korea, and Europe.
Production facility construction and upgrades
Plans to install a chipper and additional pelleter at the Smithers, B.C., facility have been finalized for a total capital cost of approximately $6.0 million. The upgrade will decrease costs and increase production run-rate output by approximately 15,000 MTPA. The project is expected to begin in Q1 2020, with completion expected in Q3 2020.
Construction at the High Level, Alta., facility progressed in Q4 2019 and is now in a planned suspension due to winter weather conditions until spring 2020 when warmer temperatures will allow for efficient construction to continue. An additional capital requirement of $6.0 million is expected, bringing the total capital cost to $60.0 million, with Pinnacle’s 50 per cent share being $30.0 million. Tolko has indicated that additional fibre will be available due to forest fire log processing, providing a strong supply of fibre for commissioning. As a result, management is confident that this will enable the facility to produce at the upper end of the 170,000 MTPA to 200,000 MTPA range. The facility is expected to be completed as planned in the fourth quarter of 2020.
The upgrades at Pinnacle’s Williams Lake, B.C., and Meadowbank, B.C., facilities are progressing on schedule and are expected to be completed and begin commissioning in Q1 2020 and Q3 2020 respectively. The upgrades will allow the two facilities to process a broader array of available fibre sources and achieve a series of safety and environmental advancements. This strategic investment will enhance the operating flexibility of the facilities and position Pinnacle to adapt to cyclical changes in wood fibre supply within the B.C. interior. Further, the equipment, technology and infrastructure improvements will result in an increase of 80,000 MTPA in combined overall production capacity.
Entwistle restart
The Entwistle rebuild has been completed, the furnace and dryer have been restarted, and commissioning of the new equipment is in process.
Restoration of the facility is expected at a total estimated capital cost of approximately $14.0 million. Other costs are estimated to be approximately $9.5 million, of which $9.1 million has been incurred year-to-date. Pinnacle is actively working with customers and partners to mitigate the impacts of the 2019 production shortfall and continues to work with the company’s insurance providers to determine the insurance recoveries available for the Entwistle Incident. Pinnacle expects substantially all costs incurred to be recoverable through insurance, subject to deductibles.
Outlook
Pinnacle expects growth in revenue and profitability over the next several years as a result of contracted price increases in most of our off-take agreements. In addition, as the potential demand for industrial wood pellets continues to grow globally, the company is well positioned to meet this demand growth through a combination of expansion projects at existing production facilities, some of which are currently underway, and new greenfield and brownfield growth projects. Moreover, Pinnacle will continue to evaluate potential acquisitions and joint ventures to grow our production platform, and continue to capture opportunities in the growing Asian marketplace as a result of its longstanding relationships with customers in the region.
The recent restart of the Entwistle Facility and strong initial performance combined with the commissioning of the destoner will add production volume throughout the year, and an expected positive contribution to Adjusted EBITDA in 2020. Additionally, as the Aliceville and Smithers facilities are both operating at full run-rate production, incremental production volume and Adjusted EBITDA contribution is expected for 2020.
The above-mentioned derailment and blockade of CN rail service has continued to impact the ability to get production to port, and in some cases has caused production disruption in Q1 2020. Pinnacle incurred additional costs to divert finished goods from its facilities to different ports for some shipments. Thus far in Q1 2020 the company has lost 20 kMT of production because of disruption to rail and port service, and it currently anticipates that approximately $2.0 million of Adjusted EBITDA will be missed in Q1 2020 as a result of the CN and port blockades and the subsequent rail delays.
Production output of Pinnacle’s B.C. sawmill suppliers has continued at consistently lower levels. Although current forecasts are for reduced stumpage costs for B.C. logs in mid-2020 and improved sawmill economics, Pinnacle continues to retain fibre inventories and employ other sourcing strategies to manage unforeseen disruptions.
While Pinnacle remains focused on improving fibre, fibre processing, haulage, and cash conversion costs, production and revenue are expected to continue to be impacted through 2020, as will the Adjusted Gross Margin as Pinnacle’s B.C. facilities continue to process a wider mix of harvest residuals. | https://www.canadianbiomassmagazine.ca/pinnacle-reports-9-9m-net-loss-in-2019/ |
It is very likely that we have all experienced nerves or anxiety at some point. When these symptoms are of considerable intensity, we can say that we have suffered a nervous breakdown.
A nervous breakdown occurs when the environmental situation exceeds the resources we have to deal with it. In this article, we will look at what this type of crisis is, its usual symptoms (and their types), and the causes and treatments that can be applied.
Nervous breakdown: what is it?
We use the term “nervous breakdown” to refer, in a non-medical way and in everyday language, to an anxiety attack. A nervous breakdown can occur both in healthy people (without any mental disorder) under very stressful conditions and in people with a certain type of mental disorder. In this second case, the nervous breakdown is often one of the symptoms of the underlying disorder.
A nervous breakdown can last from a few minutes to hours (most commonly), or even days or weeks.
But what exactly is a nervous breakdown? In everyday language, we use this concept to refer to heightened states of anxiety and nervousness that arise when we find ourselves overwhelmed by circumstances; in other words, when our resources are insufficient to meet the demands of the environment.
Often these requests are very stressful and lead to a number of characteristic symptoms, which we will see later.
Environmental demands
Generally speaking, it can be said that a person suffering a nervous breakdown exhibits a series of anxiety and/or nervous symptoms. Their ability to meet the demands of the environment is drastically reduced, and their functioning becomes impaired, dysfunctional or inadequate.
The demands of the environment in which the person is involved (professional, social or personal situations, for example) are perceived by the individual as too great and impossible to manage.
This perception can differ from person to person, which is why the causes or triggers of a nervous breakdown (environmental demands) will never be the same for any two people. They do, however, share one common element: the perception of uncontrollability, or of being unable to cope.
Symptoms
There are a number of characteristic symptoms of a nervous breakdown. However, it should be mentioned that these can vary considerably from one person to another, depending on their personal characteristics, the situations that trigger the crisis, the demands of the environment, etc.
The most common symptoms of a nervous breakdown are of three types: psychological, physiological and behavioral. While the three symptom types are related and often overlap, let’s take a look at some of the symptoms that each of these categories groups together:
1. Psychological symptoms
Psychological symptoms refer to the person’s psyche and mental processes. These include the following:
1.1. Restlessness
The person with a nervous breakdown may have a feeling of constant or intermittent restlessness. They may feel nervous and tense, as if “about to lose control.” This feeling is primarily psychological, but it can end up feeding other types of symptoms, such as the physiological ones.
1.2. Cognitive impairments
Cognitive impairments may also occur, such as difficulty recalling memories (memory alterations), difficulty paying attention and concentrating, and slowness in decision making (or an inability to make decisions at all).
As an aside, we know that mental disorders often lead to cognitive impairment (for example, depression and generalized anxiety disorder). Cognitive impairment (e.g. dementia) should not be confused with pseudodementia or depressive pseudodementia.
1.3. Irrational fear
Another psychological symptom that can appear during a nervous breakdown is irrational fear, which is often disproportionate or lacks a clear trigger.
2. Physiological symptoms
The physiological symptoms concern the body itself and include physical alterations such as the following:
2.1. Fatigue
Fatigue involves a strong feeling of tiredness and heaviness, which slows down the activities of daily living. It can be caused by ongoing stress, by psychological factors, or by both.
2.2. Loss of appetite
Another physiological symptom of a nervous breakdown is loss of appetite and the resulting weight loss. This can be caused by the chronic stress the person is under, or by the constant feeling of nerves in their stomach.
2.3. Sleep disorders
Anxiety (and psychological factors in general) and sleep are closely linked; thus, a person suffering from anxiety (or a nervous breakdown) is very likely to have trouble sleeping as well, making it difficult for them to achieve restful and satisfying sleep.
These alterations can take the form of difficulty falling asleep (onset insomnia), difficulty staying asleep through the night (maintenance insomnia) or early-morning awakening (terminal insomnia).
2.4. Headache
Migraines and headaches are also common in nervous breakdowns, alongside the other physical and physiological symptoms. These symptoms likewise appear in various anxiety disorders.
3. Behavioral symptoms
The behavioral symptoms of a nervous breakdown concern the person’s behavior. They include the following:
3.1. Social isolation
The person may end up becoming socially isolated, avoiding time with friends or a partner, not seeing family members, etc. This is usually caused by the discomfort produced by the other symptoms and the fear of suffering another nervous breakdown in social situations.
3.2. Aggressive behavior
Sometimes uncontrolled or exaggerated anger can appear, resulting in aggressive or agitated behaviors that only aggravate the discomfort and tension the person feels.
3.3. Excessive crying
Finally, another characteristic behavioral symptom of a nervous breakdown is crying, which is usually excessive (sometimes without a clear trigger) and inconsolable.
Causes
The causes of a nervous breakdown can vary from person to person. Usually these crises have a multifactorial origin and, as we have seen, they appear as the consequence of a demanding environmental situation in the face of which the person feels incapable of acting.
The main cause of a nervous breakdown is therefore a very stressful situation; examples include divorce, the loss of a loved one, heavy workloads, work problems, financial problems, etc.
At the biological level, there has also been talk of a genetic predisposition to this type of crisis which, added to a stressful situation, triggers the nervous breakdown. Heredity is therefore likely to play an important role.
Finally, another possible cause is an underlying mental disorder, such as an anxiety disorder, a psychotic disorder or a depressive disorder. It will be important to discern the symptoms in order to diagnose a nervous breakdown correctly. Personality and temperament factors may also play a key role in its origin; for example, people with neurotic traits are at higher risk of suffering one.
Treatment
The most appropriate treatment for a nervous breakdown is one that involves a multidisciplinary approach. Psychotropic drugs may offer some short-term benefits; in the long term, however, the ideal will always be a complete treatment that includes psychotherapy.
Psychological techniques that can be used include cognitive restructuring to deal with dysfunctional thoughts, relaxation and breathing techniques that decrease anxiety and physical symptoms, and psychoeducation to help the patient understand the origin and maintenance of their nervous breakdown.
In addition, offering the patient adaptive tools and coping mechanisms for stressful situations will also help to eliminate these symptoms.
| https://psychologysays.net/clinical/nervous-crisis-symptoms-causes-and-treatment/ |
The palmaris longus is seen as a small tendon between the flexor carpi radialis and the flexor carpi ulnaris, although it is not always present.
It is a slender, fusiform muscle, lying on the medial side of the flexor carpi radialis.
It arises from the medial epicondyle of the humerus by the common flexor tendon, from the intermuscular septa between it and the adjacent muscles, and from the antibrachial fascia.
It ends in a slender, flattened tendon, which passes over the upper part of the flexor retinaculum, and is inserted into the central part of the flexor retinaculum and lower part of the palmar aponeurosis, frequently sending a tendinous slip to the short muscles of the thumb.
It can be palpated by touching the pad of the thumb to the pad of the little finger and flexing the wrist. The tendon, if present, will be very visible.
Variation
The palmaris longus is a variable muscle, absent in about 16 percent of Caucasians, and less frequently absent in other populations. It may be tendinous above and muscular below; or it may be muscular in the center with a tendon above and below; or it may present two muscular bundles with a central tendon; or finally it may consist solely of a tendinous band.
The muscle may be double.
Slips of origin from the coronoid process or from the radius have been seen.
Partial or complete insertion into the fascia of the forearm, into the tendon of the Flexor carpi ulnaris and pisiform bone, into the scaphoid, and into the muscles of the little finger have been observed.
Additional images: front of the left forearm, superficial muscles (Musculuspalmarislongus.png).
Due to its apparent functional unimportance, its tendon is often harvested as a graft to replace other tendons should injury arise.
References
- Thompson NW, Mockford BJ, Cran GW (2001). "Absence of the palmaris longus muscle: a population study". Ulster Medical Journal. 70 (1): 22–4. PMID 11428320.
- Sebastin SJ, Puhaindran ME, Lim AY, Lim IJ, Bee WH (2005). "The prevalence of absence of the palmaris longus--a study in a Chinese population and a review of the literature". Journal of Hand Surgery. 30 (5): 525–7. PMID 16006020.
This article was originally based on an entry from a public domain edition of Gray's Anatomy. As such, some of the information contained herein may be outdated. | http://www.wikidoc.org/index.php/Palmaris_longus |
October 2008 Archives
There is a crucial difference between a meltdown and a slowdown. Like the difference between 25% and 8%.
As an indication of the kind of losses that can characterize a meltdown, Italian Prime Minister and media mogul Silvio Berlusconi lost more than 25% (or $2 billion) of his stock value with the 38% decline of the Italian stock market so far. In contrast, with the Chinese economy caught in a less ominous-sounding slowdown, according to Hurun's 2008 China Rich List the average wealth of China's 800 richest people declined by only 8% this year, and the compiler of the list concluded that China's rich are surviving the credit crunch in better shape than expected.
Yet as a palpable indication of the real impact of the global financial crisis on China, real estate heiress Yang Huiyan - the richest person in China last year - saw her wealth fall from $17.5 billion to $4.9 billion. And while the Chinese economy is currently seen to be slowing rather than melting, the plight of Yang Huiyan and other Chinese property developers in the current crisis forms part of a significant impact on China's real economy, reflected (as the Times put it) in human misery in real lives.
Yet much of the actual human misery in China associated with the crisis has fallen on the export manufacturing base in the Pearl River Delta, where an already dire situation has attained the features of a crisis. At least 2.7 million factory workers in southern China stand to lose their jobs after demand slowed considerably for electronics and toys. 9,000 of the 45,000 factories in the cities of Guangzhou, Dongguan and Shenzhen are expected to close down in the coming months. This year's orders from the US for Christmas products made by Chinese manufacturers have fallen off a cliff, and the latest CLSA China Manufacturers Purchasing Index (based on monthly questionnaires sent to 400 Chinese manufacturers), released in early October, indicated the steepest fall in the volume of both domestic and foreign orders since the survey began in June 2004.
52% of China's toy exporters, still reeling from inflated manufacturing and labor costs as well as a spate of recalls in 2007, have gone out of business in 2008, as have a number of Hong Kong-linked firms. One of the biggest of these was Smart Union Group and its Hejun Toy Factory in Dongguan, whose bankruptcy earlier this month was ascribed by local media to the firm's attempt to overcome tough export conditions by committing more resources while existing on loans. When Hong Kong banks tightened their credit facilities in the wake of the financial crisis, 7000 factory workers in China suddenly found the factory gates barred.
Yet if the financial crisis is wreaking havoc in southern China, Guangdong Vice Governor Wan Qingliang is not too ill-disposed to what's been going on. By emptying the cage for the new birds, as he put it, Wan sees the financial crisis as an opportunity to further the process of transplanting modern manufacturing into Guangdong's melting low-value empire. And if the misery in Guangdong can be passed off as unavoidable growing pains, bringing hardship to millions yet ultimately facilitating modernization, then its fate is tied up with an historic turning point where the need for transition has become critical under the influence of the ongoing financial crisis. As China approaches 30 years of economic reform, the growth that has brought China this far needs to be reconstituted to shift the balance of the Chinese economy from FDI and exports to domestic consumption, services and innovation.
With the latest hardship brought on by the financial crisis, the suffering toy factories of Guangdong could become monuments to a China that is slowly fading from view: a China characterized by low-value industries and blemished by recalls of unsafe and poor quality products. Guangdong, once the harbinger of China's economic miracle and now caught up in the crunch of the financial crisis, could contribute to bringing the era of the dirt-cheap China price to a close by advancing China's transition to modern manufacturing. Yet in Guangdong, the current price to pay is an overbearing share of misery, for which the promise of the China of tomorrow is little solace.
Image from Mainstreetmeltdown.com, displaying an ice sculpture that was installed by two artists in New York's Manhattan financial district, slowly melting away as a monument to the plight of the US economy.
Since the advent of economic reform in China, Chinese exports as well as China's share of world exports have grown at a fast pace to surpass that of many developed economies. As a result, by the end of 2006 China had become the world's third-largest exporter after the US and Germany. In April 2008 the WTO announced that China had overtaken the US to become the world's second-biggest exporter, second only to Germany.
Yet the rise in China's exports has also been characterized by an important shift in its export structure. Where twenty years ago China was primarily an exporter of textiles, textile articles and apparel, today China is the world's largest exporter of electronics and machinery-related products, which make up 43% of China's total exports. China is rapidly moving up the technology ladder and is therefore becoming more competitive in exports of high-value capital goods - industries where traditionally Western countries such as Japan, Germany and the US have held the competitive advantage. But to what extent is China really able to compete with these countries today?
To better analyze this competitive landscape, we can make use of the industry classification of the OECD in its STAN Bilateral Trade Database, edition 2006, to compare Chinese and US high-technology exports during the past few years.
According to the chart above, if we assume that the US' share of high-technology exports will continue the trend it has followed in the past 5 years, China's high-technology exports have already surpassed those of the US. So what does this mean? Is China the new high-technology power? Are there grounds for concern about China's threat to the competitiveness of Western industries?
I believe that if these questions were posed to Laozi, the father of the yin yang concept, his answer would be neither yes nor no, neither black nor white, but rather a combination of both.
Yin: A sourcing opportunity rather than a threat
According to OECD sources, some 55% of China's total exports are attributed to production and assembly-related activities, and 58% of these are driven by foreign enterprises, of which 38% are entirely foreign-owned. In fact, among the top 10 high-technology companies by revenue, not one of them is Chinese.
China's export performance, therefore, is directly linked to its specialization in assembly operations and the high value-added inputs imported from Western economies. This has facilitated a rapid diversification of its manufactured exports, from low-end manufactures to high-technology products.
Yang: A sourcing opportunity but a future threat
Although Chinese high-tech capability is still supported by foreign technology transfers and government subsidies, Chinese companies are developing competitive advantages in several areas of high-value industrial and equipment manufacturing. Good examples are Huawei (a telecommunications equipment maker based in Shenzhen), whose equipment and services were considered good enough to beat Siemens in a German tender; Zhenhua Port Machinery, which had a full two-thirds of global port crane orders in 2006; and Tian Di Science & Technology, the national leader in the design and manufacturing of coal mining equipment.
To effectively make use of China's cost advantages as a high-technology assembly center, foreign companies will have to carefully consider to what extent and with which strategic framework technology transfers are implemented and imported inputs are assembled in China. At the same time, and considering China's evolving high-technology exports, trying to avoid China as a high-technology sourcing destination will likely result in an unfeasible cost structure and a loss of competitiveness. Successfully dealing with China's sourcing challenges and particularities will finally determine whether China is a threat or an opportunity.
I recently visited several steel mills. Above all else, I found the mills were desperately in need of new orders.
Demand from both the domestic and overseas markets began to shrink markedly from August this year, and with the current financial crisis there is no evidence to suggest that this trend will be reversed anytime soon.

Stockists have until now been reducing new orders to steel mills, while at the same time cutting prices by large margins of up to 30%. Some small steel mills have been forced to close, and big mills have reduced production. The affected products include plates, coils, wires, welded tubes and other lower value-added products.

Prices for hollow bars, however, have dropped by 17% so far, on average less than the 12-30% drop of other steel products. Hollow bars are a kind of seamless tube, widely used in the machinery and petrochemical industries, as well as for boilers and power generation services. In the hollow bars market there are fewer competitors, and it is a comparatively higher value-added product. The overwhelming majority of hollow bars are sold directly from the mills to end-users in China and to overseas buyers.
The current global financial crisis is reshaping the world's economies at various levels and it seems that no country in the world will be immune from the ripple effects. The lack of liquidity in global financial markets, combined with the drop in purchasing power is leading to a decrease in profits that are forcing many companies to re-evaluate and completely restructure their procurement strategies. In essence, the only way to keep profit margins at a healthy level - while at the same time remaining competitive in global markets - will be cutting costs and/or increasing revenues. It is within this space that new opportunities lie to source from or manufacture in China.
The potential benefits of sourcing from China are now more than ever becoming powerful, even necessary drivers for some companies to overcome the present difficulties. If approached correctly and strategically, China can be a solution that will not only help stabilize the present turbulence, but also establish a foundation for a more sustainable and profitable outcome by engaging the main components of the profit equation.
Access to a large potential market - Impressive economic growth throughout the years has given rise to a new Chinese middle class with increasing purchasing power. This factor, combined with the Chinese saving culture and low dependency on credit, has minimized the effect of the global financial crisis in China. Companies can take advantage of this and see China not only as a cost saving solution but also as a new source of revenue.
Competitive strategy - Many companies will accept that China is a viable solution in the current financial crisis, so quickly engaging the best Chinese suppliers before the competition reaches them will be instrumental to successful procurement strategies.
All of these benefits have been mentioned before, but for many companies the global financial crisis has transformed the benefits of sourcing from China into essential requirements for remaining competitive or even solvent in the global market.
*This is the second in our series of Chinese language postings under the category 'Experiences in Dealing with Foreign Traders' (外贸经验), aimed at Chinese suppliers. This posting is the second of five parts setting out operational tips for Original Equipment Manufacturers in China.
This posting launches a new section at the China Sourcing Blog entitled 'On the Ground,' which is written by members of the China Sourcing Unit of THE BEIJING AXIS and based on their direct experiences in conducting sourcing operations on the ground in China.
I am currently involved in some projects to assist foreign clients in sourcing machines and equipment from China, and I can share some of my experiences of the processes of delivery, installation and commissioning.
With the growth of China's equipment exports, the world is becoming more aware of China's relatively advanced technology and strong capacity to manufacture equipment, apart from China's traditional price advantage. However, when foreign buyers look closely at made-in-China equipment, they find that the problems lie in the details. Poor quality paint, bad welding and rough cutting all reveal China's weakness in managing the details when compared to foreign brands. Even in packaging design, good foreign suppliers always fully consider users' convenience, while Chinese suppliers sometimes completely ignore the end-users' needs.
Chinese suppliers seem to care more about delivery times and schedules, which is why they can usually deliver on time. However, Chinese suppliers should learn to pay more attention to other factors. In my experience, when installing and commissioning equipment in foreign factories, the process could often be suspended for safety reasons. Yet Chinese suppliers, if stopped from working, will complain about foreigners' over-cautiousness and 'low' efficiency.
Chinese suppliers are manufacturing equipment as sub-contractors under various world-class brands, e.g. Siemens, Demag, Danieli, etc. This is evidence of the growth of Chinese manufacturing capacity. Yet what Chinese suppliers still lack are the 'soft skills' that would significantly improve their professionalism and project management.
Briefing: Since 1957, the China Import and Export Fair, also known as the Canton Fair, has been held twice annually, in spring and autumn. It is China's largest trade fair, with the greatest variety of products and the largest attendance and business turnover.
* This is the first in our series of Chinese language postings under the new category 'Experiences in Dealing with Foreign Traders' (外贸经验) specifically aimed at expanding the focus of the China Sourcing Blog to Chinese suppliers. This posting is the first of five parts setting out operational tips for Original Equipment Manufacturers in China.
Economic logic dictates that if one does not manufacture where the production costs are the lowest, one may lose global competitiveness and therefore market share. Yet this is not only a question of low labour costs. Far from making a decision based upon a single criterion, entrepreneurs analyze a wider picture within which many other factors may influence the success of sourcing operations in a certain country. Such factors include set-up costs, labor supplies, export subsidies, import and export duties, tax incentives, proximity, country risk, political stability, the investment environment, abilities and qualifications of human resources, access to technology, infrastructure, logistics and consumption markets.
Labor costs are the lowest in countries such as the Philippines and Vietnam, for instance, yet the lack of existing infrastructure or an industrial base are likely going to increase the cost of business operations. On the other side of the spectrum, developed countries such as the US or Germany excel in modern transportation networks, but labor costs are extremely expensive and will predominate in an unfeasible cost structure.
In order to identify the best destination for a particular sourcing operation, one should firstly determine the critical performance indicators that may differ from country to country, and secondly compare those indicators between the selected potential sourcing countries. Following this logic one could build an 'Export Competitiveness Model' which could determine the likelihood of a successful managerial decision.
An enterprise would typically consider four critical performance indicators in its decision-making process:
Exports: The more a country exports the more competitive its production in global markets will be. A country with a high level of export implies a developed industrial base and related transportation infrastructure.
Labor costs: The lower the labor costs, the lower the production cost structure and therefore the larger the profit margin.
Country risk: The lower the country risk, the more sound and stable the legal environment will be to support business operations.
Political stability: The more politically stable a country is, the more sustainable its operations will be in the future.
As shown in the graph above, China's profile is the thinnest and hence China remains the most competitive. China's country risk and political stability index is higher than that of Germany or the US, but with much lower labor costs it reaches a very similar level of exports. In addition, there is little difference between China's country risk, political stability index and labor costs compared to those of Brazil and Thailand, yet its level of exports notably exceeds that of these developing economies.
Ultimately we can find many reasons to choose China as the most suitable sourcing destination. Yet the full answer depends not on one or two factors but on a combination of different factors that together create a favorable environment for sourcing operations.
The International Sourcing Fair was held at the Shanghai Mart from 23 to 25 October 2008. I was there early on the 24th.
The best thing about the fair was the 'reverse sourcing' model applied at the event, facilitating both buying and selling via bi-directional trading. This method was different from any other fair I have attended before, as it reverses the normal pattern by making all the buyers sit behind booths while the sellers go around hunting for targets. There was also a wide range of products at the fair, from consumer and medical goods to industrial supplies.
Generally speaking, this kind of fair saves the buyer some time in not having to search for suppliers one by one, but the immense variety in products - which I originally considered a distinct advantage - made the fair look like a big grocery store. Given the limited space available, it would perhaps have been better to focus on a particular industry. From a sourcing point of view, it was hard if not impossible to identify good suppliers at the fair. Hence the hosts could consider setting parameters to filter suppliers by means of industrial rankings, sales volume or other standards, if they want to improve the fair and attract more buyers next year.
For more information on China sourcing events or assistance with attending any event in China, visit The Beijing Axis' homepage.
Times are changing at the China Sourcing Blog, and I'm happy to announce that this blog is about to be given a whole new lease of life.
Processes have finally been put in place for CSB to feature regular contributions from The Beijing Axis' China Sourcing Unit members, in the shape of 'on the ground' China sourcing knowledge emanating from their daily experiences of working in the field. We will also feature a new Events section to highlight upcoming sourcing fairs, as well as a Knowledge section to review the latest and best knowledge sources related to China sourcing. And of course we will continue to do what we have always done at CSB: Analyze, discuss, distill, dissect and probe this thing called China Sourcing.
Apart from all the additions mentioned above, we will also be piloting a new Chinese-language section on the blog aimed at Chinese suppliers.
With this we are slowly coming of age at CSB, and becoming what we have always meant to become: THE China Sourcing Blog.
| |
Under the supervision of the Associate Director of College Completion and as part of the office of Student Persistence, the Leadership and College Completion Coordinator will be responsible for enhancing leadership programming/development and providing outstanding customer service to all students at MSU Denver with enhanced focus on specific student populations as dictated by MSU Denver’s predictive model. The Coordinator will work with all areas of the University (Faculty, Staff, and Support Services) to coordinate programming and ensure the best possible customer experience for students. With College Completion, the Coordinator will act as a student advocate when appropriate and provide a place for all students to go to seek out information and assistance. With Leadership Programming, the Coordinator will work collaboratively with students, staff, faculty and the community to offer a broad range of services and leadership opportunities for student involvement. Students will receive both support and resources to graduate as engaged, active participants in their local, national and global communities. The Coordinator is critical in the functioning of the office of student persistence which provides communication, programming, and interventions to support student success.
For the College Completion area, the Coordinator will assist with developing new persistence programming and services to help MSU Denver students persist through to graduation. The Coordinator will work with existing programs to provide seamless co-management of students’ persistence needs. The Coordinator will work in collaboration with all University partners to ensure the best possible experience for all students. The Coordinator will work with the Associate Director of College Completion to provide professional mentoring and advocacy support to assist students in overcoming barriers or obstacles to persistence and retention. The Coordinator will refer students to other learning support services and instructors when appropriate. The Coordinator will work with faculty as needed to assure appropriate and effective learning support and resources are available for students that have requested assistance.
The Coordinator will assist in the collection of data and the preparation of reports to assess support for students, the effectiveness of the office of student persistence, and the continued objectives of the institution (i.e., under-represented student retention, academic progress, or graduation).
The successful candidate will ensure success and productivity by assessing the needs of every student who contacts the Office of Student Persistence and making recommendations that are individualized and appropriate for the student’s goals. The Coordinator will convey with complete accuracy the policies and regulations of MSU Denver for each specific degree or certificate program. The Coordinator will also assume full responsibility for addressing every student’s questions, which requires significant interaction with MSU Denver department staff, the ability to troubleshoot and resolve specific issues, and tracking the resolution of all issues.
(40%) College Completion
• Contact on an annual basis all students at MSU Denver who have earned 135 or more credit hours in order to help them assess their progress toward graduation.
• Conduct outreach on an ongoing basis to readmitted students, previously enrolled students and others who may be interested in completing an unfinished degree.
• Provide incoming/returning students all necessary information, assistance in locating resources, and support in restarting a college degree.
• Conduct initial intake interviews and offer basic advising for incoming students, including a thorough review of DegreeWorks reports and transcripts, and referrals to appropriate offices on campus for additional resources.
• Discuss options for college completion and provide support as students develop a plan for graduation.
• Gather initial student data through appropriate instruments including surveys, pre- and post-enrollment questionnaires, and interviews. Follow through with mid-point and terminal data collection, including exit interviews.
• Contact and work with faculty, department chairs, Deans, and other administrators as necessary to address proactively student needs in support of college completion.
• Determine best practices for adult student success; provide student support that improves retention.
(55%) Leadership Programming and Scholar Success Program Support
• Enhance Leadership Programming for the Student Persistence area, including the Student Academic Success Center, to help support retention and persistence goals at MSU Denver.
• Recruit and mentor a diverse group of leadership students.
• Create and maintain a database of the leadership students.
• Establish time-lines and objectives for the recruitment, documentation, tracking and follow-up of students.
• Coordinate services with academic department chairs, faculty and deans as appropriate to ensure student success.
• Oversee leadership budget in conjunction with the Associate Director.
• In partnership with the applied learning center, develop community partnerships to foster civic learning opportunities for leaders.
• Conduct program assessment and produce reports regarding the various leadership cohorts.
• Hire, train, and supervise peer mentors.
• Collaborate with the Scholar Success Program Coordinator as needed to support scholars.
(5%) Other duties as assigned
Master’s degree in higher education, counseling or related field.
A minimum of one year of full-time experience working in a higher education setting specifically working with the retention of students.
A minimum of one year of experience in Academic Advising.
Experience with leadership programming and/or development.
A minimum of two years supervising student staff.
A minimum of one year of budget management experience.
Experience using student information database systems.
Experience using Microsoft Office suite (Word, Excel, Access, Power Point and Outlook).
Demonstrated customer service experience.
Experience managing scholarship recipient selection.
Knowledge of strategic student retention best practices and predictive modeling.
At least three years of full-time experience working in a higher education setting specifically related to the retention of students.
Excellent oral and written communications skills with students, faculty and staff.
Knowledge and practical application of student development theory and concepts for student success.
Enthusiasm for the college experience and an understanding of the transformative role it plays in the lives of students.
Experience managing and directing volunteers.
Experience with marketing programs and outreach to students.
Proven ability to perform in a student focused and fast-paced environment.
Demonstrated ability to work as part of a team with a commitment to service and excellence.
Excellent project management skills, including the ability to set and adhere to strict timelines and propose and implement effective solutions to roadblocks and problems.
Experience working with a customer relationship management system (CRM).
The successful candidate must work with and be sensitive to the educational needs of a diverse urban population.
Experience using various leadership theories and methods for creating and implementing leadership training and programming for students.
Knowledge of civic engagement best practices.
IMPORTANT: In order to be considered as an applicant you must apply via the online application system, www.msudenverjobs.com.
References refers to a list of three professional references and their contact information.
Official transcripts will be required of the candidate selected for hire.
Employment Type: Administrative Staff
Degree Required: Masters
Experience: See Job Description
Level of Job: Analyst / Staff
Salary: Not Specified
Type of School: 4-Year / Masters Institution
Application Requirements: CV/Resume, Cover Letter, References, Transcripts
| http://www.scholarlyhires.com/Job/14882/Leadership-and-College-Completion-Coordinator/Metropolitan-State-University-of-Denver |
Remember Goldilocks? The little lady who wandered into the forest and tested each porridge and each bed till she found just the right fit? Well, business professionals often feel like they’re Goldilocks, looking for the perfect solution that’s catered just to their specific needs. Mixing metaphors a bit, it’s quite like looking for the foot that fits the glass slipper, or the needle in the haystack. That’s because professional services, especially a procurement service, are rarely one-size-fits-all.
A procurement service is the system of processes and procedures for sourcing suppliers of goods and services, awarding contracts based on submitted bids, and acquiring goods and services at the lowest possible cost. While procurement is often limited to these activities directly related to the sourcing of suppliers, it’s becoming more and more common to broaden this definition.
Modern definitions of the procurement lifecycle extend into the payment for and receipt of goods and services rendered, and even go beyond that into the accounting, financing, and inventory practices of the firm. The more comprehensive a definition your firm employs, the more involvement your procurement professionals will have across your company's vertical.
How To Choose the Right Procurement Service
There is a vibrant and healthy marketplace when it comes to choosing a procurement service. With as many options as there are in the industry, narrowing the list down can seem overwhelming at times. However, there are a few easy ways for you and your firm to define what the optimal procurement service looks like for your company.
The first step in defining the optimal procurement strategy for your firm is to begin by defining the various aspects of procurement as they relate to your specific company and industry.
Take, for example, the procurement lifecycle. As mentioned above, there is a huge range of definitions that companies assign to the procurement lifecycle. This is because it varies so heavily from industry to industry, and even from company to company.
Therefore, looking at relevant industry examples can help inform you on how to define the procurement life cycle within your own organization. Another great place to start is with your sourcing definition. The sourcing definition of your company applies to the tactics and strategies your procurement team employs in the sourcing of suppliers.
Sourcing is a crucial aspect of procurement. No matter where your procurement life cycle begins, whether with a purchase order requisition, or the reverse-auction process, sourcing is an integral element of procurement regardless. It’s the process through which your professionals identify the most compatible suppliers of goods and services based on submitted bids, and the lowest offered cost.
Creating a Strategic Sourcing Process
To optimize your company’s sourcing capabilities, it’s wise to compose a strategic sourcing process. This seven-step strategic sourcing process is a great place to begin developing your own.
Having a strategic sourcing process will elevate your sourcing ability and make for a stronger procurement performance year-to-year overall.
A Final Word
There really is no such thing as one-size-fits-all. But there doesn’t need to be, not in fairy tales, and not in your procurement department. One of the great beauties of the universe is the uniqueness inside each one of us; your procurement strategy should be unique too.
Optimizing your procurement strategy leads to a whole slew of benefits that aren’t covered in this article.
For more information on procurement strategy, or anything else related to procurement, visit ProcurePort today. ProcurePort is the internet’s number one resource for everything from procurement strategy and tactics to software, technologies, and more. | https://blog.procureport.com/procurement-service/ |
Plot.
Before we had time to question it, technology had changed so much. Every home, desk, and palm has a black mirror.
Wiki.
Black Mirror is a British dystopian science fiction anthology television series created by Charlie Brooker. He and Annabel Jones are the programme's showrunners. It examines modern society, particularly with regard to the unanticipated consequences of new technologies. Episodes are standalone, usually set in an alternative present or the near future, often with a dark and satirical tone, although some are more experimental and lighter. Black Mirror was inspired by older anthology series, such as The Twilight Zone, which Brooker felt were able to deal with controversial, contemporary topics with less fear of censorship than other, more realistic programmes. Brooker developed Black Mirror to highlight topics related to humanity's relationship with technology, creating stories that feature "the way we live now – and the way we might be living in 10 minutes' time if we're clumsy." The series premiered with two series on Channel 4, in December 2011 and February 2013. After the programme's addition to its catalogue in December 2014, Netflix purchased it in September 2015. Netflix commissioned a series of 12 episodes later divided into the third and fourth series, each comprising six episodes; the former was released on 21 October 2016 and the latter on 29 December 2017. A standalone interactive film titled Black Mirror: Bandersnatch was released on 28 December 2018. A fifth series, comprising three episodes, was released on 5 June 2019. The series has garnered positive reception from critics, received many awards and nominations, and seen an increase in interest internationally after its addition to Netflix. The show has won eight Emmy Awards for "San Junipero", "USS Callister" and Bandersnatch, including three consecutive wins in the Outstanding Television Movie category.
Filming Locations.
Toronto, Canada · Cape Town, South Africa · London, United Kingdom · Los Angeles, United States of America
You May Also Like.
Take a look at other titles that might interest you:

Rick and Morty (TV, 2013, rating 6.28): Rick is a mentally unbalanced but scientifically gifted older man who has recently reconnected with his family. He spends most of his time with his grandson Morty, whose family life causes a lot of distress.

Breaking Bad (TV, 2008, rating 6.11): Walter White, a New Mexico chemistry teacher, was given two years to live after he was diagnosed with Stage III cancer. As he enters the dangerous world of drugs and crime, he becomes filled with a sense of fearlessness.

The IT Crowd (TV, 2006, rating 5.69): A comedy about two nerds and their tech-illiterate female manager, who work in the basement of a successful company and are never treated with respect.

Master of None (TV, 2015, rating 5.75): A New York actor takes on pillars of maturity such as the first big job, a serious relationship, and busting sex offenders on the subway.

True Detective (TV, 2014, rating 6.28): An American anthology police detective series that uses multiple timelines to uncover the personal and professional secrets of those involved in investigations.

Better Call Saul (TV, 2015, rating 6.38): Six years before he meets Walter White, lawyer Jimmy McGill is becoming Saul Goodman, the man who puts the "criminal" in "criminal lawyer".

Twin Peaks (TV, 1990, rating 5.48): Laura Palmer's body washes up on a beach in Washington state. FBI Special Agent Dale Cooper is called in to investigate her death, only to uncover a web of mystery that leads him deep into the heart of the town.

Fargo (TV, 2014, rating 5.85): An anthology series of crime stories set in Minnesota.

Stranger Things (TV, 2016, rating 6.39): After a young boy goes missing, a small town uncovers a mystery involving secret experiments, terrifying supernatural forces, and one strange little girl.

Mr. Robot (TV, 2015, rating 6.46): A contemporary and culturally relevant drama about a young programmer who suffers from an anti-social disorder and decides that he can only connect to people by hacking them. He uses his skills to protect the people he cares about.

Lost (TV, 2004, rating 6.6): The survivors of a plane crash must work together to survive on an island that holds many secrets.

Silicon Valley (TV, 2014, rating 5.91): In the high-tech gold rush of modern Silicon Valley, the people who are most qualified to succeed are the least capable of handling success. An American sitcom that centers around six programmers living together and trying to make it big.

13 Reasons Why (TV, 2017, rating 6.29): After a teenage girl takes her own life, a series of tapes reveals the mystery behind her tragic choice.

House of Cards (TV, 2013, rating 6.17): Set in present-day Washington, D.C., House of Cards is the story of a ruthless and cunning politician and his wife, who will stop at nothing to achieve their goals, in a dark world of greed, sex and corruption.

BoJack Horseman (TV, 2014, rating 6.06): BoJack Horseman was the star of the hit 1990s sitcom "Horsin' Around," but twenty years later he is washed up, living in Hollywood, complaining about everything, and wearing colorful sweaters.

Dexter (TV, 2006, rating 6.21): Dexter Morgan, a blood spatter pattern analyst for the Miami Metro Police, leads a secret life as a serial killer, hunting down criminals who have slipped through the cracks of the justice system.

Sherlock (TV, 2010, rating 6.91): The famous sleuth and his doctor partner solve crimes in the 21st century.

Narcos (TV, 2015, rating 5.82): A chronicle of the war against Colombia's drug traffickers.

How I Met Your Mother (TV, 2005, rating 6.32): In a series of flashbacks, a father retells to his children the journey he and his four best friends took leading up to his meeting their mother.

Mindhunter (TV, 2017, rating 6.19): An FBI agent develops profiling techniques as he pursues notorious serial killers and rapists.

Westworld (TV, 2016, rating 6.71): A dark odyssey about the dawn of artificial consciousness and the evolution of sin, set in a world where human appetites can be indulged.
This article uses material from the Wikipedia article "Black_Mirror", which is released under the Creative Commons Attribution-Share-Alike License 3.0. | https://moviefit.me/titles/6177-black-mirror |
Q:
Number of Relatively Prime Factors
Given a number $n$, in how many ways can you choose two factors that are relatively prime to each other (that is, their greatest common divisor is 1)?
Also, am I going in the correct direction by saying if $n$ is written as $p_1^{a_1}p_2^{a_2}\dots p_k^{a_k}$, where $p_i$ is a prime and $a_i\geq 1$, then the number of factors $n$ has is $(a_1 + 1)(a_2 + 1)\dots (a_k + 1)$, and when I choose a factor $x = p_1^{b_1}p_2^{b_2}\dots p_k^{b_k}$, the number of factors of $n$ relatively prime to $x$ is $(c_1 + 1)(c_2 + 1)\dots (c_k + 1)$, where $c_i = 0$ if $b_i\neq 0$ and $c_i = a_i$ otherwise?
A:
We interpret the question as asking for the number of unordered pairs of (distinct) divisors of $n$ that are relatively prime. For me it is easier to think in terms of ordered pairs.
So we are producing an ordered pair $(x,y)$ of relatively prime divisors of $n$. Examine one after the other the primes $p_i$. At each prime we have three types of choices: (i) assign $p_i$ to $x$; (ii) assign it to $y$; (iii) assign it to neither. If we assign $p_i$ to $x$, it can be done in $a_i$ ways, for the power of $p_i$ is at our disposal. Same with $y$. And we can assign to neither in $1$ way,for a total of $2a_i+1$. Thus the total number of ordered pairs is $P$, where
$$P=\prod_1^k(2a_i+1).$$
This includes the ordered pair $(1,1)$. Now for unordered pairs of distinct relatively prime divisors, there are $\frac{P-1}{2}$ possibilities.
Remark: If we want the product of the two factors to be $n$, the counting becomes much simpler. For ordered pairs, we choose a subset of the set of primes, and assign $p_i^{a_i}$ in the chosen subset to $x$, and assign the rest to $y$. There are $2^k$ ways of choosing $x$, and then $y$ is determined. So there are $2^k$ ordered pairs, and for $n\gt 1$, there are $2^{k-1}$ unordered pairs.
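For readers who want to sanity-check the counting argument, here is a minimal brute-force verification in Python (the function names are mine, not part of the original question or answer):

```python
from math import gcd
from itertools import combinations

def coprime_pairs_brute(n):
    """Count unordered pairs of distinct divisors of n that are coprime."""
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return sum(1 for x, y in combinations(divs, 2) if gcd(x, y) == 1)

def coprime_pairs_formula(n):
    """Same count via P = prod(2*a_i + 1) over the prime factorization,
    then (P - 1) / 2 for unordered pairs of distinct divisors."""
    P, m, p = 1, n, 2
    while p * p <= m:
        a = 0
        while m % p == 0:
            m //= p
            a += 1
        P *= 2 * a + 1
        p += 1
    if m > 1:              # one remaining prime factor with exponent 1
        P *= 3
    return (P - 1) // 2

# n = 360 = 2^3 * 3^2 * 5: P = 7 * 5 * 3 = 105, so (105 - 1) / 2 = 52 pairs
assert coprime_pairs_brute(360) == coprime_pairs_formula(360) == 52
```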
Mathematics > Combinatorics
Title: Characterization of saturated graphs related to pairs of disjoint matchings
(Submitted on 23 Nov 2020 (v1), last revised 24 Nov 2020 (this version, v2))
Abstract: We study the ratio, in a finite graph, of the sizes of the largest matching in any pair of disjoint matchings with the maximum total number of edges and the largest possible matching. Previously, it was shown that this ratio is between 4/5 and 1, and the class of graphs achieving 4/5 was completely characterized. In this paper, we first show that graph decompositions into paths and even cycles provide a new way to study this ratio. We then use this technique to characterize the graphs achieving ratio 1 among all graphs that can be covered by a certain choice of a maximum matching and maximum disjoint matchings.
Submission historyFrom: Samuel Qunell [view email]
[v1] Mon, 23 Nov 2020 03:08:33 GMT (120kb,D)
[v2] Tue, 24 Nov 2020 01:58:55 GMT (120kb,D)
| http://export.arxiv.org/abs/2011.11187 |
FIELD OF THE INVENTION
This invention relates to rubber that has been reconstructed from crumb rubber and more particularly to a process involving the application of high pressure steps in which the crumb rubber is compressed in the presence of a binder, with the resultant product being used for building materials and structural elements.

BACKGROUND OF THE INVENTION
There have been many attempts to recycle rubber tires and the like by grinding up the tires into what is called crumb rubber. However, the reconstituted product made from crumb rubber has not been satisfactory because its structural properties do not approximate those of natural or synthetic rubber. Thus, prior attempts have failed to produce a satisfactory reconstituted rubber product.
As a result, reclaimed rubber has not been utilized in structural products such as tires, railroad ties, building panels and the like, and it was simply not considered for these applications.
As to railroad ties, utility poles, marine pilings or pallets, the dominant factor has been cost. Until a cost-effective replacement for wood is available, wooden poles, pilings and pallets will remain the standard of the industry, albeit an ecologically troublesome one.
It is noted that in all of these applications pressure-treated wood is utilized, with the wood impregnated with creosote, chromated copper arsenate (CCA, an arsenic-based preservative), or alkaline copper quaternary (ACQ); pentachlorophenol ("penta") is another common treatment. The purpose of the impregnation is to keep insects from attacking the wood and to kill insects, as well as to prevent mold and fungus from destroying the wood. However, all of these chemicals leach out, with the leached-out chemicals going into groundwater. The leaching problem is so severe that leaching of these chemicals is classified as a carcinogenic hazard. In particular, creosote, which is the most common impregnating resin, is particularly high in carcinogenic chemicals.
Note that when one sees green utility poles or green plywood, the green color is a result of the CCA treatment, which is particularly environmentally toxic. It is so toxic that respirators are required when cutting treated wood with a circular saw. In terms of utility poles and railroad ties alone, two billion pounds of preservative are utilized, worth 6.2 billion dollars. Because these two billion pounds of preservative leach into the soil, their use is completely banned in 26 countries other than the United States.
In terms of substituting plastics for these wooden building materials, cost is a prohibitive factor. Because the cost of plastic follows the cost of crude oil, as the crude oil price goes up so does that of plastics.
There is another problem with the use of plastics, and that is their strength and longevity. As to structural strength, polyethylene is an extremely weak material. It is very soft and has a relatively high coefficient of thermal expansion which must be kept under control. Thus the use of polyethylene as a building or structural material has been limited. Moreover, even if the polyethylene is reinforced, it is barely equal to wood, even though its use provides an adequately good defense against insects and degradation due to mold or mildew.
Thus in terms of strength, plastics cannot provide the properties which make wood so attractive. For instance as to plastic lumber utilized for decks, the plastic lumber is not considered structural. This is because its very rubbery flexible nature requires a massive substructure underneath, usually made out of the ecologically-challenged impregnated wood.
Aside from plastics, utility poles, railroad ties and the like have been made of concrete. However, the cost is usually prohibitive, as is the typical weight of the concrete column or railroad tie. For instance a typical railroad tie weighs about 220 pounds, whereas a concrete railroad tie weighs between 700 and 900 pounds. Moreover, the concrete is susceptible to the elements. Even with steel rebar reinforcement, the reinforced concrete when placed in a wet environment results in water working its way through to rust the rebar. Moreover, the water expands inside the concrete and cracks the concrete tie.
Additionally, with wood, plastic or concrete it is very difficult to control the modulus of elasticity of the structural product. For instance, wood elasticity depends on the species and the way the wood is cut; since its composition cannot be manipulated, its elasticity cannot be controlled. Thus, the modulus of elasticity (MOE) cannot be altered to provide, for instance, a soft-riding railroad tie.
With respect to building panels, one can obviously vary the density of the core material and obtain different properties for wood-faced panels. Building panels are more or less monolithic in structure so that as long as one has obtained good performance and load carrying capability, then appropriately priced wood-based building panels can be manufactured.
All of the above described structural elements are rubber-free. Were it possible to utilize a molded rubber product reconstituted from crumb rubber, one could achieve inexpensive insect and weathering resistance while at the same time being able to vary the modulus of elasticity of the structural element.
By way of further background, in order to reinforce non-wood building elements such as railroad ties, utility poles and the like made of plastics, it might be thought desirable to reinforce the plastic with bamboo. In the past, bamboo fiber, which is for instance 20% stronger than steel pound-for-pound, has been embedded in polyethylene plastic to provide a structural article. However, the problem with the utilization of bamboo fiber in such plastics is that the plastic manufacturing process depends upon heat for melting the plastic, typically 350°-400° Fahrenheit. When bamboo is introduced into the melted plastic, one almost certainly introduces at least a tiny percentage of moisture. It is noted that moisture and plastics do not go together. This is because moisture creates steam during the heating of the plastic. Thus, when trying to manufacture building or structural elements out of polyethylene reinforced with bamboo, not only is the structural integrity in question due to the uncontrolled moisture content and the required heat, but one is also faced with the rising cost of polyethylene itself.
It is noted that the moisture content at which bamboo is strongest is around 9%. With more moisture, strength drops off. If one attempts to dry the bamboo to reduce moisture content to less than 9%, the resulting reinforced plastic article exhibits undesirable properties.
Note there are 1,330 varieties of bamboo and there are 25 to 30 types of bamboo referred to as timber bamboo. The timber bamboos are the ones that have exceptional strength, noting that the majority of the other bamboos are really no more than grasses. Moreover, the majority of bamboos are short, maybe 3 feet tall, whereas the 25 to 30 timber bamboos can grow to 110 to 120 feet high.
It will be appreciated that bamboo is typically very good at pulling silicates out of the soil, with the silicates giving bamboo its strength. If one cuts a tube of bamboo utilizing for instance a skill saw, one can actually see sparks fly off the saw due to the toughness of the bamboo. Moreover, bamboo grows in linear bundles which are nearly perfect bundles. Thus one could take a 40 foot tube of bamboo and split it at one end and the split would go all the way down to the other end perfectly lined up. There are those who refer to bamboo as nature's composite, and bamboo has been compared very favorably to Kevlar and carbon fiber.
In short, using bamboo as a strengthening agent for plastics has not proved successful. There is therefore a need for an inexpensive, eco-friendly, insect- and weather-resistant molded structural member.

SUMMARY OF INVENTION
It has now been found that crumb rubber can be reconstituted into moldable high quality rubber utilizing a high pressure process in which the crumb rubber is made to flow into a mold. This reconstituted rubber product can then be molded into a wide variety of products including structural elements. Since the final molded products are made from inexpensive crumb rubber, the resultant molded product is likewise inexpensive. The final rubber product is insect and weather resistant. Moreover, the molded product can incorporate reinforcing members or fillers to even further reduce the cost of the final product. Such reinforcing elements and fillers include polymer based elements such as polystyrene strapping materials as well as bamboo and rice hulls.
It has been found that one can reduce the amount of crumb required by adding fillers such as rice hulls at loadings up to 80% rice hulls to reduce overall cost without sacrificing structural performance.
As part of the reconstitution, the crumb rubber is compressed at high pressure in the presence of a specialized urethane, sodium silicate or other acceptable glues. In one embodiment, the applied pressure is stepped until the crumb rubber is flowable by first applying 1600 PSI and then, in 15 second intervals, stepping up the pressure by 500 PSI until the pressure reaches 3600 PSI.
The utilization of rubber, whether or not reinforced with bamboo or other additives, provides an exceptionally good insect-repellant and weather-resistant material that can be used in fence posts, railroad ties, telephone poles and indeed in building panels, with the modulus of elasticity being particularly well controlled. Gone is the requirement for caustic preservatives, and with the utilization of crumb rubber, the cost of the building or structural elements is well below that of wood structures.

DETAILED DESCRIPTION
While plastics have been investigated in the past, the question was whether one could find a rubber substitute for plastic that was inexpensive enough to provide structural materials with the required properties. It was found that one could take crumb rubber and certain binder chemicals to produce a resin that one could mold, with or without reinforcing materials or fillers, to provide the required ecologically-friendly, strong building component.
While it was theoretically possible to mix crumb rubber with epoxies and polyesters, the expense is prohibitive.
Crumb rubber on the other hand has the ability to last 100 years under ultraviolet light, has the ability to take shock and impact and its low present cost of 8 cents a pound is an attractive starting point. What is then required is a perfect resin matrix material. Note previously in the processing of crumb rubber, the crumb particles were forced together at only 40 PSI and in order to make them adhere one had to use a large amount of glue or adhesive.
On the other hand, it was found that the amount of glue or adhesive could be reduced to less than 10% by weight of the total article as long as one uses specially tailored urethane or sodium silicate binders and boosts the pressure above 1600 PSI, where the crumb rubber was found to change to a flowable plastic state. Specifically, it was found that the crumb rubber turned into a flowable product that can be molded into a tire, with the resultant molded tire having properties that almost exactly duplicate a tire made of natural and synthetic rubber. This is confirmed when providing a transverse cut through the tire.
It is noted that in the subject process of providing reconstituted rubber, one does not add heat. The practically zero energy usage to make the molded parts makes the process exceedingly cost effective as it operates in an ambient temperature cure cycle.
As compared to wood railroad ties, for instance, the most expensive parts of making a plastic railroad tie are the extruders and extruder lines, which operate at very high temperatures and require the raw material to be heated and then cooled.
Moreover, hard woods are very difficult to impregnate with creosote and the like, such as CCA or penta, because these materials do not penetrate into hard wood such as oak. If one wishes to increase the penetration, one might seek to use a soft wood such as pine. However, pine is too soft for railroad tie applications.
As mentioned before, creosote use was the only cost effective way of making railroad ties, poles and the like insect and weather resistant. Note that for creosote the railroad tie is placed in a large metal tank and a vacuum is drawn on the wood after which the creosote is injected to penetrate the wood fibers.
On the other hand, in the subject invention crumb rubber is not only ecologically-friendly but also is resistant to environmental degradation from moisture, hot and cold cycling and insect infestation.
Crumb rubber is usually retrieved from recycled tires that are ground up to about the size of a lump of coal. Thereafter these nuggets are ground down to about walnut size, with further grinding bringing the walnut-sized bits of rubber down to mesh sizes from −10 to −40.
While it is possible to grind the rubber down to a −300 mesh size, it is more expensive to provide such fine particles.
With respect to tires such as truck tires, earth mover tires or passenger car tires, the percentage of natural rubber is usually high as compared to any synthetic rubber utilized for the tire. While the actual ratio of natural rubber to synthetic rubber varies from tire to tire, it has been found not to be a critical issue with the subject reconstituted rubber.
In terms of structural elements provided by the reconstituted rubber process described herein, the final reinforced product is made in one embodiment through a composite process, for instance involving a reinforcing fiber that basically carries the load, with the resin matrix maintaining alignment and an isotropic high pressure keeping all components locked in space. If non-flammable rice hulls are used as an extremely inexpensive filler, the interlocked rice hulls also add structural rigidity to the molded product while at the same time being fire resistant. In addition, polystyrene straps can be inserted into the rubber matrix to provide excellent reinforcement properties.
As will be described, the particular urethanes used, such as those available under the trade names Jowat and Gorilla Glue, have certain ratios of isocyanates to pre-polymers in the mixture. It has been found that most urethanes do not work well with crumb rubber as binders. However, the Jowat and Gorilla Glue urethanes work precisely because of the high isocyanate-to-pre-polymer ratio in the mixture.
Note with the subject process that the reconstituted rubber may be molded to any arbitrary shape.
Moreover, unlike polyethylene by itself, the crumb rubber matrix can be utilized with a variant of the adhesive system to be able to bond the crumb rubber together so that for instance one can bond brackets to the crumb rubber molded piece.
Note that in one embodiment the adhesive or glue mixed with the crumb rubber is predominantly a urethane system with a high isocyanate content. With numerous isocyanate sites there is a multiplicity of tenacious bonding sites, with urethanes in general being known for their ability to bond to cellulosics and other substrates.
In terms of, for instance, railroad ties, one would blend the crumb rubber with 1 to 2% of the binder chemical in, for instance, a high-shear or high-intensity mixer, with the crumb rubber and its other constituents deposited at the bottom of a mold.
Thereafter, in one embodiment in which the molded product is to be reinforced, layers of bamboo alternating with layers of crumb rubber are placed one on top of the other such that the alternating layers of crumb rubber and binder mix and layers of unidirectional bamboo fiber are subjected to pressures above 1600 PSI. The pressure, provided for instance by a press, can be applied in 10 to 15 second intervals at pressures starting at, for instance, 1600 PSI and stepping to 2100 PSI, 2600 PSI and finally 3100 PSI. Similar performance has been achieved using sodium silicate binders.
Note in one embodiment there is a 42 minute cure time involved, although this can be dramatically reduced with the addition of certain chemical additives.
It will be appreciated that the curing process of the reconstituted crumb rubber offers a major advantage: the molded material cures in the presence of moisture, with the moisture acting as the catalyst that initiates the cure. This is useful in terms of the introduction of bamboo because in the crumb rubber process there is inherently enough moisture from the bamboo to initiate cure. If there is not enough moisture, adding a few drops of water is generally effective, since typical urethane cure cycles involve the water molecule as the catalyst.
Note in the curing process the urethane molecules form long chain molecules which cross link and produce the cure.
As to the weight of the final product, railroad ties usually weigh about 220 pounds per tie and with 4 cubic feet of material in a railroad tie one is looking at 50 to 55 pounds per cubic foot of density.
In terms of the particular stepped pressurized treatment in the mold, it has been found that if one closes the mold and increases pressure in increments of 500 PSI, the crumb rubber starts to enter a kind of plastic state and to flow around itself or any other reinforcing materials such as bamboo sticks. If one gives the rubber a little time to flow at one pressure, then the next increase in pressure is more effective at completing the flow. By giving the resultant composite a little more time to flow at each step, one can bring the press down harder for another increase of 500 pounds per square inch.
It has been found in one embodiment that when one starts for instance at 500 PSI and goes up to 2000 PSI then the cure time is about 42 minutes.
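To make the stepped-pressure cycle concrete, the following is a minimal sketch (in Python, purely illustrative and not part of the patent disclosure) that generates the schedule described above, assuming a 1600 PSI start, 500 PSI increments, 15 second dwell times and a 3600 PSI target:

```python
def press_schedule(start_psi=1600, step_psi=500, dwell_s=15, target_psi=3600):
    """Yield (elapsed_seconds, pressure_psi) steps for the ramp described
    in the text: hold each pressure for dwell_s seconds, then step up by
    step_psi until target_psi is reached."""
    elapsed = 0
    pressure = start_psi
    while True:
        yield elapsed, pressure
        if pressure >= target_psi:
            break
        elapsed += dwell_s
        pressure = min(pressure + step_psi, target_psi)

for t, p in press_schedule():
    print(f"t = {t:3d} s: hold at {p} PSI")
```

Running the sketch prints the five holds at 1600, 2100, 2600, 3100 and 3600 PSI over one minute; actual dwell times, starting pressures and targets vary across the embodiments described in the text.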
Rather than using bamboo as a filler, in one embodiment of the subject invention rice hulls are preferred, for instance for structural panels. Additionally polystyrene straps can be used for structural enhancement.
While the present invention has been described in connection with the preferred embodiments of the various figures, it is to be understood that other similar embodiments may be used or modifications or additions may be made to the described embodiment for performing the same function of the present invention without deviating therefrom. Therefore, the present invention should not be limited to any single embodiment, but rather construed in breadth and scope in accordance with the recitation of the appended claims. | |
When I was a child, I hated mowing the lawn. I had to add gasoline to the greasy engine, I had to pull start the mower, and pull again, and again. Prime it, pull, pull, pull, finally it would start. I would spend half a day mowing our lawn. It was hot, grass made my eyes water and made me sneeze, and I would get sweaty and dirty. It was miserable.
However, I have been spoiled by renting, and all of our landlords have taken care of lawn maintenance. I have not mowed a lawn in several years, and I was loving it. However, that came to an end today. The church lawn has been looking like a jungle after our spring came quite early. I was putting it off, and putting it off, and today I could not take it anymore, so I mowed the grass.
I didn’t just use any lawn mower, however, I bought a reel lawn mower for the church, you know, the old-timey type with the spinning blades and no engine? Yep, that type. So I didn’t have to worry about filling it with gas, nor did I have to worry about pull starting it, or priming it. I didn’t have any grease, no loud noise, and no grass (or twigs) being thrown out with immense velocity. It was just me pushing this simple manual lawn mower and all I could hear was the sound of the blades spinning and the “swish-swish” of the grass being sliced so cleanly and evenly.
At the very beginning I was frustrated. I was frustrated because I did not go to seminary to mow the lawn. But as I got into the task, it was blissful. There are few things that are better than meaningful physical work.
However, that was radically different today when I was mowing the lawn. I could look over what I had done and see very visible and very tangible fruit of my labor. It looked much nicer after the grass had been cut so evenly and so uniform. My arms were a bit sore after pushing the mower over the uneven ground, but that too is the fruit of labor. At the end of my task which I began reluctantly, I simply wanted to keep going.
I have had this experience several times over the past several weeks. I’ve replaced shower heads in the showers at church, I’ve fixed a faucet on one of the sinks. At first I was frustrated that I had to do these tasks which seemed to be meaningless and not fitting for my calling. However, I have come to view it very differently. I actually enjoy some of the maintenance tasks around the church. At times, it can get burdensome when I have a worship service and sermons weighing down on me, but it gives me a welcome break from the day to day of my time here.
After being called to a position that typically does not have many physical labor tasks involved in it, and renting where my landlord takes care of the maintenance, I have been able to understand again, not only in my head but also in my heart, the value of human labor. The real tragedy of modernity is not that we have to work, but that we have to work in ways that dehumanize us and divorce us from our understanding of labor as good for the soul. Likewise, one of the tragedies of unemployment is not just that people lack a sufficient income, but that in many instances, it deprives people of their ability to expend labor in a meaningful way.
Work is something that was given to us by God even before sin entered the world. “The LORD God took the man and put him in the garden of Eden to till it and keep it” (Genesis 2:15, NRSV). Humans were created from the very beginning to work, not to be exploited, but to work and reap the benefits of that work. Adam tended the garden so that he could eat from his labors. However, sin perverted this order and in many instances, we must do meaningless work for someone who treats us poorly so that they can reap the real benefits of it. However, despite this, work still remains crucial to humanity.
Interestingly enough, even the visions of the arrived Kingdom of God include work, Isaiah 65:21-23 includes labor: building houses and planting vineyards. However, the difference is that those who build the houses live in them, and those to tend to vineyards eat its fruit. This is the order that we were created for, and this is why we were created for work, not the twisted version that we have now.
However, twisted as it is, work continues to be important for people, and there must be opportunities for people to work. Pastoring a church in a low-income neighborhood, I often have people tell me that what poor folks need is a job, not more government money. I could not agree more. The folks who are financially or materially poor do not need more assistance, they need smarter assistance. We need to make our safety net one that honors and respects the value that work has and the dignity that work can provide.
I have also had people tell me that the Bible says that if you don’t work, you don’t eat, which, of course, the Bible does not say. However, 2 Thessalonians 3:10 does read, “…Anyone unwilling to work should not eat” (NRSV). The difference here is that unwilling and unable are two different things. Someone who is not able to find a job is in a much different situation than someone who does not desire to find a job. Work has a value in it that nothing else can provide. Work was given to us by God, but was twisted after the fall. Our hope is not that we will one day be delivered from work, but rather that we will be able to work in such a way that we can see it as a meaningful expenditure of human energy.
As for me, if I don’t answer my phone at church, perhaps I’m mowing the lawn. Perhaps I’m fixing a faucet or patching up a wall. One day I look forward to the new heaven and new earth, when I can simply bask in the ability to mow the lawn with my reel lawn mower.
This entry was posted in Spirituality and tagged Calling, Maintenance, Small church, Work on April 4, 2012 by Matthew van Maastricht. | https://thealreadynotyet.com/tag/work/ |
M.L. Gordon has been a contributor to a number of national and international haiku journals, as well as to the Utmost Poetry Gallery.
moment—and in three lines, it is just a moment—of inward growth, of contemplation and depth unapproachable in longer poetic forms.
The way haiku is often taught in elementary schools does it a great disservice, as it is customarily presented as an easy form of poetry, a little thimbleful of 17 syllables "about nature," as many well-meaning teachers explain.
Traditionally, the Japanese haiku has five syllables in the first line, seven in the second, and five in the third; however, English-language haiku tends to adhere more to the "short-long-short formula rather than a specific syllable count. And while many haiku do revolve around nature, modern haiku writers give more emphasis to the "a-ha!" moment—a spontaneous, reaction to the world around them. The result is authentic and immediate poetry—all at once incisive, reflective and meaningful without artificially attributing significance to an experience.
It is hard not to feel the profundity of such a moment expressed so acutely.
The blending of images here—the implied instead of stated comparison between the moon and the scar—speaks for itself about a memory of pain. It also demonstrates another principle of haiku: the caesura, or pause (here, between the first and second line), at which point two images or ideas are often juxtaposed unexpectedly, seen with a narrowed focus, or punned upon for that "a-ha" moment of clarity so honored in haiku.
The message regarding the value of human life is clear, but without being overbearing. The image created, haunting in its poignancy, becomes in a way its own small sermon.
With its simple nature, haiku as a warm-up can serve as a way to practice fresh imagery minus overt similes, heavy modifiers and forced rhyme, the "baggage" of many Western forms of poetry. And while haiku in and of itself is rewarding, it often becomes a jumping off point for longer compositions, such as haibun (haiku prose; a haiku with commentary) or renga (linked/sequenced stanzas). It may even give the poet a good concrete image on which to base a more Western form of poetry. Modern Haiku, which celebrates some more experimental modes of Western haiku, and William J. Higginson's The Haiku Handbook: How to Write, Share, and Teach Haiku are wonderful resources for learning about this form of poetry.
Simply stated, haiku is a way to strip away that which is unnecessary in our lives and reflect upon the divine spark within us all. | http://www.utmostchristianwriters.com/articles/article0019.php |
Today, 22 April 2021, the Kapellerfeld Sehri & Iftar timings are Sehri Time: 03:48 AM and Iftar Time: 7:56 PM, as per the Sunni (Hanafi, Shafi) school. View a 30-day calendar for Kapellerfeld with Suhoor and Iftar times, with PDF download and print features.
Ramadan Time 2021 in Kapellerfeld is based on the times of Inteha e Sehar and Iftar. Today, 22 Apr 2021, Kapellerfeld Sehri time is 03:48 AM and Kapellerfeld Iftar time is 7:56 PM. The total duration of fasting depends on the number of hours between Suhoor and Iftar, that is, from dawn to dusk.
Accurate information about the Kapellerfeld Sehri time and Kapellerfeld Iftar time is provided on this page. The Iftar time in Kapellerfeld for Shia Muslims comes a few minutes later than for Sunni Muslims, and there is also a difference in the Sehri time in Kapellerfeld between the two fiqhs. Besides Ramadan, people also do Nafil fasting. They keep and open their fast according to the Sehri time today in Kapellerfeld and the Iftar time today in Kapellerfeld.
People from other cities also visit Kapellerfeld, and they want to remain updated with information related to the Kapellerfeld Sehri time and Kapellerfeld Iftar time. You can also use this page if you are visiting any nearby town or village. However, it is recommended that you allow a difference of one or two minutes in the Sehri time and Iftar time for such locations.
Today Sehri Time in Kapellerfeld is 03:48 am on 22 Apr 2021. It is usually recommended to stop eating one or two minutes before the Sehri time. The beginning time for Fajr prayer is the start of fasting, and the Quran verse for the Sehri start time is:
"And eat and drink until the white thread of the dawn becomes distinct from the black thread (darkness of night); then complete the fast up to the night."
Today Kapellerfeld Iftar Time is 7:56 pm on 22 Apr 2021, and tomorrow, 23 Apr 2021, Iftar time will be 7:58 pm.
Ramadan Start: Apr 14, 2021 & Ramadan End: May 13, 2021. | https://hamariweb.com/islam/kapellerfeld_sehr-o-iftar-timing4220.aspx |
Climate change, pandemics and countless conflicts are leading to a fundamental disorientation that characterises our times: The world seems to have fallen apart. Traditional explanations are no longer adequate to deal with the complexity of the present, which oscillates between digital communication and radical individualism, past, present and an uncertain future. Disoriented between innovation and tradition, Western society is in permanent crisis mode.
Art reflects these phenomena and develops its own strategies for dealing with them. Especially in large-scale installations, the audience is confronted with an abundance of information and events that simultaneously test their receptivity. The clear distinction between artwork and audience dissolves in favour of multimedia spaces in which the internet and digitalisation take on leading roles. Installations are no longer viewed, but experienced; they set an interactive process in motion that also facilitates a variety of approaches.
The exhibition unites installations by nine international artists of the younger generation. The installations address different themes, whether artificial intelligence, ecology or gender fluidity. They illustrate that social questions can no longer be answered in a single valid way. Each one forms a cosmos that lives from the interplay of time and space, of different objects and media formats. This inner density is continually co-produced by the audience as it arranges and assembles the elements according to the situation. Together, the nine installations create the image of a world that has come apart at the seams, in which ambivalences have to be endured to the point of pain – in order to interpret them productively.
Installations by: | https://skny.com/news-events/julian-charriere-in-world-out-of-joint-9-installations |
Introduction {#sec1-2055217319827618}
============
Cognitive dysfunction has long been recognized as one of the prominent disabling sequelae of multiple sclerosis (MS). The prevalence of cognitive dysfunction ranges from 43% to 70% and can be present even in the early stages of the disease.^[@bibr1-2055217319827618]^ Patients with MS experience cognitive dysfunction in a number of domains, most prominently in attention,^[@bibr2-2055217319827618]^ visual and verbal memory and processing speed.^[@bibr3-2055217319827618]^ While the exact mechanism of cognitive dysfunction in MS is not yet known, brain atrophy is increasingly recognized as a marker of MS disease progression and severity, likely reflecting ongoing degeneration in both white and gray matter.^[@bibr4-2055217319827618]^ Brain volume loss in patients with MS has been shown to occur at a faster rate than in healthy controls. Estimates of average brain volume loss in a normal adult range from 0.1% to 0.3% annually while brain volume loss in an untreated patient with MS is estimated at 0.7% annually.^[@bibr5-2055217319827618]^
Brain volume loss in MS has been shown to correlate with worsening of disability as assessed by a number of clinical scales.^[@bibr6-2055217319827618]^ Whole brain volume and regional brain volume loss have also been shown to correlate with cognitive decline in MS as assessed by the cognitive elements of a number of disability scales and neuropsychological tests.^[@bibr7-2055217319827618]^ Various brain regions have been identified as regions of interest (ROIs) related to cognitive dysfunction in MS including cortical gray^[@bibr8-2055217319827618]^ and white matter,^[@bibr7-2055217319827618]^ thalami,^[@bibr9-2055217319827618]^ basal ganglia,^[@bibr10-2055217319827618]^ amygdalae and hippocampi.^[@bibr11-2055217319827618]^
Subjective cognitive concerns (SCCs),^[@bibr12-2055217319827618]^ previously called subjective cognitive complaints or memory complaints, are a self-reported perception of dysfunction in memory or thinking with or without impairment on objective cognitive testing.^[@bibr13-2055217319827618]^ SCC in the absence of objective impairment on neuropsychological testing has been termed subjective cognitive decline (SCD) and is recognized as a preclinical stage of mild cognitive impairment and dementia.^[@bibr14-2055217319827618]^ SCD has been explored most extensively in Alzheimer dementia in which it has been shown to be a predictor of progression to dementia^[@bibr14-2055217319827618]^ and to correlate with neuroanatomical changes including reduced hippocampal volumes,^[@bibr15-2055217319827618]^ but little is known about the clinical and pathological relevance of SCC and SCD in MS. Some studies have shown a correlation between SCC in MS and objective cognitive impairment, especially in cases of mild impairment of immediate recall and processing speed,^[@bibr16-2055217319827618]^ while other studies have shown a gap between subjective reports and objective testing and a stronger correlation of SCC with depression^[@bibr17-2055217319827618]^ and fatigue.^[@bibr18-2055217319827618]^
Our aim is to explore whether SCCs can relate to reduced brain volumes in MS, in particular, if the patient's experience of dysfunction is associated with pathological changes in the brain. A number of different measurement tools have been used to assess self-reported SCC in MS including the cognitive elements of the Multiple Sclerosis Quality of Life-54,^[@bibr17-2055217319827618]^ the Cognitive Failures Questionnaire,^[@bibr18-2055217319827618]^ the Perceived Deficits Questionnaire,^[@bibr16-2055217319827618]^ the Multiple Sclerosis Neuropsychological Screening Questionnaire (MSNQ)^[@bibr19-2055217319827618]^ and the Cognitive Function Scale.^[@bibr20-2055217319827618]^ Given the prevalence of standardized patient-reported outcomes (PROs) for assessing the patient's subjective experience of illness as well as their use in clinical trials in MS,^[@bibr21-2055217319827618]^ we relied on PROs reflecting cognition from the Quality of Life in Neurological Disorders (Neuro-QoL) measures in our study. Neuro-QoL is a widely used, National Institute of Neurological Disorders and Stroke (NINDS)-funded, self-report battery of short form questionnaires that is completed by patients to help assess various aspects of quality of life as related to neurological disease,^[@bibr22-2055217319827618]^ has been validated in MS^[@bibr23-2055217319827618]^ and employed to assess self-report SCCs in other research protocols.^[@bibr24-2055217319827618]^ We hypothesized that PROs assessed by Neuro-QoL reflecting perceived cognitive dysfunction in patients with MS would be associated with regional brain volume loss in ROIs, such as the thalami, basal ganglia, amygdalae and hippocampi, as measured by automated volumetric quantitation from standard of care magnetic resonance imaging (MRI).
Methods {#sec2-2055217319827618}
=======
Participants {#sec3-2055217319827618}
------------
De-identified PROs and NeuroQuant volumetric data were gathered by retrospective chart review. All patients were seen at the Rocky Mountain MS Center between May 2014 and October 2016 and had a diagnosis of MS made by neuro-immunology trained faculty following McDonald criteria. As part of standard of care, all patients completed a battery of PROs, including those related to upper and lower mobility, mood, cognitive concerns, and other disease-related symptoms. Standard of care quantitative MRIs were also performed in each patient, and automated volumetric quantitation was performed using NeuroQuant software. For purposes of the current analysis, we reviewed records of patients who underwent quantitative brain MRIs within 90 days of completing the Neuro-QoL short forms. Research studies using the clinical database of de-identified PROs and NeuroQuant volumetric data were approved under Colorado Multiple Institutional Review Board \#14-0394. All patients who were seen at the Rocky Mountain MS Center between May 2014 and October 2016, met criteria for diagnosis of MS, completed PROs and had NeuroQuant MRI within 90 days of completing PROs were included in the study.
Assessing SCCs {#sec4-2055217319827618}
--------------
SCCs were assessed using Neuro-QoL, which includes short forms on anxiety, depression, fatigue, upper and lower extremity functions, applied cognition including executive function and general concerns, emotional and behavioral dyscontrol, positive affect and wellbeing, sleep disturbance, social participation and satisfaction and disease stigma. Specific questions per domain are rated by patients on a five-point scale (e.g. 'never' to 'very often'). Raw scores from the 'Applied Cognition: General Cognitive Concerns' (GCC) short form were used as a marker of SCC. The GCC short form includes eight questions related to applied cognition such as 'my thinking was slow', or 'I had trouble thinking clearly' with a total raw score ranging from 8 to 40. This section has been shown in a previous analysis to be a strong factor that accounts for much of the variance in Neuro-QoL responses for our samples,^[@bibr25-2055217319827618]^ and assesses for subjective concerns in processing speed and working memory. As a comparator, we also analyzed scores from the 'Lower Extremity Function (Mobility)' (LEF) short form. Despite overlap in ROIs related to physical disability and cognitive impairment in MS, we used this domain for comparison to allow added specificity in testing our primary hypothesis. This short form also includes eight questions answered with a five-point scale for lower extremity functions, a 1 denoting 'unable to do' and 5 able to do 'without any difficulty' with a total raw score also ranging from 8 to 40. Neuro-QoL also provides the opportunity to convert raw scores into normalized T-scores, which help compare the respondent to either a clinical population or healthy normative population. In order to help simplify interpretation of findings without requiring comparison with a normative sample, raw scores were used for all analyses.
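As a concrete illustration of the raw-score computation described above, here is a minimal sketch (Python; the function name and validation behavior are assumptions for illustration, not part of the Neuro-QoL scoring manual):

```python
def neuro_qol_raw_score(responses):
    """Compute a Neuro-QoL short-form raw score: eight items,
    each rated 1-5, summed to a total between 8 and 40."""
    if len(responses) != 8:
        raise ValueError("short form expects exactly 8 item responses")
    if any(r not in range(1, 6) for r in responses):
        raise ValueError("each response must be an integer from 1 to 5")
    return sum(responses)

# e.g. a respondent answering 'sometimes' (3) on every GCC item scores 24
assert neuro_qol_raw_score([3] * 8) == 24
```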
MRI acquisition and analysis {#sec5-2055217319827618}
----------------------------
All MRI was obtained as standard of care at our institution, which includes a standardized sagittal three-dimensional (3D) T1 acquisition for brain volumetric analysis generally following recommended NeuroQuant parameters with few modifications. MRI parameters for each scanner are as follows. Siemens Symphony Tims 1.5 T scanner: sagittal 3D T1 magnetization-prepared rapid gradient echo, TR = 1890 ms, TE = minimum, TI = 1100 ms, flip angle = 8, matrix = 192 × 192, FOV = 240 mm^2^, slice thickness = 1 mm. Philips Achieva 1.5 T scanner: sagittal 3D T1 fast field echo, TR = shortest, TE = 4 ms, flip angle = 8, matrix = 192 × 192, FOV = 240 mm^2^, slice thickness = 1 mm. Philips Achieva 3.0 T scanner: sagittal 3D T1 fast field echo, TR = shortest, TE = shortest, flip angle = 9, matrix = 192 × 192, FOV = 240 mm^2^, slice thickness = 1 mm. GE Discovery 750W 3.0 T scanner: sagittal 3D T1 inversion recovery fast spoiled gradient echo, inversion time = 600 ms, TE = minimum (full echo), flip angle = 8, matrix = 192 × 192, FOV = 240 mm^2^, slice thickness = 1 mm.
All sagittal 3D T1 volumetric images were analyzed using NeuroQuant software, which is a fully automated method for quantifying brain structures shown to have significant statistical agreement in calculating brain volumes in MS to other validated methods such as SIENAX.^[@bibr26-2055217319827618]^ All processing was performed in standard fashion as described in <https://www.cortechslabs.com/resources/installed-system/> at the time of image acquisition per clinical protocol at our institution. NeuroQuant analyzes a high resolution non-contrasted T1-weighted 3D sagittal MRI and constructs a segmentation-based measurement of both cortical and subcortical volumes. The software corrects for a number of factors, deletes non-brain tissue using its active contour model and separates various anatomical structures using a probabilistic atlas. NeuroQuant then compares volumes to a normative database adjusting for age, gender and intracranial volume.^[@bibr26-2055217319827618],[@bibr27-2055217319827618]^ Using a customized data retrieval pipeline, calculated volumes for each brain region were automatically extracted from the NeuroQuant processing server for storage in the study database.
Statistical analysis {#sec6-2055217319827618}
--------------------
Linear regression was used to analyze the relationship between SCC and brain ROI volumes (cortical white matter, cortical gray matter, thalami, basal ganglia, amygdalae and hippocampi) after inclusion of relevant covariates as explained below. All brain volumes were standardized to intracranial volume. We also ran a comparison analysis for specificity; in this second set of analyses, we examined self-reported lower extremity mobility and the same brain ROI volumes and covariates. All statistical analyses were performed using SPSS software (IBM Corp, released 2016. IBM SPSS Statistics for Windows, version 24.0. Armonk, NY, USA: IBM Corp.) using a *P*≤0.05 threshold.
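For concreteness, the sketch below reproduces the structure of this model in Python with statsmodels; the original analysis was run in SPSS, and all column names (`thalamus_vol`, `icv`, `gcc_raw`, and so on) are hypothetical stand-ins for the study database fields:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient; column names are assumptions, not the study's actual schema.
df = pd.read_csv("patients.csv")

# Standardize the ROI volume to intracranial volume, as described above.
df["thalamus_std"] = df["thalamus_vol"] / df["icv"]

# SCC (GCC raw score) predicting thalamic volume, with the five retained covariates.
model = smf.ols(
    "thalamus_std ~ gcc_raw + age + disease_duration + gender"
    " + depression_raw + fatigue_raw",
    data=df,
).fit()
print(model.summary())  # the t and P for gcc_raw correspond to the reported test
```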
Relevant covariates were chosen for both theoretical and statistical purposes. Age, disease-modifying treatment, disease severity, disease duration^[@bibr6-2055217319827618]^ and gender^[@bibr28-2055217319827618]^ have been associated with differences in brain volume, while fatigue^[@bibr18-2055217319827618]^ and depression^[@bibr17-2055217319827618]^ have been associated with differences on measures of cognition. Theoretically, we considered age (in years), disease severity (patient determined disease steps; PDDS), disease-modifying treatment, disease duration (in years), gender, self-reported depression (as measured by Neuro-QoL depression short form), and self-reported fatigue (as measured by Neuro-QoL fatigue short form) as potential covariates. Statistically, univariable linear regression analyses with a lax significance value (*P*≤0.10) were used to identify which theoretically determined covariates were predictive of our outcome variables (ROI volumes). Age, PDDS, disease duration, gender and fatigue were found to be significant covariates following this method. However, collinearity analyses revealed a strong relationship between PDDS and disease duration. Given that disease duration was more strongly predictive of ROI volumes than PDDS, PDDS was dropped as a relevant covariate. Depression did not significantly predict any of the ROI volumes, but given its well-established role in SCC in MS,^[@bibr17-2055217319827618]^ it was included as a covariate. Therefore, the following covariates were included in all subsequent analyses: age, disease duration, gender, depression and fatigue.
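Continuing the sketch above, the covariate-screening procedure can be expressed as a univariable pass at the lax threshold followed by a collinearity check; variance inflation factors stand in here for the unspecified collinearity diagnostic, and gender is assumed coded 0/1 so every candidate is numeric:

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

candidates = ["age", "pdds", "disease_duration", "gender", "depression_raw", "fatigue_raw"]

# Step 1: univariable screen of each candidate against the outcome (P <= 0.10).
kept = []
for cov in candidates:
    fit = sm.OLS(df["thalamus_std"], sm.add_constant(df[[cov]])).fit()
    if fit.pvalues[cov] <= 0.10:
        kept.append(cov)

# Step 2: collinearity check; drop the weaker member of any strongly related
# pair (PDDS versus disease duration in the analysis above).
X = sm.add_constant(df[kept])
vifs = {c: variance_inflation_factor(X.values, i) for i, c in enumerate(X.columns)}
print(kept, vifs)
```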
Results {#sec7-2055217319827618}
=======
Patient population {#sec8-2055217319827618}
------------------
We identified 921 unique patients with a diagnosis of MS who underwent quantitative brain MRIs during the selected study period. Of those, 537 had completed PROs, and 158 participants had completed PROs within ±90 days of imaging (see [Table 1](#table1-2055217319827618){ref-type="table"}).
######
Patient population and demographics.

Characteristic Study population (*N*=158) Percentage
------------------------------- ---------------------------- ------------
Demographics
Age (years ± SD) 49.66 ± 11.54
Sex
Men 30 19.0
Women 128 81.0
Race
Caucasian 127 80.4
Hispanic/Latino 6 3.8
Black/African American 4 2.5
Other 6 3.8
Unknown 19 12.0
Disease-modifying therapy
None 13 8.2
Injectable 24 15.2
Oral 53 33.5
Intravenous 68 43.0
Disease duration (years ± SD) 12.33 ± 8.26
Raw scores on the Neuro-QoL assessment of GCC ranged from 8 to 40 points with a mean score of 29.9 ± 9.2, and LEF scores ranged from 12 to 40 points with a mean score of 35.9 ± 6.0.
Association of SCC with thalamic and cortical gray matter volumes {#sec9-2055217319827618}
-----------------------------------------------------------------
Linear regression supported a relationship between SCCs and normalized thalamic and cortical gray matter volumes after controlling for disease duration, age, gender, depression and fatigue. Greater self-reported SCCs were associated with smaller thalamic volumes, t~150~ = 2.406, *P* = 0.017. Independent of the covariates, SCC accounted for a modest amount of variance in thalamus/intracranial volume, partial *r*^2^ = 0.038 (see [Figure 1](#fig1-2055217319827618){ref-type="fig"}). Similarly, greater SCCs were associated with smaller cortical gray matter volume after accounting for covariates, t~150~ = 2.777, *P* = 0.006, partial *r*^2^ = 0.050 ([Figure 2](#fig2-2055217319827618){ref-type="fig"}).
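These partial *r*^2^ values can be sanity-checked against the standard identity relating a regression coefficient's *t* statistic to the variance it uniquely explains; with the 150 residual degrees of freedom implied by the subscripts,

$$r^2_{\text{partial}} = \frac{t^2}{t^2 + \mathrm{df}}, \qquad \frac{2.406^2}{2.406^2 + 150} \approx 0.037, \qquad \frac{2.777^2}{2.777^2 + 150} \approx 0.049,$$

consistent with the reported 0.038 and 0.050 to within rounding of the *t* values.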
{#fig1-2055217319827618}
{#fig2-2055217319827618}
SCC was not significantly associated with any other ROIs after controlling for covariates. By comparison, LEF short form scores were not significantly associated with any ROIs that we examined (all *P* \> 0.05) (see [Table 2](#table2-2055217319827618){ref-type="table"}).
######
Results of linear regression model relating patient-reported outcomes to standardized brain regions of interest with age, disease duration, gender, depression, and fatigue as covariates.

  Region of interest         General cognitive concerns                                                       Lower extremity function
                             β         t         *P*     partial *r*^2^   95% CI lower   95% CI upper         β         t         *P*     partial *r*^2^   95% CI lower   95% CI upper
  -------------------------- --------- --------- ------- ---------------- -------------- --------------      --------- --------- ------- ---------------- -------------- --------------
  Amygdalae and hippocampi   0.057     0.744     0.458   0.004            --0.000008     0.000018             0.055     0.803     0.423   0.004            --0.000011     0.000025
  Basal ganglia              0.125     1.591     0.114   0.017            --0.000004     0.000039             --0.011   --0.154   0.878   \<0.001          --0.000032     0.000027
  Thalami                    0.223     2.406     0.017   0.038            0.000005       0.000055             0.060     0.717     0.474   0.003            --0.000022     0.000047
  Cortical white matter      --0.026   --0.249   0.804   \<0.001          --0.000723     0.000561             0.003     0.028     0.977   \<0.001          --0.000852     0.000877
  Cortical gray matter       0.240     2.777     0.006   0.050            0.000250       0.001482             0.135     1.732     0.085   0.020            --0.000104     0.001581
Discussion {#sec10-2055217319827618}
==========
SCCs have been little studied in MS. Our results suggest a possible association between self-reported SCCs and reduced volume of thalamic and cortical gray matter, with SCC accounting for 3.8% of the variance in thalamic volume and 5.0% of the variance in cortical gray matter volume. Reduced thalamic^[@bibr9-2055217319827618]^ and cortical gray matter^[@bibr8-2055217319827618]^ volumes have been implicated in objective cognitive impairment in MS. Certain cognitive domains feature prominently in the Neuro-QoL GCC short form used in our study as a marker of SCC, including processing speed, attention and episodic memory. Objective measures of cognitive performance in patients with MS have shown an association of both reduced cortical gray matter and thalamic volumes with slowed cognitive processing speed^[@bibr29-2055217319827618]^ and an association of reduced thalamic volume with reduced episodic memory performance.^[@bibr30-2055217319827618]^ To our knowledge, our study is the first to show an association between increased SCCs and reduced volumes in ROIs that have correlated with objective cognitive dysfunction in MS. Still, not all ROIs implicated in cognitive dysfunction in MS reached significance in our study, including the amygdalae, hippocampi and basal ganglia.
In MS, as in other neurological disorders that can cause cognitive impairment, SCCs have not clearly correlated with objective cognitive impairment on neuropsychological testing,^[@bibr17-2055217319827618],[@bibr18-2055217319827618]^ and there are conflicting data.^[@bibr16-2055217319827618]^ Our study suggests a relationship between SCCs and changes on volumetric imaging, lending possible neuro-anatomical significance to self-report SCCs and patients' subjective experiences as well as to the use of Neuro-QoL as an evaluation tool. Further research will be needed to explore differences between patients who report SCCs with or without objective impairment on neuropsychological testing and the volumetric patterns of these groups.
Two previous studies reported on the association between SCC and brain volumes in MS. A 2006 study by Benedict and Zivadinov,^[@bibr19-2055217319827618]^ which used the MSNQ to evaluate SCC, showed no association between self-report SCC and MRI outcomes but did show an association between informant-report SCC and increased T1 and T2 lesion volume and reduced whole brain parenchymal fraction. There are likely to be a number of reasons why that study did not show an association between self-report SCCs and brain volumes and ours did. In our study, a greater number of participants underwent volumetric imaging (158 versus 27), and we calculated regional volumes for ROIs, which yielded our significant results, rather than employing whole brain parenchymal fraction and global lesion volume. We also utilized a different tool for evaluating SCC, Neuro-QoL, and employed a continuous raw score for SCC rather than a binary cut-off, which may have allowed greater sensitivity to subtle subjective differences. We did not collect informant-report SCC, which could have yielded additional results. Another study^[@bibr20-2055217319827618]^ reported an association between increased hippocampal volumes and increased SCCs; that study also had fewer participants and employed a scale for measuring SCC that has not been validated in MS.
The relationship between cortical gray matter atrophy and cognitive dysfunction in MS is well established.^[@bibr8-2055217319827618]^ An earlier study identified more extensive reductions in cortical gray matter volume in bilateral frontal, temporal and parietal cortices in MS patients with lower performance on neuropsychological testing,^[@bibr31-2055217319827618]^ while other studies that used different criteria for cognitive impairment showed preferential reduced volume of temporo-occipital gray matter.^[@bibr32-2055217319827618]^ While our study did not distinguish between different cortical gray matter regions, it is consistent with the increasing evidence for the role of cortical gray matter atrophy in cognitive dysfunction in MS.
Why thalamic atrophy is associated with cognitive impairment in MS is a field of ongoing investigation. A recent study found that decreased cognitive processing speed was related to localized atrophy of the anterior and superior surface of the left thalamus,^[@bibr33-2055217319827618]^ indicating the likely involvement of anterior thalamic nuclei that help make up the Papez circuit, which is known to be involved in episodic memory. Functional MRI studies have shown increased activation of the thalamus in patients with MS during accurate encoding of visuospatial tasks.^[@bibr30-2055217319827618]^ Diffusion tensor imaging failed to show a relationship between changes in white matter tracts of the Papez circuit and cognitive impairment,^[@bibr34-2055217319827618]^ possibly indicating direct involvement of the gray matter structures. Alternatively, thalamic volume may simply serve as a marker of white matter disease throughout the brain due to its diffuse networks and rich reciprocal connectivity, also accounting for its association with non-cognitive disability.^[@bibr35-2055217319827618]^
The most prominent limitation of our study is the lack of objective cognitive testing, which limits our ability to understand the relationship of SCCs to objective cognitive impairment; examining this relationship is planned in future work. While SCCs may capture a subset of patients with objective impairment, the lack of clear correlation of SCCs with objective cognitive impairment on neuropsychological testing^[@bibr17-2055217319827618],[@bibr18-2055217319827618]^ may lend support to SCCs existing separate from, or prior to, objective impairment. Other limitations of our study include not differentiating between MS subtypes and the lack of other imaging analyses such as brain parenchymal fraction and T2 lesion volume load. Our effect size was also modest, probably for a number of reasons. While most research studies that investigate brain volume with MRI use more time-consuming volumetric analyses such as SIENAX and FIRST, we employed the fully automated NeuroQuant image processing pipeline, which did not require significant human post-processing.
It is challenging to study subjective concerns, which are based entirely on patients' subjective report, adding another layer of variability; subjective human experience, in its uniquely individual nature, is beyond standardization. Self-report SCCs may also be confounded by a host of other factors, including depression, anxiety and fatigue. Deficits of insight into cognitive impairment, that is, deficits in metacognition, a person's ability to think about their own thinking, may have different patterns of injury or be related to more advanced impairment. This may account for the greater variability at the extremes, whereby a highly functioning patient with nearly intact cognition will report subjective impairment due to decline from a previous baseline that may not be detectable in any structural changes, while another patient with advanced pathological impairment may not be aware of, or accurately report, their subjective deficits;^[@bibr16-2055217319827618]^ this may explain the variability of thalamic volumes at the extremes of patient-reported scoring of cognitive concerns (see [Figure 1](#fig1-2055217319827618){ref-type="fig"}).
We expected our comparator, LEF, to relate to some motor ROIs such as cortical white matter volume but did not find our comparator to relate to any ROIs. The NeuroQuant software may not be as accurate at detecting cortical white matter volume as it is at calculating certain gray matter structures such as the thalamus, given their more clearly delineated borders and centralized location. Also, the spinal cord, which contributes significantly to lower extremity disability in MS, was not included in our analysis. Alternatively, in our sample, patients had higher scores on LEF than on GCC, indicating a lesser degree of subjective lower extremity dysfunction and perhaps making it more challenging to associate this subjective dysfunction with neuroanatomical changes.
Despite these limitations, our study may lend neuroanatomical significance to patient-reported SCCs by associating them with reduced thalamic and cortical gray matter volumes, regions that have been shown to correlate with objective measures of cognitive dysfunction in MS.
The authors would like to extend their appreciation to the patients, staff and clinicians at the Rocky Mountain MS Center and the COPIC Medical Foundation.
Conflict of Interests {#sec11-2055217319827618}
=====================
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Enrique Alvarez has consulted for the following companies: Biogen, EMD Serono, Genzyme, Genentech, TG Pharmaceuticals, and Novartis. He received research funding from the Rocky Mountain MS Center, Biogen, Novartis, Acorda, and TG pharmaceuticals.
Justin M Honce has consulted for and/or received research support from Genentech, Novartis and Biogen.
Timothy Vollmer has received compensation for activities such as advisory boards, lectures and consultancy with the following companies and organizations: Academic CME; Alcimed; Anthem Blue Cross; Genentech/Roche; Biogen IDEC; Novartis; CellGene; Epigene; Rocky Mountain MS Center; GLG Consulting; Ohio Health; TG Therapeutics; Topaz Therapeutics; Dleara Lawyers; Teva Neuroscience. He has received research support from the following: Teva Neuroscience; NIH/NINDS; Rocky Mountain MS Center; Actelion; Biogen; Novartis, Roche/Genentech, UT South Western and TG Therapeutics, Inc.
Luis D Medina has received research support from the Alzheimer's Association.
The other authors declare no relevant disclosures or conflicts of interest.
Funding {#sec12-2055217319827618}
=======
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Funding was provided by the COPIC Medical Foundation.
[^1]: These authors contributed equally to this work.
The theme of this year’s Idea Translation Lab course is PLASTIC, tying in with Science Gallery’s upcoming exhibition of the same name. In the Idea Translation Lab module, Trinity College Dublin undergraduate students work at the boundaries of art, science & engineering to develop original ideas and projects where these disciplines meet. It is a cross-disciplinary course in which students develop entrepreneurial, creative and critical thinking skills through collaborative group projects.
Throughout the Hilary Term, the students have used Science Gallery Dublin as a lab to explore the intersection of art and science and the research and development of new materials; with their final projects, they propose a range of material futures we might inhabit.
During this event, the students will present an exhibition of their projects, discuss them with visitors, and pitch their ideas in an 8-minute presentation to an expert judging panel and audience.
11:00 – 12:30 Exhibition of student projects in OPEN SHOP (OPEN LAB exhibition space)
12:15 – 13:00 Presentations, audience Q&A & feedback from judges (Paccar Theatre)
PROJECTS:
OCEO: Sustainable Snacks on Demand
THE PLASTIC AGE: Polymerist Artifacts from the 23rd Century
NO PLASTIC BEYOND THIS POINT: Are you a hazard to your environment?
Abstract
In studies with paired samples of continuous data, the mean difference is usually compared to zero. Instead of a paired t-test, a Bayesian analysis of the mean difference is also possible. A traditional paired t-test of two treatment modalities on hours of sleep showed a significant difference, with t = 3.184 and a p-value of 0.011. A Bayesian paired t-test provided support in favor of the traditional test, with a Bayes factor of 0.178. The 95% intervals of the (1) traditional, (2) Bayesian and (3) bootstrap analyses were, respectively:
- traditional: between 0.51517 and 3.04483,
- Bayesian: between 0.2809 and 3.2791,
- bootstrap: between 0.76025 and 2.8400.
The traditional t-test confidence interval was wider than the bootstrap t-test confidence interval, while the Bayesian 95% credible interval was the widest. Some overfitting in the traditional and Bayesian intervals cannot be ruled out, and in the Bayesian interval this may be more so than in the traditional one. Nonetheless, the amount of overfitting is limited, with confidence intervals between ≈2.3 and ≈2.8.
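A minimal sketch of how the three kinds of interval can be computed, using the classic Student sleep-data differences as stand-in input (so the numbers will not reproduce the chapter's exactly). The Bayesian interval below uses the noninformative Jeffreys prior, under which it essentially coincides with the classical interval; reproducing the wider credible interval reported above would require the chapter's more informative prior:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d = np.array([1.2, 2.4, 1.3, 1.3, 0.0, 1.0, 1.8, 0.8, 4.6, 1.4])  # paired differences

n, mean, sd = len(d), d.mean(), d.std(ddof=1)

# (1) classical 95% CI from the paired t-test
half = stats.t.ppf(0.975, n - 1) * sd / np.sqrt(n)
ci_classical = (mean - half, mean + half)

# (2) Bayesian 95% credible interval: posterior draws of mu under Jeffreys' prior
sigma2 = (n - 1) * sd**2 / stats.chi2.rvs(n - 1, size=100_000, random_state=rng)
mu = stats.norm.rvs(mean, np.sqrt(sigma2 / n), random_state=rng)
ci_bayes = tuple(np.percentile(mu, [2.5, 97.5]))

# (3) bootstrap percentile 95% CI of the mean difference
boot = np.array([rng.choice(d, n, replace=True).mean() for _ in range(10_000)])
ci_boot = tuple(np.percentile(boot, [2.5, 97.5]))

print(ci_classical, ci_bayes, ci_boot)
```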
University of Aberdeen partners with IBM to drive innovation in cognitive computing
Aberdeen, Scotland - 17 Mar 2016: The University of Aberdeen has become the first Scottish university to partner with IBM (NYSE: IBM) to offer students and staff access to Watson Engagement Advisor, one of IBM's cognitive computing technology solutions.
The partnership will provide students within the University’s Department of Computing Science the opportunity to gain hands-on experience of IBM’s system, which is widely regarded as world leading in the field of cognitive computing. This partnership will not only allow the University to expand its curriculum and help nurture the next generation of innovators, it will also provide exciting research opportunities that will further cement its status as a university at the forefront of work in this area.
Cognitive computing systems learn and interact naturally with people to extend human capabilities. These systems also work with experts to make sense of complex data in order to facilitate better decisions.
Watson represents a new era of computing based on its ability to interact in natural language, process vast amounts of disparate data, and learn from each interaction. IBM has worked closely with the world’s leading academic institutions ever since the development and introduction of Watson.
As a result of this partnership with IBM, Aberdeen becomes one of only four UK Institutions to have access to the Watson Engagement Advisor solution and its experts. It will initially be used by students undertaking the Department of Computing Science’s Semantic Web Engineering module, which is taught by Dr Jeff Z. Pan, who is the leader of the Knowledge Technology group in the department. It will eventually be offered more widely across a range of relevant programmes.
Academics at the University are already undertaking cutting-edge cognitive computing research using Watson.
Researchers are collaborating with a team of IBM scientists on the EU Marie Curie K-Drive project, which investigates ways of understanding and utilising big data and knowledge graphs for applications, such as those in the treatment of cancers. This involves using IBM Watson's question answering, knowledge representation and dialogue capabilities. The results of the work will also form the basis of new research proposals from the University for the EU Horizon 2020 Programme.
Dr Pan, the coordinator of the K-Drive project, said: “With access to Watson we are providing the next generation of students with experience of the latest techniques in cognitive computing, which puts them in a strong position when it comes to a career in the industry.
“The partnership with IBM is an exciting opportunity to advance our research in this area. Cognitive computing is empowering human decision-making processes by understanding and exploiting data which is structured and unstructured, and our research is focused on how to make the best use of both types of data.”
IBM Academic Initiative Leader, Paul Fryer, said: "Cognitive represents an entirely new model of computing that includes a range of technology innovations in analytics, natural language processing and machine learning. The collaboration between IBM and the University of Aberdeen, which builds on a long-standing relationship, aims to help nurture the next generation of innovators; and is the first initiative of this type in Scotland."
About the University of Aberdeen
Founded in 1495, the University of Aberdeen is Scotland's third oldest and the UK's fifth oldest university, and is consistently ranked among the top 1% of the world's universities.
The University’s Department of Computing Science has a long-standing reputation in Intelligent Systems, with world recognised expertise in areas such as knowledge technologies, multi-agent systems and natural language generation.
In the most recent UK research quality assessment (REF 2014) the department was ranked 16th in the UK in terms of research intensity, with 70% of its research assessed to be world leading or internationally excellent in terms of originality, significance and rigour, and 100% being internationally recognised.
The K-Drive project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 286348.
For more information about Dr Jeff Z. Pan’s research, visit: http://homepages.abdn.ac.uk/jeff.z.pan/pages/
IBM Watson: Pioneering a New Era of Computing
Watson is the first open cognitive computing technology platform and represents a new era in computing where systems understand the world in the way that humans do: through senses, learning, and experience. Watson continuously learns, gaining in value and knowledge over time, from previous interactions. With the help of Watson, organizations are harnessing the power of cognitive computing to transform industries, help professionals do their jobs better, and solve important challenges.
To advance Watson, IBM has two dedicated business units: Watson, established for the development of cloud-delivered cognitive computing technologies that represent the commercialization of "artificial intelligence" or "AI" across a variety of industries, and Watson Health, dedicated to improving the ability of doctors, researchers, insurers and other related health organizations to surface new insights from data and deliver personalized healthcare.
For more information on IBM Watson, visit: ibm.com/Watson and ibm.com/press/watson
For more information on the Watson Developer Cloud, visit: http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/
Join the conversation at #ibmwatson. Follow Watson on Facebook and see Watson on YouTube and Flickr.
---
abstract: 'A model of Dark Matter is proposed in which the Dark Matter is a classical color field. The color fields are invisible, as they may interact only with colored elementary particles such as the ’t Hooft-Polyakov monopole. A comparison with the Universal Rotation Curve is carried out.'
author:
- 'Vladimir Dzhunushaliev [^1]'
title: Classical color fields as a dark matter candidate
---
Introduction
============
In astrophysics and cosmology, Dark Matter (DM) is matter of unknown composition that does not emit or reflect enough electromagnetic radiation to be observed directly, but whose presence can be inferred from gravitational effects on visible matter. According to present observations of structure larger than galaxy-sized as well as Big Bang cosmology, DM accounts for the vast majority of mass in the observable universe. Among the observed phenomena consistent with DM observations are the rotational speeds of galaxies and orbital velocities of galaxies in clusters, gravitational lensing of background objects by galaxy clusters such as the Bullet cluster, and the temperature distribution of hot gas in galaxies and clusters of galaxies. DM also plays a central role in structure formation and galaxy evolution, and has measurable effects on the anisotropy of the cosmic microwave background. All these lines of evidence suggest that galaxies, clusters of galaxies, and the universe as a whole contain far more matter than that which interacts with electromagnetic radiation: the remainder is called the “dark matter component”.
The first to provide evidence and infer the existence of a phenomenon that has come to be called “dark matter” was F. Zwicky [@Zwicky]. He applied the virial theorem to the Coma cluster of galaxies and obtained evidence of unseen mass.
According to results published in [@Clowe:2006eq], dark matter has been observed separate from ordinary matter through measurements of the Bullet Cluster, actually two nearby clusters of galaxies that collided about 150 million years ago. Researchers analyzed the effects of gravitational lensing to determine the total mass distribution in the pair and compared that to X-ray maps of hot gases, thought to constitute the large majority of ordinary matter in the clusters. The hot gases interacted during the collision and remain closer to the center. The individual galaxies and DM did not interact and are farther from the center. The Bullet Cluster data have therefore recently been offered as direct evidence for the existence of dark matter.
The numerous astrophysical observations, e.g., Doppler measurements of rotation velocities in disk galaxies, have established the failure of the classical Newtonian theory, if only visible matter is taken into account [@Combes95]. Historically, theoretical concepts addressing this problem can be subdivided in two categories. The first category comprises the DM theories [@BiTr94], whereas the second group assumes that Newton’s gravitational law requires modification [@Milgrom:1983ca].
DM theories are based on the hypothesis that there exist significant amounts of invisible (non-baryonic) matter in the universe, interacting with ordinary visible matter only via gravity. Being empirically very successful, DM has become a widely accepted cornerstone of the contemporary cosmological standard model [@Sa99]. Nevertheless, it must also be emphasized that until now DM has been detected only indirectly, by means of its gravitational effects on the visible matter or the light.
In order to explain the existence of a region where the velocity of stars is $\approx const$, it is necessary (in the DM framework) to have a sufficiently extended region filled with DM. For the classical SU(3) gauge theory this is the ordinary situation: the classical Yang-Mills equations ordinarily give us solutions with an infinite mass [@Obukhov:1996ry], i.e. the mass density falls off no faster than $\rho \propto r^{-2}$. Indirectly this problem is connected with the confinement problem in quantum chromodynamics, which states that the field distribution between a quark and an antiquark is a flux tube with a finite linear energy density. Such a flux tube cannot be obtained in the framework of the classical Yang-Mills theory; it is possible only in the quantum Yang-Mills theory.
The classical gauge theories are a nonlinear generalization of Maxwell electrodynamics. The non-Abelian gauge theories were invented by Yang and Mills in the 1950s. For most of this period it was not known whether any of the interactions observed in nature can be described by a non-Abelian gauge theory. Nevertheless, the elegance of these theories attracted interest. The Weinberg-Salam model ($SU(2) \times U(1)$ gauge theory) and quantum chromodynamics ($SU(3)$ gauge theory) are the two existing Yang-Mills theories of real phenomenological importance. These theories can be formulated in terms of Feynman path integrals, i.e. functional integrals over all classical field configurations weighted by a factor $\exp(-\mathrm{action})$. If one knew everything about classical field configurations, then in principle all questions concerning the quantum theory could be answered. Partial information about classical fields might yield, at least, some insight into the quantum theory. For a review of classical solutions of Yang-Mills theories see Ref. [@Actor:1979in].
In this paper we use the solutions of the classical SU(3) gauge theory for the explanation of the rotational velocity of stars outside the core of the galaxy.
Initial equations for the SU(3) gauge field
===========================================
We consider the classical SU(3) Yang-Mills gauge field $A^a_\mu$. The field equations are $$D_\nu F^{a\mu \nu} = 0
\label{1-10}$$ where $F^a_{\mu \nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu$ is the field strength tensor; $f^{abc}$ are the SU(3) structural constants; $a,b,c = 1, 2, \cdots , 8$ are color indices; $g$ is the coupling constant. We use the following ansatz [@corrigan] $$\begin{aligned}
A_0^2 &=& - 2 \frac{z}{gr^2} \chi(r), \quad
A_0^5 = 2 \frac{y}{gr^2} \chi(r), \quad
A_0^7 = - 2 \frac{x}{gr^2} \chi(r),
\label{1-30}\\
A^2_i &=& 2 \frac{\epsilon_{3ij} x^j}{gr^2} \left[ h(r) + 1 \right] ,
\label{1-40}\\
A^5_i &=& -2 \frac{\epsilon_{2ij} x^j}{gr^2} \left[ h(r) + 1 \right] ,
\label{1-50}\\
A^7_i &=& 2 \frac{\epsilon_{1ij} x^j}{gr^2} \left[ h(r) + 1 \right]
\label{1-60}\end{aligned}$$ for the $SU(2) \in SU(3)$ components of the gauge field and $$\begin{aligned}
\left( A_0 \right)_{\alpha , \beta} &=& 2 \left(
\frac{x^\alpha x^\beta}{r^2} - \frac{1}{3} \delta^{\alpha \beta}
\right) \frac{w(r)}{gr} ,
\label{1-80}\\
\left( A_i \right)_{\alpha \beta} &=& 2 \left(
\epsilon_{is \alpha} x^\beta + \epsilon_{is \beta} x^\alpha
\right) \frac{x^s}{gr^3} v(r) ,
\label{1-90}\end{aligned}$$ for the coset components; $i=1,2,3$ are space indices; $\epsilon_{ijk}$ is the absolutely antisymmetric Levi-Civita tensor; the functions $\chi(r), h(r), w(r), v(r)$ are unknown functions. The coset components $\left( A_\mu \right)_{\alpha \beta}$ in the matrix form are written as $$\left( A_\mu \right)_{\alpha \beta} =
\sum \limits_{a=1,3,4,6,8} A_\mu^a \left( T^a \right)_{\alpha , \beta}
\label{1-110}$$ where $T^a = \frac{\lambda^a}{2}$ are the SU(3) generators, $\lambda^a$ are the Gell-Mann matrices. The corresponding equations are $$\begin{aligned}
x^2 w'' &=& 6w \left( h^2 + v^2 \right) - 12 h v \chi,
\label{1-120}\\
x^2 \chi'' &=& 2 \chi \left( h^2 + v^2 \right) - 4 h v w,
\label{1-130}\\
x^2 v'' &=& v^3 - v + v \left( 7 h^2 - w^2 - \chi^2 \right) + 2h w \chi,
\label{1-140}\\
x^2 h'' &=& h^3 - h + h \left( 7 v^2 - w^2 - \chi^2 \right) + 2 v w \chi
\label{1-150}\end{aligned}$$ here the dimensionless radius $x = r/r_0$ is introduced, $r_0$ is a constant.
Numerical investigation
=======================
In this section we present a typical numerical solution of Eqs. (1-120)–(1-150). We investigate the case $\chi = h = 0$, for which the system reduces to $$\begin{aligned}
x^2 w'' &=& 6w v^2 ,
\label{2-10}\\
x^2 v'' &=& v^3 - v - v w^2 .
\label{2-15}\end{aligned}$$ For the numerical investigation we have to start from the point $x = \delta \ll 1$. Here we have the following approximate solution $$v \approx 1 + v_2 \frac{x^2}{2}, \quad
w \approx w_3 \frac{x^3}{2}, \quad x \ll 1
\label{2-20}$$ where $v_2, w_3$ are arbitrary constants. The typical behavior of functions $v(x)$ and $w(x)$ is presented in Fig. \[fg1\].
The mass density $\rho(x)$ is $$\rho = \frac{1}{2 c^2} \left(
- F^a_{0i} F^{a0i} + \frac{1}{4} F^a_{ij} F^{aij}
\right) =
\frac{1}{g^2 c^2 r_0^4} \left[
4 \frac{{v'}^2}{x^2} +
\frac{2}{3} \frac{\left( x w' - w \right)^2}{x^2} +
2 \frac{\left( v^2 - 1 \right)^2}{x^4} +
4 \frac{ v^2 w^2}{x^4}
\right] =
\frac{1}{g^2 c^2 r_0^4} \varepsilon(x)
\label{2-60}$$ where $c$ is the speed of light. The profile of the dimensionless energy density $\varepsilon(x)$ is presented in Fig. \[fg2\].
The rotation curve is defined as $$V^2 = G \frac{m(r)}{r} =
\frac{4 \pi G}{c^2} \frac{1}{r}
\int \limits^r_0 r^2 \rho(r) dr =
\frac{G \hbar}{c} \frac{1}{{g'}^2 r_0^2} \frac{1}{x}
\int \limits_0^x x^2 \varepsilon(x) dx =
\frac{G \hbar}{c} \frac{1}{{g'}^2 r_0^2} \frac{m(x)}{x}
\label{2-70}$$ where $m(r)$ is the mass of the color fields $A^a_\mu$ under sphere with the radius $r = x r_0$, $m(x)$ is the dimensionless mass, ${g'}^2 = g^2 c \hbar$ is the dimensionless coupling constant, $G$ is the Newton gravitational constant.
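Continuing the sketch above, the dimensionless mass $m(x)$ and the rotation-curve profile of Eq. (2-70) follow by quadrature, with $\varepsilon(x)$ transcribed from Eq. (2-60) for $h = \chi = 0$; the physical prefactor $\frac{G \hbar}{c} \frac{1}{{g'}^2 r_0^2}$ is left out:

```python
from scipy.integrate import cumulative_trapezoid

x = np.linspace(delta, 50.0, 5000)
v, vp, w, wp = sol.sol(x)

# dimensionless energy density, eq. (2-60) with h = chi = 0
eps = (4 * vp**2 / x**2
       + (2.0 / 3.0) * (x * wp - w)**2 / x**2
       + 2 * (v**2 - 1)**2 / x**4
       + 4 * v**2 * w**2 / x**4)

m = cumulative_trapezoid(x**2 * eps, x, initial=0.0)  # m(x) of eq. (2-70)
v2_profile = m / x                                    # V^2 up to the physical prefactor
```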
The asymptotic behavior of the solution for $x \gg 1$ is $$\begin{aligned}
v(x) &\approx& A \sin \left( x^\alpha + \phi_0 \right),
\label{2-80}\\
w(x) &\approx& \pm \left[
\alpha x^\alpha + \frac{\alpha - 1}{4}
\frac{\cos \left( 2 x^\alpha + 2 \phi_0 \right)}{x^\alpha}
\right] ,
\label{2-90}\\
3 A^2 &=& \alpha (\alpha - 1)
\label{2-100}\end{aligned}$$ with $\alpha > 1$.
The comparison with a Universal Rotation Curve of spiral galaxies
=================================================================
Unfortunately we do not have an analytical solution, and it is very difficult to carry the numerical investigation out to large radii: the coefficient $\frac{G \hbar}{c}$ in Eq. (2-70) is very small, and asymptotically the function $v(x)$ in Eq. (2-80) oscillates strongly (if $r_0$ is very small). Therefore in this section we investigate the rotation curve of gauge DM close to the center and far away from the center.
In Ref. [@Persic:1995ru] a Universal Rotation Curve of spiral galaxies is offered, which describes any rotation curve at any radius with a very small cosmic variance $$V_{URC} \left( \frac{r}{R_{opt}} \right) =
V(R_{opt}) \left[
\left( 0.72 + 0.44 \log \frac{L}{L_*} \right)
\frac{1.97 x^{1.22}}{ \left( x^2 + 0.78^2 \right)^{1.43}} +
1.6\, e^{-0.4(L/L_*)} \frac{x^2}{x^2 + 1.5^2
\left( \frac{L}{L_*} \right)^{0.4}}
\right]^{1/2} {\rm km~s^{-1}}
\label{3-10}$$ where $R_{opt} \equiv 3.2\,R_D$ is the optical radius and $R_D$ is the disc exponential length-scale; $x = r/R_{opt}$; $L$ is the luminosity. We would like to compare the rotation curve for the color fields with the Universal Rotation Curve where, for example, $L/L_* = 1$ $$V_{URC} \left( \frac{r}{R_{opt}} \right) =
V(R_{opt}) \left[
\frac{1.4184 \; x^{1.22}}{ \left( x^2 + 0.78^2 \right)^{1.43}} +
\frac{1.07251 \; x^2}{x^2 + 1.5^2}
\right]^{1/2} {\rm km~s^{-1}}.
\label{3-20}$$ For the Dark Matter the Universal Rotation Curve is $$V_{DM}^2 \left( \frac{r}{R_{opt}} \right) =
V^2(R_{opt}) \frac{1.07251 \; x^2}{x^2 + 1.5^2} \;
{\rm km~s^{-1}}.
\label{3-30}$$ The profiles of $V_{URC}(x), V_{DM}^2(x), V_{LM}^2(x)$ in Fig. \[fg3\] are presented ($V_{DM}^2$ is the rotation curve for the Dark Matter, $V_{LM}^2(x)$ is the rotation curve for the light matter).
At the center $r \approx 0$ the approximate solution has the form (2-20) and the mass density is approximately $$\rho(x) \approx \frac{6 v_2^2}{g^2 c^2 r_0^4}.
\label{3-40}$$ Consequently the rotation curve will be $$V^2 \approx \frac{G \hbar}{c} \frac{1}{{g'}^2} \frac{v_2^2}{r_0^2}
\left( \frac{r}{r_0} \right)^2 \; m~s^{-1} .
\label{3-50}$$ Comparison with Eq. (3-30) for $x \ll 1$ gives $$v_2 \approx 20 \sqrt{\frac{c}{G \hbar}} \; \;
g' \frac{V_{opt}}{R_{opt}} r_0^2 .
\label{3-60}$$ If $r_0$ is very small in comparison with $R_{opt}$ then far away from the center the functions $v(x)$ and $w(x)$ are given by Eqs. (2-80)–(2-90) and the mass density is $$\varepsilon_\infty(x) =
c^2 \rho_\infty(r) \approx \frac{2}{3} \frac{1}{g^2 r_0^4}
\alpha^2 \left( \alpha - 1 \right) \left( 3 \alpha - 1 \right)
\left( \frac{r}{r_0} \right)^{2 \alpha - 4}.
\label{3-70}$$ In this case we can estimate the values of square of speed in the following way $$\begin{aligned}
V^2 &=& \frac{G \hbar}{c} \frac{1}{{g'}^2 r_0^2} \frac{1}{x}
\left(
\int \limits_0^{x_1} x^2 \varepsilon(x) dx +
\int \limits_{x_1}^x x^2 \varepsilon(x) dx
\right) \approx
\left[
\frac{G \hbar}{c} \frac{1}{{g'}^2 r_0^2} \frac{1}{x}
\int \limits_0^x x^2 \varepsilon_\infty(x) dx
\right] - V^2_0 ,
\label{3-73}\\
V^2_0 &=& \frac{G \hbar}{c} \int \limits_0^{x_1} x^2
\left[ \varepsilon_\infty(x) - \varepsilon(x) \right] dx
\label{3-76}\end{aligned}$$ here for the region $x > x_1$ the asymptotic Eq. (3-70) is valid. One can say that $V_0^2$ is a systematic error of the equation $$V^2 = \frac{G \hbar}{c} \frac{1}{{g'}^2 r_0^2} \frac{1}{x}
\int \limits_0^x x^2 \varepsilon_\infty(x) dx
\label{3-78}$$ and the numerical value of $V^2_0$ is determined near the center of the galaxy where, according to Eq. (3-76), the difference $\varepsilon_\infty(x) - \varepsilon(x)$ is maximal. Thus the asymptotic behavior of the rotation curve for the domain filled with the SU(3) gauge field is $$V^2 \approx \frac{2}{3} \frac{G \hbar}{c} \frac{1}{{g'}^2 r_0^2}
\alpha^2 \left( \alpha - 1 \right) \left( 3 \alpha - 1 \right)
\left( \frac{r}{r_0} \right)^{2 \alpha - 2}
- V^2_0 .
\label{3-80}$$ In Fig. \[fg4\] we see that it is possible to choose the parameters $\alpha, r_0, g', V_0$ in such a way that we obtain a very good coincidence between the Universal Rotation Curve and the rotation curve, Eq. (3-80), for a spherically symmetric distribution of the SU(3) gauge field.
Invisibility of color fields
============================
The invisibility of classical color fields is based on the fact that only *colored* elementary particles may interact with the classical non-Abelian gauge fields. The equations describing the motion of a colored particle are Wong’s equations $$\begin{aligned}
m \frac{d^2 x^\mu}{ds^2} &=& -g F^{a \mu \nu} M^a \frac{d x^\nu}{ds} ,
\label{3-10}\\
\frac{d M^a}{ds} &=& -g f^{abc} \left(
A^{b \mu} \frac{d x^\mu}{ds}
\right)M^c
\label{4-10}\end{aligned}$$ where $x^\mu(s)$ is the 4D trajectory of the particle with the mass $m$, $M^a$ is the color components of color charge of the particle, $\left( M^a \right)^2 = M^2 = const$. Now we see that the ordinary elementary particles do not interact with the color fields presented here as they are colorless in the consequence of confinement for strong interactions.
Only monopoles and dyons may interact with these fields. Another possibility for finding the influence of the color fields on elementary particles is the following. Some particles (the proton, the neutron and so on) may have an inner structure, i.e. a color-electric or color-magnetic dipole or quadrupole, which would interact with an external inhomogeneous color field. But this interaction should be very small.
Conclusions and discussion
==========================
We have shown that in principle the classical SU(3) color fields have a weakly decreasing mass density, which allows us to offer the corresponding gauge field distribution as a candidate for DM explaining the Universal Rotation Curve of spiral galaxies. The question of whether the presented DM model can also explain the motion of galaxies in clusters and of galaxy clusters in superclusters demands separate consideration, which will be given in our future investigations. The distinctive feature of this DM model is that it uses the well-established SU(3) Yang-Mills theory: there is no need either to modify Newton's gravitational law or to introduce weakly interacting supersymmetric particles. It is interesting that the invisibility of gauge DM is connected with confinement in quantum chromodynamics.
Very important questions are: how big is the SU(3) domain, and is its volume finite or infinite? In this connection it is necessary to make the following remark. The gauge fields presented here are strongly oscillating far away from the center (for $r_0 \ll R_{opt}$). At some distance from the center the oscillations become so strong that quantum effects become essential: we should take into account the Heisenberg Uncertainty Principle. These effects appear when the quantum fluctuations of the gauge field become comparable with the magnitude of the color fields on the distance of about one period of oscillation. In this case we have to apply *nonperturbative* quantization in order to describe the quantum color fields. We note that in Ref. [@Dzhunushaliev:2006di] an approximate model of non-perturbative quantization is offered, and a solution describing a ball filled with a gauge condensate is obtained. The quantum fields of the ball decrease asymptotically as $\frac{e^{-r/l_0}}{r}$ (here $l_0$ is a constant), so the total mass of the SU(3) domain is finite. Thus as a whole the picture of the color field distribution (which is DM in the presented model) looks as follows: there is a ball filled with the *classical* color field, which gives the observable rotation curve, while far enough from the center the gauge fields become *quantum*.
[99]{}
Zwicky, F. , Helvetica Physica Acta 6, 110—127 (1933).
D. Clowe, M. Bradac, A. H. Gonzalez, M. Markevitch, S. W. Randall, C. Jones and D. Zaritsky, “A direct empirical proof of the existence of dark matter,” astro-ph/0608407.
F. Combes, P. Boissé, A. Mazure and A. Blanchard, “Galaxies and Cosmology”, Springer, 1995.
J. Binney, S. Tremaine, “Galactic Dynamics”, Princeton University Press, 1994.
M. Milgrom, Astrophys. J. [**270**]{}, 365(1983).
B. Sadoulet, Rev. Mod. Phys., **71**, S197(1999).
Y. N. Obukhov, Int. J. Theor. Phys. [**37**]{}, 1455 (1998), hep-th/9608011.\
V. D. Dzhunushaliev and D. Singleton, “Confining solutions of SU(3) Yang-Mills theory,” Contribution to ’Contemporary Fundamental Physics’, ed. Valeri Dvoeglazov (Nova Science Publishers). In \*Dvoeglazov, V.V. (ed.): Photon and Poincare group\* 336-346; hep-th/9902076.
A. Actor, Rev. Mod. Phys. [**51**]{}, 461 (1979).
E. Corrigan, D. I. Olive, D. B. Farlie and J. Nuyts, Nucl. Phys., **B106**, 15, (1976).
M. Persic, P. Salucci and F. Stel, Mon. Not. Roy. Astron. Soc. [**281**]{}, 27 (1996), astro-ph/9506004.

V. Dzhunushaliev, “Color defects in a gauge condensate,” hep-ph/0605070.
[^1]: Senior Associate of the Abdus Salam ICTP
The use of manganese and iron oxides by late Neandertals is well documented in Europe, especially for the period 60–40 kya. Such finds often have been interpreted as pigments even though their exact function is largely unknown. Here we report significantly older iron oxide finds that constitute the earliest documented use of red ochre by Neandertals. These finds were small concentrates of red material retrieved during excavations at Maastricht-Belvédère, The Netherlands. The excavations exposed a series of well-preserved flint artifact (and occasionally bone) scatters, formed in a river valley setting during a late Middle Pleistocene full interglacial period. Samples of the reddish material were submitted to various forms of analyses to study their physical properties. All analyses identified the red material as hematite. This is a nonlocal material that was imported to the site, possibly over dozens of kilometers. Identification of the Maastricht-Belvédère finds as hematite pushes the use of red ochre by (early) Neandertals back in time significantly, to minimally 200–250 kya (i.e., to the same time range as the early ochre use in the African record).
Recent debates on Neandertal material culture have highlighted the fact that Middle Paleolithic sites occasionally contain pieces of manganese and iron oxides, interpreted as pigments, possibly for personal decoration (1, 2). Some have taken these findings an inferential step further and speculated on the “symbolic implications of body painting” and ochre use for our views on Neandertals (2). From the Upper Paleolithic record, red ochre is indeed well known for its use in cave paintings and in ritual burial contexts. More “mundane” or “domestic” uses of red ochre (derived from hematite, Fe2O3) are known from the ethnographic record of modern hunter-gatherers, for instance, as (internal and external) medication, as a food preservative, in tanning of hides, and as insect repellent (3–9). Archeological studies have identified ochre powder as an ingredient in the manufacture of compound adhesives (10). Thus, the use of iron oxides for “symbolic” purposes should be viewed as a hypothesis that needs to be tested, rather than simply assumed.
For Europe, a recent review (11) mentions more than 40 Middle Paleolithic sites with possible pigments from the Marine Isotope Stage (MIS) 6–3 range. These concern mostly manganese oxide finds and almost all sites date to the very end of the Middle Paleolithic, between 60 and 40 ka (1 ka = 1000 y before present) (11). Some of these late sites yielded considerable quantities of these materials. From Pech de l'Azé I (France), for instance, more than 450 small pieces of manganese dioxide are known, with a total weight of ∼750 g; more than 250 of these finds showed traces of utilization (12). Thus, solid evidence for the use of manganese and iron oxides by Late Pleistocene Neandertals is recorded from at least 60 ka onward. There are claims for an earlier use of “red ochre” in Middle Pleistocene archeological sites in Europe, such as for Terra Amata (France), Becov (Czech Republic) (13), and Ambrona (Spain), but all of these have been contested, for various reasons, including identification and dating issues (14).
Here we report on “red material” of considerably greater antiquity than the Late Pleistocene, minimally 200–250 ka old. This material was recorded during the 1980s from excavations in the Maastricht-Belvédère loess and gravel pit in The Netherlands (50°52'09.40''N, 5°40'27.33''E). Fieldwork at this site focused on an interdisciplinary study of an early Middle Paleolithic site complex of flint (and occasionally flint and bone) scatters, preserved in a primary archeological context in fine-grained sediments of the Middle Pleistocene Maas River. From these sediments, eight archeological sites were excavated, as well as a series of test pits, creating a total excavated surface area of 1,577 m2 (15–17). The Middle Pleistocene river deposits yielded a full interglacial vertebrate fauna with 26 species (18) and a mollusk fauna containing more than 70 land and freshwater species (19). Terrace and loess stratigraphy, as well as mammal and mollusk biostratigraphical evidence, indicate an age before the next-to-last glacial phase, that is, before MIS 6 (20). Radiometric techniques included thermoluminescence dating of heated flint artifacts, which yielded an age of 250 ± 20 ka (21), and electron spin resonance dating of shells, which yielded an age of 220 ± 40 ka, all converging to MIS 7 for the Maastricht-Belvédère interglacial (20). However, amino acid racemization dating of Corbicula shells from the interglacial deposits, as well as biostratigraphically important elements of the mollusk fauna itself, suggest an earlier age (i.e., MIS 9) for the Belvédère interglacial and its associated archeology (22).
In the course of the archeological excavations, one of the sites, site C (excavated between 1981 and 1983), yielded 15 small concentrates of red material, with maximum size of 0.2–0.9 cm and 0.1–0.3 cm thick, with sharp boundaries to the sedimentary matrix (Figs. 1 and 2). The contrast in color between the bright-red concentrates and the yellowish-brown (Munsell soil color 2.5Y5/3) to grayish-olive (5Y5/3) sediment was striking (Fig. 2), facilitating recovery of these small, friable pieces at this site, excavated over an area of 264 m2 (Fig. 3). Although the red material has been interpreted as hematite (15, 23), these finds did not play a role in the history of ochre use, even though Maastricht-Belvédère became one of the flagship sites of Middle Paleolithic archeology, reviewed extensively in numerous textbooks (24). Improved identification methods and the increased focus on ochre use in current paleoanthropology debates justified another systematic study of the Maastricht-Belvédère material.
Results
Binocular microscopy investigation of three of the largest red concentrates at site C (23) revealed a red staining agent surrounding the larger quartz grains of the sedimentary matrix as a very thin coating (Fig. 1 and SI Text). The boundary between the red concentrates and the surrounding matrix was sharp both macroscopically (Figs. 1 and 2) and in thin sections (SI Text). Across this boundary, a significant decrease in grain size was observed from the matrix toward the red concentrates, with the matrix richer in silt-sized mineral particles than the red concentrates. Occasionally the fine red material was clotted together with the clay and silt particles of the sediment. Importantly, individual reddish crystal grains (e.g., hematite) were not visible. Earlier attempts at identifying this red material (23) focused on the largest pieces recovered from site C, with X-ray diffraction (XRD) analysis suggesting the presence of hematite (23). For the present study, only the small concentrates were available, preserved in their sedimentary matrix (SI Text). Four pieces came from site C (Cz11–1, Bz13-6, and Dz20-56, maximum dimension 3 mm, and WW10-8, maximum dimension 5 mm), and one piece was retrieved during the excavations at site F (20/23–1, maximum dimension 3 mm), one of the other sites in the fine-grained fluvial deposits (see below). Analysis of these samples by XRD, environmental scanning electron microscopy (ESEM), energy-dispersive X-ray spectroscopy (EDX), and several rockmagnetic studies clearly indicated the presence of hematite in the samples, along with a strong quartz component of the sediment matrix of the red material (SI Text). These findings confirm the results of the previous XRD analyses of three samples from site C mentioned above (23).
Discussion
With the red material identified as hematite, how did it enter the sediments? Our null hypothesis is that the hematite concentrates were part of the sedimentary environment of the archeological sites, that is, part of a natural background scatter of such finds. The site C matrix consisted of well-sorted, fine to very fine silty sands, with a silt and clay content of at least 15% by weight. Micromorphological studies of the site C sediments indicated low-energy deposition, which buried the archaeological remains very calmly and gradually (15, 25). Laterally, the site C sediments developed into loams. Study of the rich mollusk fauna retrieved from such loamy deposits adjacent to the excavated area showed the presence of stagnant water, suggesting that the site C area was protected from the main channel of the Maas River, a densely vegetated lacustrine niche in a predominantly fluviatile environment, during the climatic optimum of the interglacial (19). Site C contained flint knapping scatters of mainly Levallois debitage, from which large amounts of artifacts could be refitted (Fig. 4) (21.5% of the 3D recorded flint artifacts at site C; 70.4% by total weight), including a Levallois recurrent reduction sequence (15, 26). The excavations entailed 3D recording of all identifiable finds, including small (<0.5 cm) flint chips and pieces of bone and >5,800 charcoal fragments. The size distribution of the flint material was dominated by small (<2 cm) flint artifacts, composing ∼75% of the total 3D-recorded material. Spatial studies showed an absence of winnowing patterns, also supporting the primary context character of the assemblage.
The site C “red ochre” finds constituted a very strong “search profile” during the subsequent 1984–1989 excavations and geological fieldwork in the quarry, during which another 1200 m2 was excavated, distributed over various locales (sites F–N). Excavators were explicitly instructed to look for “red material.” At all sites but one (see below), excavation and documentation procedures were comparable to those at site C, but despite this, no ochre finds were recorded during subsequent excavations at site G (50 m2) or site K (370 m2). At site N, an area of 765 m2 was meticulously excavated and recorded with the explicit aim of studying the “background scatter” of flint artifacts, bones, and other finds present in the interglacial river deposits. The low-density distribution yielded 450 flint artifacts, partly conjoinable, but again no find of red ochre concentrates (16). Some of the excavations at the large, rich site K (10,192 flint artifacts) proceeded faster than at the other sites (17), so there the absence of comparable finds cannot be interpreted as a “real” absence. We did discover three more pieces of red material during the 1984 excavations at site F, located ∼300 m SE of site C. One of the pieces from this site was analyzed in this study and was also found to contain hematite. Site F was excavated over an area of 42 m2. Its excavation yielded 1,215 flint artifacts, of which we refitted 12.8% by numbers and 67% by weight. Apart from these three pieces from site F, no other hematite finds were made during the extensive archeological excavations in the quarry. Furthermore, during the multidisciplinary studies of exposures in the quarry in 1981–1989, thousands of meters of interglacial Maas River deposits were cleaned and examined for geological studies and for presence of various types of finds. Hundreds of meters of such sections were sampled and drawn in detail, and dozens of thin sections were prepared and studied (27), but no traces of hematite were detected during any of these activities. Based on these observations, we reject the null hypothesis that the hematite fragments are part of the sedimentary environment of the Maastricht-Belvédère archeological assemblage. Data independently supporting this interpretation come from the structure of the concentrates themselves, mentioned earlier. The hematite staining is seen to surround the quartz particles of the sedimentary matrix, with the hematite concentrate itself more fine-grained than the matrix, implying that the red material entered the sediments after their formation.
The combined evidence of onsite observations, studies of the nonarcheological deposits, and the character of the concentrates themselves concurs with our inference that the presence of these small fragments of nonlocal hematite was related to hominin activities at sites C and F. With the null hypothesis rejected, and given the data presented above, we need to explain the presence of this hematite material surrounding the sediment particles in the site C matrix. The best explanation, we hypothesize, is that the fine hematite material was originally concentrated in a liquid, and that drops of this ochre-rich substance became embedded in the sediments when the liquid was used and spilled on the soil surface. To test this interpretation, we performed an experiment to observe the impact of drops of a hematite-rich liquid on the site C sediment (SI Text). Despite the limitations of this experiment, the similarity of the experimentally produced concentrates to the archeological concentrates at both macroscopic and microscopic levels is remarkable (SI Text) and lends support to our interpretation of how the material entered the sediment.
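One way to quantify the stated similarity between experimental and archeological concentrates would be to compare a measured property, such as grain size, with a two-sample test. The sketch below is purely illustrative: the measurements are invented, and this particular test is our suggestion, not the procedure used in the study or its SI Text:

```python
from scipy import stats

# Hypothetical grain-size measurements (microns) from experimentally
# produced and archeological concentrates -- not data from the study.
experimental_um = [4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 3.9, 4.4]
archeological_um = [4.5, 5.0, 3.6, 5.8, 4.9, 4.2, 5.5, 4.0]

# Two-sample Kolmogorov-Smirnov test: a large p-value means the two
# grain-size distributions cannot be distinguished at this sample size.
stat, p = stats.ks_2samp(experimental_um, archeological_um)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
```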
What is the possible context of use of the hematite-rich liquid substance at Maastricht-Belvédère? The site F assemblage contained only one formal tool, and its unmodified flakes yielded no microscopic signs of use. The absence of several larger flakes from conjoining groups indicates that site F was a place where large blanks were produced for use elsewhere in the landscape. The presence of 15 heated flints and some charcoal particles suggests the former presence of a fire.
At site C, two-thirds of the hematite particles were clustered in the northwest part of the site, partially around the concentration of small (<0.5 cm) charcoal particles. Five hematite pieces were found in the southern part of the site among the flint artifacts recovered there, many of which were heated (15), possibly reflecting a former fire site (28).
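Whether “two-thirds in the northwest part” is surprising can be checked against a uniform-scatter null with a one-sided binomial test. The counts and the equal-area quadrant assumption below are hypothetical, for illustration only; they are not the spatial analysis reported in the paper:

```python
from scipy import stats

# Under spatial uniformity, each particle lands in the northwest
# quadrant with probability ~0.25 (assuming equal-area quadrants --
# an assumption made here purely for illustration).
n_particles = 15  # hypothetical total count of hematite particles
in_nw = 10        # "two-thirds" clustered in the northwest

result = stats.binomtest(in_nw, n_particles, p=0.25, alternative="greater")
print(f"P(>= {in_nw}/{n_particles} in one quadrant | uniform) = {result.pvalue:.4f}")
```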
Site C yielded some (use-) retouched tools, including three scrapers (15). Faunal remains were poorly preserved (15). Use-wear analysis of the flint artifacts suggests that butchering activities may have taken place there, and the presence of scrapers suggests possible hide working (29), an activity that could have involved hematite (30, 31). No traces of hematite were detected on these artifacts, however. In summary, the spatial and functional context of the site C hematite finds offers no direct insight into how the hematite was used.
What we can state is that ochre use has now been documented in an early Middle Paleolithic context, dating minimally to MIS 7, even though the application of the ochre is unknown. Claims for comparably early use of ochre do exist for a few European sites, as mentioned above, but are thus far unsubstantiated. It is also interesting to note that the late Middle Pleistocene (MIS 6–7?) B3 find level at the site of Rheindahlen (Germany) yielded sandstone slabs with traces of use, possibly caused by grinding mineral material (32). Nevertheless, the Maastricht-Belvédère case thus far remains a unique occurrence. Our interpretation of the Maastricht-Belvédère material predicts that more traces of hematite use will turn up in future excavations of Middle Paleolithic sites in comparable archeological as well as geological settings, that is, with a sedimentary matrix that guarantees the survival and visibility of such small pieces and a research context that allows careful excavation.
The nearest known hematite sources lie ∼40 km from the site, in the Ardennes and Eifel areas (33) (Fig. 5). The Ardennes sources, in the Liège-Dinant-Namur area, are located in the catchment of the Maas River, but despite this, hematite has not yet been recorded in stone counts of the river gravels (15) in the Maastricht region. However, one cannot rule out the possibility that very small quantities of hematite were collected from river bars in the late Middle Pleistocene. Hematite is present (albeit very sporadically) in Paleozoic rocks in the Ardennes-Rhine Massif, especially in quartz veins with hematite crystals. Stones and boulders of these rocks are present in Maas deposits, transported from their source areas on ice rafts. Hypothetically, Neandertals might have stumbled on such hematite in a large quartz boulder, although this likely would have been a very rare encounter. Neandertal sites such as Spy and Sclayn are among the many Middle Paleolithic sites in this Ardennes source area in the Maas basin. Some of these sites yielded artifacts made of flint from the Maastricht chalk area, testifying to contacts between the two regions in the later Middle Paleolithic (34).
Regarding a possible connection to the sources in the northern part of the Eifel (33), it is noteworthy that two early Middle Paleolithic (MIS 6) sites in the Eifel area yielded small numbers of artifacts made of flint from Cretaceous deposits of the Maastricht area (Schweinskopf site, n = 5; Wannen site, n = 8) (35). These flints were discarded in the Eifel at distances of ∼100 km from their geological sources near Maastricht (34, 35). The hematite material might have traveled in the opposite direction, from the Eifel to the Maastricht area, but better (i.e., significantly larger) samples are needed to test such a hypothesis by establishing a solid provenance for the Belvédère material. The occasional transport of stone artifacts over distances comparable to those discussed here is well documented for the European Middle Paleolithic (34–38), and a hypothetical import of hematite over such a distance fits with our data on Neandertal movements through Pleistocene landscapes.
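To reproduce the order of magnitude of these transport distances, a haversine sketch with approximate modern coordinates (the source locations are illustrative stand-ins for the Ardennes and Eifel areas, not provenanced findspots):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

maastricht = (50.85, 5.69)  # approximate
liege = (50.63, 5.57)       # stand-in for the Ardennes source area
eifel = (50.33, 7.22)       # stand-in for the northern Eifel

print(f"Maastricht-Liege: ~{haversine_km(*maastricht, *liege):.0f} km")
print(f"Maastricht-Eifel: ~{haversine_km(*maastricht, *eifel):.0f} km")
```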
In Africa, pieces of red ochre became a common phenomenon in Middle Stone Age (MSA) rock shelter sites from ∼160 ka onward (8, 9, 39), but there are a few earlier occurrences in the same time range as those reported here for Maastricht-Belvédère (40). Site GnJh-15 in the Kapthurin Formation, Baringo, Kenya (41), yielded hematite fragments too friable to preserve traces of grinding, ranging from pulverized granular material weighing <3 g to large chunks weighing >250 g. The early MSA site of Twin Rivers (Zambia), thought to date to 200–300 ka, produced pieces of ochre as well (42). The Maastricht-Belvédère material dates to the same time range as these early cases of ochre use in the African MSA, produced by ancestors of modern humans. For investigators who view iron oxide use as an archeological indicator of symbolic behavior, the Maastricht-Belvédère material thus provides early data on red ochre manipulation by members of the Neandertal lineage to take into account (43). In our view, however, there is no reason to assume that the mere presence of iron oxide at an archeological site, whether Neandertal or modern human, implies symbolic behavior.
Conclusion
The small hematite concentrates reported here constitute the earliest case of use (and possibly of transport over distances of several tens of kilometers) of this material in the Neandertal archeological record. The finds probably entered the matrix of the site as drops of an ochre-rich liquid substance during unknown application activities. The finds provide only a very limited window into the manipulation of red ochre by early Neandertals, certainly compared with the unique and detailed information recently published for Blombos Cave, South Africa (39). Importantly, however, with the identification of the Maastricht-Belvédère material as hematite, the use of red ochre by early Neandertals has been pushed back in time to at least 200–250 ka (MIS 7), that is, to the same time range as documented for the African record, produced by Middle Pleistocene ancestors of modern humans. Future studies can be expected to yield comparable finds from early Middle Paleolithic settings, either during fieldwork or as a result of the reanalysis of old finds. The currently available evidence suggests sporadic use of red ochre by early Neandertals, minimally from MIS 7 onward.
Methods
In the first study of the largest pieces from site C, one investigator (C.E.S.A.) produced a concentrate of the red crusty material of find Dz23-16 by carefully grinding it to release the reddish material. The sample was then placed in a concave glass dish filled with alcohol, and the finest fraction of the reddish powder was separated from the bulk sample by panning. After further grinding to obtain a suitable grain size, this concentrate was used for XRD, which showed that the red stain was caused by the presence of hematite (23). For the present study, four small samples of reddish material from the Maastricht-Belvédère excavations at site C (Cz11-1, Bz13-6, Dz20-56, and WW10-8) and a fragment from site F (20/23-1) (Fig. S4) were subjected to various analyses to study their physical properties, including ESEM, EDX, XRD, and several types of rock-magnetic analyses. All analyses were performed at the laboratories of the Centro Nacional de Investigación sobre la Evolución Humana, Burgos, Spain, where thin sections of samples Bz13-6 and Dz20-56 (site C) were produced as well. More details are provided in SI Text.
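In principle, XRD identification works by converting measured diffraction peaks to d-spacings via Bragg's law and matching them against reference lines for a candidate mineral. A minimal sketch, assuming Cu-Kα radiation; the reference d-spacings are approximate literature values for the strongest hematite reflections, and the measured peak positions are invented, not the study's actual diffractograms:

```python
import math

CU_KALPHA_A = 1.5406  # X-ray wavelength in angstroms (Cu K-alpha)

# Approximate literature d-spacings (angstroms) for strong hematite
# reflections; an illustrative reference set, not a full PDF card.
HEMATITE_D = [3.68, 2.70, 2.51, 2.20, 1.84, 1.69]

def two_theta_to_d(two_theta_deg: float) -> float:
    """Bragg's law (n = 1): d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return CU_KALPHA_A / (2.0 * math.sin(theta))

def hematite_match_fraction(peaks_two_theta, tol=0.02):
    """Fraction of reference lines matched within a relative tolerance."""
    d_obs = [two_theta_to_d(tt) for tt in peaks_two_theta]
    hits = sum(
        any(abs(d - ref) / ref < tol for d in d_obs) for ref in HEMATITE_D
    )
    return hits / len(HEMATITE_D)

# Hypothetical measured peak positions (degrees 2-theta):
measured = [24.1, 33.2, 35.6, 40.9, 49.5, 54.1]
print(f"hematite lines matched: {hematite_match_fraction(measured):.0%}")
```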
Acknowledgments
We acknowledge our many colleagues involved with the Maastricht-Belvédère project, in particular the late Paul Hennekens. We thank three anonymous referees, Alain Turq (Les Eyzies), Paola Villa (Boulder), Mark Dekkers (Utrecht), Ian Watts (Athens), Francesco d'Errico (Bordeaux), Alexander Verpoorte, and Adam Jagich (Leiden) for comments on earlier versions of the paper and/or advice on aspects of the study. We thank Jan Pauptit and Joanne Porck (Leiden) for their work on the figures, Annelou van Gijn and Erik Mulder (Leiden) for help with the microscopic work, Silvia Gonzalez Sierra (Burgos) for her work with the ESEM and EDX, Ana Isabel Alvaro Gallo (Burgos) for collection of the XRD data, and Carlos Saiz Domínguez for work on the thin sections. This study was partly financed by the NWO Spinoza prize awarded (to W.R.) by the Netherlands Organization for Scientific Research (NWO) and supported by a Royal Netherlands Academy of Arts and Sciences Assistant Grant (to T.K.N.).
Footnotes
Author contributions: W.R. and T.K.N. designed research; W.R., M.J.S., D.D.L., J.M.P., C.E.S.A., and H.J.M. performed research; W.R., M.J.S., J.M.P., C.E.S.A., and H.J.M. analyzed data; and W.R., M.J.S., and T.K.N. wrote the paper.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1112261109/-/DCSupplemental.
Freely available online through the PNAS open access option.
References
- Soressi M, D'Errico F
- Zilhão J, et al.
- Wadley L, Williamson B, Lombard M
- Velo J
- Roper DC
- Basedow H
- Peile AR
- Watts I
- Wadley L, Hodgskiss T, Grant M
- Soressi M, et al.
- Trąbska J, Gaweł A, Trybalska B, Fridrichová-Sýkorová I
- Roebroeks W
- Roebroeks W, De Loecker D, Hennekens P, van Ieperen M
- De Loecker D
- Van Kolfschoten T
- Meijer T
- Van Kolfschoten T, Roebroeks W, Vandenberghe J
- Huxtable J
- Arps CS
- Gamble C
- Mücher HJ
- Vandenberghe J, Roebroeks W, Van Kolfschoten T
- Stapert D
- van Gijn AL
- Keeley LH
- Phillibert S
- Thieme H
- Horsch H, Keesman I
- Roebroeks W, Kolen J, Rensink E
- Floss H
- Geneste J-M
- Féblot-Augustins J
- Féblot-Augustins J
- Henshilwood CS, et al. | https://www.pnas.org/content/109/6/1889.full
Written by Rhiannon Mallay, CNN
On June 7, the American Library Association (ALA) celebrates the 70th anniversary of the organization’s annual list of the best books published in the US in the previous year. The list is best known for its unique top pick, the overall “best book of the year,” but it is also recognized as a useful snapshot of bestsellers that says something about our tastes.
Some list winners have been famous for years: Dorothy Parker’s “Too Much Bread,” Ernest Hemingway’s “A Moveable Feast,” Thomas Wolfe’s “Look Homeward, Angel,” and Lydia Davis’ “Mt. Vernon, Idaho,” for example. Of course, winners aren’t always well known at the time. In 1934, Albert Camus’ “The Stranger” won, but the book was by then out of print.
Writing about ‘nonfiction books’
By including so many titles (the ALA announces its nonfiction list apparently every other week), the list subtly reveals which genres are popular today, such as nonfiction (the Pulitzer Prize-winning “Everything Is Miscellaneous” by Shon Hopwood).
“We open up our facility to all types of fiction and nonfiction books,” says ALA public policy advocate Kristin Brzezinski. “We take care to look beyond mass market, and make a distinction between genre and art.”
In 2015, President Barack Obama signed legislation that created a competition for public libraries to enter a request for proposal for the best book of the year, inviting submissions from both nontraditional libraries and regular libraries. (Nontraditional libraries today include libraries that serve sites like preschools, colleges, and universities, as well as places where young people live and work.)
This year, the ALA is celebrating the ALA Literary Awards, in which 16 fiction and four nonfiction writers were honored with $10,000 prizes. “Money’s not going to a lot of writers, so the prize is pretty significant,” says Julie Sowerby of the publishing company Cora, which has won the award for fiction twice before (in 1996 and 2004).
The prizes are worth about $50,000 combined, which Sowerby says is a “pretty decent amount” for a modern publishing house. The books highlighted on the ALA’s list “are in very small print,” so most publishers do not make a publishing profit on a book of this type. “We’re not looking for volume,” says Sowerby.
Plenty of winners are popular
Indeed, despite being a widely accepted annual event, the ALA Literary Awards are relatively unheralded: “The awards are usually really small, and the lists are really niche and specific,” says Brzezinski. That’s why it’s notable that the top honors on the 2018 list include such eclectic works.
This year’s award for fiction went to Anne Lamott’s “Bird by Bird,” winner of the Margaret Mead Prize for Nonfiction. Lamott, best known for her advice about the “that’s them” (that which shouldn’t matter), is often credited with transforming the way we write, although today’s more difficult writing environments “may have shaped it even more,” says Brzezinski. | http://la-nites.com/the-best-books-of-2018-are-winning-prizes-and-books/
Support the Programme Manager in the formulation, implementation and monitoring of annual business plans and project budgets. Analyse and report on the programme funding position. Lead the financial monitoring and review of grants/contracts. Collaboratively produce project/programme budgets and forecasts. Support the Programme Manager in the management of financial risk in the programme, escalating and addressing any emerging risks.
Accounting and Financial Control
Provide financial oversight and support to all programme locations, ensuring transactions are fully reconciled and discrepancies identified and corrected. Oversee all financial accounting matters, closing the country books in accordance with agreed deadlines. Ensure that direct and indirect costs are allocated appropriately to projects, identifying and reporting on any shortfalls in both direct and indirect cost coverage. Supervise the payroll cycle, ensuring that donor funding allocations are accurate and that calculations for salary, income tax, social security, severance, and other government levies are in accordance with legislation. Support the Programme Manager in ensuring compliance with all statutory legislation.
Cash Management
Ensure that adequate banking and cash provisions are in place. Manage the short-term cash flow requirements of the programme. Manage the effects of exchange rate fluctuations between local and contract currencies.
External Reporting and Audit
Lead the preparation of country financial statements and donor financial reports. Review and report on compliance against policies and procedures. Lead the preparation for external audits, preparing schedules and documentation as and when required by auditors.
Staff Management and Development
Ensure that financial staffing capacity is fit for purpose for the needs of the programme. Build the capacity and support the career development of national staff, ensuring financial consistency and quality across the programme.
For further information, and to apply, please visit our website via the “Apply” button below. | https://jobs.accaglobal.com/job/7544118/international-finance-manager-somalia-345/?LinkSource=PremiumListing |