Atlas OG07 is the 37th member of the OGIENOID series. Her character design was done by LadyOgien. Created without permission by Kasai OG01, she and her sister, Axis OG06, live in secret. Background: Atlas was an unplanned addition to the OGIENOID series created by OGIEN Ltd. She was created and activated, without permission, by Kasai OG01. Her physical form was built from a stolen biodroid body, along with a few pieces of Kasai's hardware. Atlas's psychological form was created illegally from Kasai's replicated software and database. Though she shares much of Kasai's data, she is still her own person. Because creating certain types of robots/androids/biodroids is highly illegal in her time, Atlas lives in secret as a cooking robot in OGIEN Ltd.'s headquarters. She and her sister were not allowed to grow up together due to fears of overlap and redundancy in their programming. For their safety, they were kept separate. Personality: Atlas is a rather shy and reserved individual, very obviously influenced by her social anxiety. Growing up, she was not allowed to speak with strangers or other artificial humans, for fear that she would reveal she was not a legal entity. The lack of contact over time left her ill-equipped for communication. Name Origins: Atlas's name comes from two different origins. The first is the Titan from Greek mythology, punished by Zeus to hold up the heavens until the end of time. The name itself means "to carry", and it bears connotations of great strength. The second meaning comes from the sciences. In anatomy, the atlas or "C1" is the first cervical vertebra of the spine. It is named the atlas because it supports the skull. Along with the axis (C2), this vertebra is specialized to allow greater ranges of motion than normal vertebrae, letting the head nod and rotate. Relationships: - Kasai OG01 - Atlas's creator and "mother" in name. The two have never met. - Axis OG06 - Atlas's sister. The two grew up incredibly distant from one another, forbidden from interacting. Over time, they were finally able to spend time with one another and grew incredibly close. Though Axis is the more protective of the two, Atlas would not hesitate to protect her. Trivia: - Atlas's accent color is Axis's primary color: mint green. - Her favorite season is winter, primarily because of Christmas. - She is rather addicted to sweets, and prefers them over any other kind of food. - Her favorite activity is baking, which has led her to become a master chef.
https://studio-ogien.fandom.com/wiki/Atlas_OG07
Cerebrospinal Functional Medicine (CFM) is a new medical specialty developed by Dr Young Jun Lee. Dr Lee is a Korean Medicine Doctor who was also the first doctor to receive a Ph.D. in Integrative Medicine. Through persistent study and research over many years he has found that balancing the TMJ (temporomandibular joint) can correct structural distortions, which leads to stabilization of the nerves. His findings have led to the formation of a new discipline of medicine, which he termed ‘Cerebrospinal Functional Medicine (CFM)’, and he has published a number of materials within the field. In CFM, many chronic and ‘intractable’ diseases are viewed as being caused by structural imbalance, which leads to problems in the nervous system. The treatment applied therefore involves correcting the structural imbalance, thereby restoring the nervous system and the functions of the organs to normality. The field also holds that the TMJ is the core component controlling the function and structure of the cerebrospinal column and whole-body balance. The core theory of CFM is that ‘through balance of the TMJ the whole-body balance is regulated’ – meaning that TMJ balance must be achieved in order to realign the upper cervical vertebrae, which in turn allows the rest of the body structure to balance and the nervous system to stabilize. Based on this theory, many years of research and clinical experience have allowed Dr Lee to develop confident and reliable methods of diagnosis, testing and treatment as a ‘holistic approach by manipulating the TMJ’. The treatment method, referred to as Functional Cerebrospinal Therapy (FCST), consists of using intraoral appliances to balance the TMJ and of realigning the upper cervical vertebrae and the rest of the body structure by manipulation. For years he was ridiculed by other doctors for emphasising the importance of the TMJ. However, he has also successfully collaborated with many medical doctors, including neurosurgeons, dentists and alternative medicine professionals, who have accepted his theory and actively supported him in advancing this new approach to curing diseases. The work is still ongoing and a growing number of people now recognise the significance of his findings. His seminars and professional courses for doctors are also increasing in popularity. What is FCST (Functional Cerebrospinal Therapy)? Functional Cerebrospinal Therapy (FCST) is a non-surgical treatment method currently widely applied in South Korea which normalises the nervous, hormonal and other body systems by resolving structural problems, based on CFM principles. It was developed by Dr Lee, who was himself a patient 30 years ago, suffering from paralysis in the left arm. According to Dr Lee, the cranium, spine and pelvis are the core parts of our skeletal structure, and distortion in any of them can affect related muscles and the nervous and hormonal systems, which can provoke a range of symptoms. FCST is a 21st-century form of treatment which allows the core structures of the body (the cranium, spine and pelvis, which can be classified as three points of distortion, as well as the TMJ) to regain balance, thereby recovering the normal function of the nervous and other systems and making it possible to cure chronic and ‘intractable’ diseases.
Conceptual Framework: In order to comprehend the full mechanism, it is important to understand the following concepts, which are laid out in the ‘Framework’ drop-down menu: - Importance of balanced body structure - Importance of TMJ - Cause of Dystonia FCST The FCST clinic offers various therapies that are effective in relieving Dystonia symptoms. Many Dystonia patients may already be familiar with most of them, such as CranioSacral Therapy, Acupuncture and Positive Thinking. However, the therapy that allows full recovery from Dystonia (including cervical, focal and other types of Dystonia), and that is unique to the clinic, is Functional CerebroSpinal Therapy, or FCST for short, which was developed by Dr Lee. According to Dr Lee, this is because the therapy resolves the fundamental cause of Dystonia and other neurological disorders. So what is FCST? The therapy essentially resolves the following two structural problems: 1. Imbalance of the temporomandibular joint (TMJ), using intraoral balancing appliances. Dr Lee uses three different types of balancing appliances (CBA, TBA and OBA). These are different from splints or other intraoral appliances provided by ordinary dentists: CBA – (Cervical Balancing Appliance) A disposable, custom-made intraoral balancing appliance (to be worn only once, for up to 1 hour) which balances the TMJ at the freeway space optimal for the individual (also referred to as the zero point), according to the cranial and spinal structure, until a deflection occurs (see ‘Deflections‘). This appliance is worn during Chuna (similar to chiropractic) therapy for alignment of C1 and C2. TBA and OBA – (TMJ Balancing Appliance) and (Occlusion Balancing Appliance) These are made of an elastic material to accommodate differences in occlusion and TMJ structure between individuals, and are effective in balancing the TMJ throughout the day and night. The occlusion balancing appliance (OBA) is for patients who also have problems with occlusion (typically Class 2 and Class 3 malocclusion). Patients are asked to wear this device for as long as possible for faster treatment. Patients may feel pain as the occlusion changes, but once the optimal occlusion is achieved the teeth no longer move and cause pain. 2. Distortion of the upper cervical vertebrae C1 and C2, also known as the atlas and axis, resolved by using Chuna (similar to chiropractic) manipulation to realign them whilst wearing the CBA. Other therapies at the clinic: Other sub-therapies offered at the clinic, together with FCST, help to speed up the recovery process. Ultimately, the whole treatment aims to: - Rebalance the body structure from the TMJ to the pelvis - Stabilize the nervous system - Regain mental stability Importance of TMJ: This section explains how the TMJ affects the spine structure and the nervous system. It then explains how FCST's treatment of the TMJ differs from other available treatments. i) TMJ and structure Movement of the TMJ is very closely related to the second cervical vertebra, C2, also known as the axis. One might think that when the mandible opens and closes, its movement is centred around the condyle in the TMJ itself. However, this is not the case. According to the Quadrant Theorem of Guzay, the axis of rotation of the mandible lies exactly at the odontoid of C2. (The odontoid is the upward, toothlike protuberance from the second vertebra, around which the first vertebra rotates.) When the mandible moves downwards, this generates a pulling force, loosening the muscles around C2.
Likewise, when moving up (i.e., when closing the mouth), it generates pressure, which tightens the muscles around C2. This means that an occlusion with decreased vertical dimension will aggravate muscle tension around C2 when the mouth is closed. Therefore, it is clear that distortion in the TMJ will affect the position of the axis too. Of the 24 vertebrae of the presacral spine (7 cervical, 12 thoracic, 5 lumbar), there is only one vertebra with an odontoid/axis, which is C2. Therefore, the axis plays a key role in the balance of the entire spine. Together with the TMJ, C2 is the most significant variable affecting the entire spine structure. So what happens after subluxation of the axis? The rest of the spine collapses, as in a domino effect, even affecting the position of the cranial bones and the pelvis. This is explained by the Lovett Reactor relationship. According to the Lovett Reactor relationship, each vertebra is coupled in motion with another vertebra, and the pelvis is coupled in motion to the cranium. C1 + L5, C2 + L4 and C3 + L3 automatically move in the same direction (also known as coupling movement). Other vertebral pairs, for example C4 + L2, move in the opposite direction (a small lookup of these pairs is sketched at the end of this article). Therefore, an impact on one vertebra influences other vertebrae in the spine, and TMJ distortion causes subluxation of C2 (the axis), which leads to the collapse of the rest of the body structure. ii) TMJ and nerves Nine of the 12 cranial nerves are found near the temporal bones, from which the mandible is suspended. In particular, the 5th cranial nerve (also known as the trigeminal nerve) innervates the TMJ and is coupled to C1 and C2 (atlas and axis). The cranial nerves together control 136 different muscles (or 68 pairs of "dental muscles") connecting the entire spine. According to Dr Lee, misalignment of the TMJ disturbs the trigeminal nerve, and this can lead to problems in the rest of the nervous system. Problems in the nervous system may cause abnormal muscle contractions and pain due to central sensitization and a wider range of brain plasticity. Not only this, but TMJ distortion that causes subluxation of C1 and C2 can limit the space of the foramen magnum (an opening at the base of the skull) through which the cerebrospinal fluid circulates. This can negatively impact body-brain communication and also cause restriction of the jugular foramen, another opening in the base of the skull transmitting veins, arteries and nerves. Restriction in these openings can mean less efficient brain respiration due to a decrease in cerebrospinal fluid circulation, and can also limit the proper flow of blood to the brain. In conclusion, the TMJ is the most important factor contributing to the collapse of the spine structure and disruption of the nervous system. iii) What causes TMJ imbalance? Many people think that TMD (TMJ dysfunction) is mostly caused by trauma (i.e. injury) to the jaw. However, there are many other causes of TMD, including the following: - Chewing on one side consistently - Malocclusion - Neglect of a missing tooth - Trauma from complications of head and neck injuries or traffic accidents - Bad oral habits - Genetic or congenital problems - Solid foods and chewing gum - Mental stress - Teeth grinding (bruxism) - Bad habits such as poor posture When patients first experience TMJ problems, they may feel pain around the jaw area, develop headaches or have problems chewing.
However, once TMD becomes chronic, the symptoms are no longer concentrated on the facial area; patients start suffering from a variety of symptoms, including pains extending down to the neck and back and psychological disorders such as depression and anxiety. In another study, a single TMD patient was sent to a series of different specialists, and the diagnoses she received varied from specialist to specialist. It can be said that the TMJ is the only joint in the body which produces such different symptoms between acute and chronic patients. Usually, for other joint-related illnesses, the symptoms do not involve much more than pain around the affected joint. However, studies have found that many chronic TMD sufferers experience not only pain and depression but also indigestion, allergies, chronic fatigue, dry/sore eyes, eczema, difficulties in hearing, loss of coordination, numbness in the limbs and fingers, tinnitus, asthma, cold hands and feet, apnoea, vertigo and many others. This can be explained by the impact TMJ imbalance has on the rest of the body structure and the nervous system, as described above. iv) How can TMJ imbalance be resolved? There are various dental treatments, surgeries and other practices in both the medical field and alternative medicine today, but it may be very difficult to find treatments that are effective for all sufferers. So how is FCST different? Dr Lee's holistic approach is unique in that he treats TMD taking into account all of the following factors: - Freeway space* (see below) - Occlusion - Position of the cranium, TMJ, spine and sacrum Ultimately, this treatment aims to achieve the optimal balance (left, right, front and back) from the top to the bottom of the entire body. Once this optimal balance is achieved in the entire body and patients no longer experience Deflections, it is considered that they do not require further treatment. * Freeway space The freeway space is the space that exists between the upper and lower articulatory members at rest. It is only when we swallow that the teeth make contact in order to create pressure. The space varies from 1-8 mm, but most people tolerate a space in the 2-3 mm region. According to Dr Lee, even a 1/10 mm defect in the freeway space can distort the TMJ (and therefore cause problems with the nervous system). A splint made of hard material, although custom made, does not allow flexibility when there are muscular and/or structural changes in the individual as a result of wearing the device, making it difficult to settle at the precise freeway space. His intraoral balancing appliances are therefore designed to fix this flaw. During treatments he uses special thin sheets of paper to find the optimal balance (sometimes referred to as the zero point).
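As referenced earlier in this article, the Lovett Reactor coupling pairs quoted above can be read as a small lookup table. The sketch below, in Python, is purely illustrative: it encodes only the pairs and directions named in the passage, is not a clinical or diagnostic tool, and deliberately omits any pair the text does not mention.

```python
# Minimal illustrative sketch of the Lovett Reactor coupling pairs as quoted
# in the passage above; not a clinical or diagnostic tool.
LOVETT_PAIRS = {
    ("C1", "L5"): "same direction",
    ("C2", "L4"): "same direction",
    ("C3", "L3"): "same direction",
    ("C4", "L2"): "opposite direction",
}

def coupled_with(vertebra: str) -> tuple[str, str]:
    """Return (partner, direction) for a vertebra listed in LOVETT_PAIRS."""
    for (a, b), direction in LOVETT_PAIRS.items():
        if vertebra in (a, b):
            return (b if vertebra == a else a), direction
    raise KeyError(f"No coupling listed in the text for {vertebra}")

print(coupled_with("C2"))  # ('L4', 'same direction')
```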
https://curefordystonia.com/tag/tmd/
The spinal column (columna vertebralis) is the true basis of the skeleton, bearing the entire body. Its design allows it, while maintaining flexibility and mobility, to withstand the same load as a concrete post eighteen times thicker. The spinal column is responsible for maintaining posture, serves as a support for tissues and organs, and takes part in forming the walls of the chest, pelvis and abdomen. Each of the vertebrae (vertebra) making up the spine has a through opening, the vertebral foramen (foramen vertebrale). In the assembled spine the vertebral foramina form the vertebral canal (canalis vertebralis), which contains the spinal cord and thus protects it from external influences. In the frontal projection of the spine, two regions of noticeably broader vertebrae stand out clearly. In general, the weight and size of the vertebrae increase from top to bottom: this is necessary to compensate for the increasing load carried by the lower vertebrae. In addition to the thickening of the vertebrae, the necessary degree of strength and elasticity of the spine is provided by several curves lying in the sagittal plane. The four curves, alternating in direction along the spine, are arranged in pairs: each curve facing forward (a lordosis) corresponds to a curve facing backward (a kyphosis). Thus the cervical (lordosis cervicalis) and lumbar (lordosis lumbalis) lordoses correspond to the thoracic (kyphosis thoracalis) and sacral (kyphosis sacralis) kyphoses (Fig. 3). With this design the spine works like a spring, distributing loads evenly along its entire length. In total the spine consists of 32-34 vertebrae, separated by intervertebral discs and differing somewhat in structure. In a single vertebra one distinguishes the vertebral body (corpus vertebrae) and the vertebral arch (arcus vertebrae), which encloses the vertebral foramen (foramen vertebrae). The vertebral arch bears processes of differing shape and purpose: the paired superior and inferior articular processes (processus articularis superior and processus articularis inferior), the paired transverse processes (processus transversus), and the single spinous process (processus spinosus), projecting backward from the vertebral arch. At the base of the arch are the so-called vertebral notches (incisura vertebralis), a superior (incisura vertebralis superior) and an inferior (incisura vertebralis inferior). The intervertebral foramina (foramen intervertebrale), each formed by two adjacent vertebral notches, open access to the vertebral canal on the left and right. In accordance with their location and structural features, five types of vertebrae are distinguished in the spinal column: 7 cervical, 12 thoracic, 5 lumbar, 5 sacral and 3-5 coccygeal (a brief tally of these counts appears at the end of this article). A cervical vertebra (vertebra cervicalis) differs from the others in that it has an opening in each of its transverse processes. The vertebral foramen, bounded by the arch of a cervical vertebra, is large and almost triangular in shape. The body of a cervical vertebra (except the I cervical vertebra, which has no body) is relatively small, oval and elongated in the transverse direction. The I cervical vertebra, or atlas (atlas), lacks a body; its lateral masses (massae laterales) are connected by two arches, an anterior (arcus anterior) and a posterior (arcus posterior). The upper and lower surfaces of the lateral masses bear articular surfaces (superior and inferior), by means of which the atlas articulates with the skull above and with the II cervical vertebra below.
In turn, the II cervical vertebra is characterized by a massive process on its body, the so-called tooth or dens (dens axis), which was originally part of the body of the I cervical vertebra. The tooth of the II cervical vertebra is the axis around which the head rotates together with the atlas, which is why the II cervical vertebra is called the axis (axis). On the transverse processes of the cervical vertebrae, rudimentary costal processes (processus costalis) can be found, which are particularly well developed in the VI cervical vertebra. The VII cervical vertebra is also called the prominent vertebra (vertebra prominens), since its spinous process is much longer than those of the adjacent vertebrae. A thoracic vertebra (vertebra thoracica) has a larger body than a cervical vertebra and an almost round vertebral foramen. Thoracic vertebrae have on their transverse processes a costal pit (fovea costalis processus transversus), which serves for connection with the tubercles of the ribs. On the lateral surfaces of the body of a thoracic vertebra there are also superior (fovea costalis superior) and inferior (fovea costalis inferior) costal pits, which receive the heads of the ribs. The lumbar vertebrae (vertebra lumbalis) are distinguished by horizontally directed spinous processes with small gaps between them, as well as by a massive, bean-shaped body. Compared with the cervical and thoracic vertebrae, a lumbar vertebra has a relatively small, oval vertebral foramen. The sacral vertebrae remain separate until the age of 18-25 years, after which they fuse together to form a single bone, the sacrum (os sacrum). The sacrum is triangular in shape, with its apex pointing downward; in it one distinguishes the base (basis ossis sacri), the apex (apex ossis sacri) and the lateral parts (pars lateralis), as well as the anterior pelvic (facies pelvica) and posterior (facies dorsalis) surfaces. The sacral canal (canalis sacralis) passes inside the sacrum. The base of the sacrum articulates with the V lumbar vertebra, and the apex with the coccyx. The lateral parts of the sacrum are formed by the fused transverse processes and rudimentary ribs of the sacral vertebrae. The upper sections of the lateral surfaces of the lateral parts bear ear-shaped articular surfaces (facies auricularis), by means of which the sacrum articulates with the pelvic bones. The anterior pelvic surface of the sacrum is concave, bears visible traces of the fusion of the vertebrae (in the form of transverse lines), and forms the posterior wall of the pelvic cavity. The four lines marking the sites of fusion of the sacral vertebrae end on both sides in the anterior sacral foramina (foramina sacralia anteriora). The posterior (dorsal) surface of the sacrum, which is rough and convex, likewise has four pairs of posterior sacral foramina (foramina sacralia dorsalia), and a vertical crest runs down its centre. This median sacral crest (crista sacralis mediana) is the trace of the fusion of the spinous processes of the sacral vertebrae. To its left and right lie the intermediate sacral crests (crista sacralis intermedia), formed by the fusion of the articular processes of the sacral vertebrae. The fused transverse processes of the sacral vertebrae form the paired lateral sacral crest (crista sacralis lateralis). The paired intermediate sacral crest ends above in the ordinary superior articular processes of the I sacral vertebra, and below in the modified inferior articular processes of the V sacral vertebra. These processes, called the sacral horns (cornua sacralia), serve for the articulation of the sacrum with the coccyx. The sacral horns bound the sacral hiatus (hiatus sacralis), the outlet of the sacral canal. The coccyx (os coccygis) consists of 3-5 underdeveloped vertebrae (vertebrae coccygeae), which (except for the first) have the form of oval bony bodies that ossify completely only at a relatively late age.
The body of the I coccygeal vertebra bears lateral projections, which are rudiments of the transverse processes; above this vertebra are the modified superior articular processes, the coccygeal horns (cornua coccygea), which connect with the sacral horns. By its origin, the coccyx is the rudiment of the caudal skeleton.
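As flagged earlier in this article, the vertebral counts quoted there (7 cervical, 12 thoracic, 5 lumbar, 5 sacral, 3-5 coccygeal) can be tallied in a couple of lines. The snippet below, in Python, is purely illustrative and simply confirms the stated total of 32-34 vertebrae; the region names are those used in the passage.

```python
# Illustrative tally of the vertebral counts quoted above: 7 cervical,
# 12 thoracic, 5 lumbar, 5 sacral and 3-5 coccygeal vertebrae.
REGIONS = {"cervical": 7, "thoracic": 12, "lumbar": 5, "sacral": 5}
COCCYGEAL_MIN, COCCYGEAL_MAX = 3, 5  # the coccyx has 3-5 vertebrae

fixed = sum(REGIONS.values())  # 29
print(f"total vertebrae: {fixed + COCCYGEAL_MIN}-{fixed + COCCYGEAL_MAX}")
# -> total vertebrae: 32-34
```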
http://anthropotomy.com/skeleton-and-bones-connection/spine/the-structure-and-shape-of-the-vertebrae
Axis (n.) Also used of the body only of the vertebra, which is prolonged anteriorly within the foramen of the first vertebra or atlas, so as to form the odontoid process or peg which serves as a pivot for the atlas and head to turn upon.
http://www.spellcheck.co.uk/dictionary/display.php?word=Axis&action=view&id=11527
These regions are called the cervical spine, thoracic spine, lumbar spine, sacrum and coccyx. There are seven cervical vertebrae, twelve thoracic vertebrae and five lumbar vertebrae. The number of vertebrae in a given region can vary slightly between individuals, but the overall count remains the same. The vertebral arch is posterior, meaning it faces the back of a person. Together, the vertebral body and arch enclose the vertebral foramen, which contains the spinal cord. Because the spinal cord ends in the lumbar spine, and the sacrum and coccyx are fused, they do not contain a central foramen. The second cervical vertebra, C2, is also known as the "axis" because it allows the skull and atlas to rotate to the left and right. Thoracic: The 12 vertebrae in the chest region form the spine's thoracic region. Thoracic vertebrae are larger and stronger than cervical vertebrae but are much less flexible.
https://graphdiagram.com/spinal-vertebrae-chart/
This is one of those cases that at first reading seems inherently unlikely – but, bizarre as it sounds, has a perfectly rational medical explanation. It took place in the 1830s but was only reported in any detail three-quarters of a century later. This article was contributed to the Buffalo Medical Journal by Dr Roswell Park, the founder of the world’s first institution devoted solely to cancer research: There came into my possession some twenty years ago, perhaps longer, the subjoined statements regarding the nature of a very unusual accident, with still rarer sequels, which befell Dr. James P. White, one of the founders of the Buffalo General Hospital, during the year 1837. James Platt White was an influential gynaecologist, a founding professor of the University of Buffalo and a prominent member of Buffalo society in the mid-nineteenth century. He was the first American medic to use a live birth as a teaching aid in the classroom. In December of that year something happened to the stage-coach in which he was riding, near Batavia, and he was violently thrown, and in such a way as to seriously injure his head and neck. I have not been able to learn any of the details either of the event or of his subsequent symptoms. So all we have so far is that Dr White injured his head and neck in a stagecoach accident. Of the next six weeks of his life, nothing is known. But after that something truly extraordinary happened to him: he coughed up part of his own spine. We know this from a short statement published in the Medical News in November 1886 by one Joseph Pancoast. It was written as a certificate confirming the authenticity of a highly unusual pathological specimen: “A front segment of the atlas vertebra, a little more than an inch on the superior margin, a little less below, with the facette which received the odontoid process.” The atlas vertebra, known today as C1, is the topmost bone of the spine. It is named after Atlas, the Titan who in Greek mythology supported the sky on his shoulders. It’s a structure of crucial importance, protecting part of the brain stem. Its mobility also allows us to turn our heads and nod. The odontoid process or peg is a protuberance from C2, the second vertebra of the neck. The ‘facette’ (now usually spelled facet) is the joint between the two vertebrae. It is probable that the transverse ligament retained its hold on the two extremities of the remaining fragment of the atlas, thus protecting the spinal marrow from injury. So this lump of bone was not the entire C1 vertebra – just a large chunk of it. Dr White retained just enough of the bone to protect a critical part of his spinal cord from potentially fatal injury. This bone in possession of Professor Pattison I repeatedly saw and carefully examined; he exhibited it to his class, and it was mislaid or lost. What a shame! This would have been quite an artefact. The bone was in our possession in 1838-39-40, or thereabouts. I then understood and believed (since confirmed by conversation with Professor White) that it came from his throat, coming out through the mouth as a consequence of ulceration, the result of an accident while riding in a stagecoach on the morning of December 17th, 1837. The bone was discharged at the expiration of forty-five days after receipt of the injury. If there was ‘ulceration’ at the back of the throat it is likely that Dr White was in considerable discomfort. 
There are very few comparable cases on record, but in all of them the patient had great difficulty eating or drinking, was in severe pain and bedbound. But can you imagine what it must have been like suddenly ‘discharging’ a large piece of your spine through your mouth? Of his condition during the forty-five days previous to the extrusion of the fragment there is no account, neither is there of the time elapsing before his restoration to his usual activity; but inasmuch as he died in 1881, having passed the subsequent part of his life in a most active professional career, it is legitimate to conclude that he suffered little, if at all, from the consequences of his injury. In 2005 this case prompted an article by an eminent orthopaedic surgeon working at White’s old hospital in Buffalo, Eugene Mindell. After considering all the available evidence of White’s injury, Mindell concluded that he had suffered an injury known as a Jefferson fracture, in which the atlas vertebra is shattered by a sharp impact. A few fragments of bone burst through the wall of the pharynx, causing an open wound which then caused an infection of the exposed portion of vertebra. Eventually the infection had caused necrosis, when the dead portion of bone (known technically as a sequestrum) had come free and been coughed up (yuck). Finally, scar tissue had formed (or the two adjacent vertebrae C1 and C2 fused together) and the wound healed. It’s pretty amazing that Dr White was so little affected by this accident that he was able to return to work and live a normal life for more than thirty years afterwards. An ‘interesting and remarkable case’ indeed.
http://www.thomas-morris.uk/interesting-remarkable-accident/
Definition of Proatlas: Pro*at"las, n. [Pref. pro- + atlas.] (Anat.) A vertebral rudiment in front of the atlas in some reptiles. Mentions of Proatlas from Wikipedia: - which is continuous with that of the maxilla, which follows behind. The proatlas is a small paired bone sitting in between the neural arch of the... - SS.192-212. Bystrow, A. Assimilation des Atlas und Manifestation des Proatlas. Zeitschrift für Anatomie und Entwicklungsgeschichte (Zeitschrift für die... - largest teeth are found at the corners of the skull. Anatosuchus has a proatlas vertebra, eight cervical vertebrae and probably 16 dorsal vertebrae, though... - skull with lower jaws and hyoid apparatus, six cervical vertebrae with proatlas, anterior part of interclavicle, partial right clavicle, right posterior... - exoccipitals, dorsally and laterally, which mark the areas of articulation of the proatlas. The parietals form a parietal crest, in contrast with the slightly broader... - solely from the holotype MCT 1730-R, a nearly complete skull and lower jaw, proatlas, an intercentrum, the axis and the third cervical vertebra, housed at the... - Hyposaurus, focusing mostly on the vertebrae. From partial skeletons a proatlas, atlas, axis, a third to ninth cervical vertebrae, and at least 16 dorsal... - small platforms which may have articulated with neck bones such as the proatlas, a rudimentary vertebra in front of the atlas bone. The 2018 description... - teeth from the upper jaw, thirty-six loose teeth from the lower jaw, a proatlas, a centrum of a cervical vertebra, a neck rib, four dorsal vertebrae, thirty-six... - sauropodomorph. There are ten cervical vertebrae in the neck of Xingxiulong. The proatlas, an atrophied vertebra positioned in front of the atlas, is bounded in...
https://www.wordaz.com/Proatlas.html
The Atlas vertebra, also called the C1 vertebra, is the first vertebra in the spine. It is the one that holds up the head and is named for the Greek figure who supported the world on his shoulders. The word "orthogonal" means perpendicular, or 90 degrees. This technique focuses on correcting the position of the Atlas vertebra in order that the head can sit on a level foundation, creating 90 degree angles between the Atlas and the skull. Specific x-rays are taken and analyzed to diagnose the position of the Atlas, and a special adjusting table is used to correct the atlas. It utilizes a compression wave force at a specific 3-D vector to shift the position of the Atlas, which is imperceptible during the adjustment. Correcting the Atlas can cause changes to any area of the body because of its relationship to the brain and all of the spinal nerves as well as several other important structures. To locate an Atlas Orthogonal chiropractor in another part of the country or world, visit www.globalao.com > Patients Enter Here > Find a Doctor. Click on the state or country in which you wish to search.
https://www.chiroblueheron.com/atlas-orthogonal-blue-heron-chiropr
Exploitation of children and the abuses they face. WHO & HOW It affects children the most, and it's important to address this injustice because children are the most vulnerable group and getting educated at a very young age is important. WHY Their own families and people known to them, because of the curse of poverty. ROOT CAUSES Poverty, lack of education and lack of awareness are a few of the root causes. COMPASSIONATE INSIGHT To overcome the injustice of the exploitation of children in the form of child labour and the abuses they face, I will address the injustice by making sure to provide education to all the children in the community. GOALS Provide education to all the children and give them proper care and attention against abuses. KEY STEPS Reach out to the officials. Reach out to the community leaders. Meet the families, then meet the children. Talk to a counsellor. And finally get proper education for the children. NAME OF TEAM: We Don't Deserve This. Save Us !!
https://www.thesocialcaptains.org/post/we-don-t-deserve-this-save-us-the-incubated-project
In retrospection of the bleak events of the previous months surrounding this lingering and yet unresolved pandemic, our response as a nation may be described in two ways. For a good number of us, the challenge has been met with heroic and creative initiatives, working with whatever is at hand. However, for the rest, the challenge is a seemingly insurmountable burden, with our efforts being stifled by constraints in social movement and by the severe inadequacy of resources. Our response – and the difficulties that came along with it – revealed how we sorely lacked the means to actualize effective mitigation measures. It will be remembered that in the critical first few weeks of this Covid war, it became vital for us to test suspected infection cases in order to isolate those who may potentially spread the virus. It also became vital for us to have adequate medical-treatment capacity for those confirmed to be infected. Arresting the plague therefore depended primarily on our capability to quickly enact preventive or diagnostic actions; and corrective or curative interventions. But the limits of our resources eroded our chances of testing as fast as we should; and of preparing as many beds, equipment and medical personnel as needed. The lack of money also had serious impacts on those of us who tried their best not to get sick. The constraints in movement were admittedly necessary to stem the spread, but not too many of us can afford staying long at home without working. The Philippines is a country where a great majority of people are still trying to live on a day-by-day basis: almost everyone has to work in order to earn just enough to provide what one’s family will need in a 24-hour period. Our poverty cannot and will not give us the luxury to recollect and be rejuvenated for a few months, or even a few days. Even our own government is poor. At this moment, by its own admission, it has already used up all the funds it can give for those who were unexpectedly unemployed by the quarantine; and by its own blunt assessment of our gloomy situation, it is prepared to gamble on a rise in Covid cases in favor of an opening of the economy, if only to neutralize and alleviate the repercussions of our impoverishment. Our poverty is a curse. Our constant struggle for material needs has slowly extinguished our quest for life’s deeper meaning and purposes – we simply don’t have enough time for it. But in such a time when the entire world is fighting and competing in a game of “survival of the fittest country”, then the poverty of a poor country has become a death-sentence. When the Titanic hit the iceberg and there were only enough lifeboats for the rich, then the poor were doomed. How did we get to be so poor? How did we come to this perilous historical precipice in which even an invisible virus can actually bring about our total downfall, only because of our hopeless destitution? The Church in its social teachings has taught on the ramifications of the past that created the ethical dilemma that poor countries are now faced with. The Philippines belongs to an economically distinct segment of the community of nations known as the “Global South” – or what used to be called as the “Third World”. The South refers to the former colonies which today are sovereign states that have either won or have been granted their geopolitical independence from their colonizers within the latter half of the twentieth century; the Philippines is one such country who exercised self-government only in 1946. 
In need of economic stimulation – and burdened as well by a lack in technological infrastructure, by fragile or flawed democracies, and by inflexible and corrupt bureaucracies oftentimes controlled by oligarchies of the local elite – they turned or have been induced to turn to their much more affluent previous colonial masters, also known as the developed “Global North” for assistance in catalyzing industrial growth. The North – through the international institutions and large transnational corporations established in these countries – are investing heavily in the undeveloped South under the pretext of “aiding their development” by sourcing the latter’s resources as inexpensive raw materials and labor to efficiently create products and services at significantly higher profit margins. Moreover, the North – supported in the South by collaboration with the local oligarchies and by protection of the governments through favorable legislation – are pushing for the continued dependence of the South on their products and services under the guise of “globalization” by aggressively promoting the need for their outputs’ compulsory industrial use, or by subtly marketing their social structures and culture characterized by excessive consumption and ubiquitous materialism. The historical irony of a superficial geopolitical independence resulting in an unfortunate and deeply-ingrained socio-economic dependence – also termed as a “new colonialism” or neocolonialism – has caused an imbalance of trade and a widening disparity of wealth and advantages between the rich “former colonists” and the poor “former colonies”. The South thus caught in a vicious cycle of systemic exploitation and ever deepening impoverishment are not only economically disadvantaged, but also politically disadvantaged in the arena of participative decision-making on key international issues. Poverty in the South has never been dramatically overcome, consequently increasing vulnerability, stifling opportunities and causing serious uncertainties in the employment of its workforces, and interestingly driving many of them to look for brighter futures in the North. This model of causation which provides clear yet often-disputed antecedents for the global problem of inequality between countries, forms the basis for the Church’s consistent contention that the effects of such inequality in the disadvantaged South, must be attributed not only to the actions of its privileged few but also to the actions and complicity of the advantaged North. According to the proceedings of the 1980 conference of the Ecumenical Association of Third World Theologians (EATWOT), poverty is not a spontaneous occurrence and that the injustice that causes it, cannot simply be dismissed as the inevitable “side-effect” of history or of a system of geopolitical relations for which nobody can be blamed. Pope Francis warns in his 2013 apostolic exhortation Evangelii Gaudium: The need to resolve the structural causes of poverty cannot be delayed, not only for the pragmatic reason of its urgency for the good order of society, but because society needs to be cured of a sickness which is weakening and frustrating it, and which can only lead to new crises. Welfare projects, which meet certain urgent needs, should be considered merely temporary responses. 
As long as the problems of the poor are not radically resolved by rejecting the absolute autonomy of markets and financial speculation and by attacking the structural causes of inequality, no solution will be found for the world’s problems or, for that matter, to any problems. Inequality is the root of social ills. (Evangelii Gaudium 202) Poverty has been and has always been inflicted upon those who have the least resistance, and that those who have benefitted from this exploitation, are seriously challenged to take responsibility and to cooperate if not initiate the efforts toward structural change. Brother Jess Matias is a professed brother of the Secular Franciscan Order. He serves as minister of the St. Pio of Pietrelcina Fraternity at St. Francis of Assisi Parish in Mandaluyong City, coordinator of the Padre Pio Prayer Groups of the Capuchins in the Philippines and prison counselor and catechist for the Bureau of Jail Management and Penology. The views expressed in this article are the opinions of the author and do not necessarily reflect the editorial stance of LiCAS.news.
https://philippines.licas.news/2020/09/15/a-vicious-cycle-of-poverty-and-suffering/
In Northern Uganda, prolonged poverty and the impact of civil conflict continues to affect young people, forcing them to survive in a subsistence environment with little to no prospect of gaining employment. Latest figures from the Government of Uganda show that the incidence of poverty remains highest in the Northern region, with 43.7% living below the Poverty line compared to 4.7% in the Central region. The NGO Chance for Childhood has been documenting the links between poverty and the likelihood of coming into conflict with the law in the region since 2013, as well as looking at opportunities to promote restorative justice arrangements that address the root causes of committing petty crime. In 2015, Chance for Childhood launched its four-year Right2Change initiative to help Children in Conflict with the Law (CICL) access alternatives to detention, and pilots a model of community-based structured diversion. Over time, it has become apparent that girls can face different reasons for coming into conflict with the law, and different vulnerabilities have a greater impact on girls than on their boy peers in this context. Girls also experience the criminal and community justice systems differently. Girls make up almost 10% of the population of CICL in Uganda. However, the criminal justice system in Uganda is also used for the ‘safe custody’ of girls and girls may be detained simply because they are victims of crime, for example of forced marriages, child trafficking and commercial sexual exploitation. Other forms of discrimination encountered by girls in the criminal justice system include blaming the victim for her own rape, or being sexually assaulted by the Police when reporting a crime. International legal frameworks demand that juvenile justice systems should be directed at rehabilitation and reintegration. These legal frameworks also place emphasis on the unique risks and vulnerabilities faced by girls and the significance of diversion programmes, so that children are not processed through (and stigamtised by) the criminal justice system. A criminal response to welfare needs A key finding affecting CICL of both sexes is the misallocation of state resources, which are used to criminalise instead of protect vulnerable children. If funding and services were instead used in a more humane and just way, then the rights violations children suffer, which can lead to their social exclusion and vulnerability to criminality, could be redressed and a child’s pathway into coming into conflict with the law could be arrested. However, the present situation means that the needs of a child in conflict with the law remain unmet, and the child is instead stigmatised as a criminal, and blamed for the wider failure of society and the government, with all its consequences. The UN Committee on the Right of the Child notes in General Comment No. 10 (2007) that “criminal codes contain provisions criminalizing behavioural problems of children, such as vagrancy, truancy, runaways and other acts, which often are the result of psychological or socio-economic problems. It is particularly a matter of concern that girls and street children are often victims of this criminalization”. The criminalisation of children and young people with welfare needs and the reliance on imprisonment is unable to address the causes of offending. The tragedy is one for the individual child as well as society. 
For example, in England, better cooperation between courts and Youth Offending Teams and increased diversion from courts, was found capable of reducing imprisonment and delivering total savings of over £60 million. Many studies worldwide cite the harsher treatment of women and girls in the criminal justice system due to contravening gender norms. This includes for non-violent crime such as status offenses and acts such as theft and prostitution. Status offences (such as truancy, running away from home, violating curfew laws or possessing alcohol or tobacco) are often used against girls to control their behaviour. Meanwhile, girls who have turned to sex work as a survival strategy or are victims of trafficking for sexual exploitation are prosecuted, instead of the adults who are exploiting them. There are calls for status offences to be identified as welfare issues, for these offences to be abolished and the conduct addressed through child protection mechanisms. Risk factors Studies on the background characteristics of women in the justice system in Uganda could not be found. However, a history of trauma is almost universal among incarcerated adults in the USA (over 85%). Posttraumatic stress disorder (PTSD) is four to 10 times more prevalent among incarcerated women than in community samples. Another study in the USA found that crimes that women are convicted for “are closely related to the structural situation of women in this society. The characteristics are histories of abuse, of being responsible for children, and being limited to low skill/low income jobs. Incarcerated women grew up in abusive families to a much higher extent than men and a majority of them had experienced domestic violence… as single parents with usually low skills it is for most difficult or impossible to support their children”. Girls may be susceptible to peer pressure in different ways to boys. Girls experience more emotional stress from problem relationships because they are socialised to focus on relationships. This is particularly true in adolescence when relationship conflict can result in feelings of rejection and depression in girls. The resulting insecurity “can lead girls to associate with antisocial peers and romantic partners, increasing their vulnerability to delinquent behaviours”. Poverty For both boys and girls in Uganda, theft and other crimes can be a response to poverty. One study revealed that the majority of offences committed by children and young people in Uganda are related to their very survival and many had been forced to steal in order to eat. Young people aged 18-30 account for 64% of the total unemployed population in Uganda. The unemployment rate is higher among the better educated and among young women. However, even where youth are employed, 60% of paid young employees take home less than the average monthly wages/salaries. The disparity in median monthly wages by gender is significant at Shs 66,000 (USD $20) for females and Shs 132,000 (USD $40) for males. Criminalisation as a result of poverty or other factors further prohibits livelihood options and children are caught in a vicious cycle. Young people in Uganda said they are denied jobs after being released from prison because they lack skills and are burdened by social stigma. In addition to being a push factor in causing criminality, poverty can also negatively affect children’s experiences from within the criminal justice system. 
Where police corruption is prevalent and bribes are needed, for example for early release from detention, juveniles may be unable to pay and secure their freedom. A lack of livelihood options mean many young women resort to sex work and subsequently become more vulnerable to HIV, other sexually transmitted diseases, to abuse, and to coming into contact with the law. A major issue affecting young women and financial sustainability is access to land. Lack of land ownership and economic insecurity increases the dependence on and subordination to men, and makes women and girls vulnerable to the high rates of sexual and gender-based violence (SGBV) and to other abuses such as child marriage, which in turn impact a girl’s right to education and her future prospects. Interrelated vulnerabilities Various vulnerabilities cause children to abandon their childhood and seek coping strategies to ensure their survival. Or children are forced into deplorable situations of abuse and neglect, for example by being trafficked for sexual exploitation. The common root of this lack of agency and choice in a child’s life and the subsequent violation of their human rights is vulnerability caused by a range of factors – many of which coexist and are interrelated. For example substance abuse to cope with abuse and hunger; children in street situations linked to poverty, witchcraft accusations or conflict at home; SGBV both resulting from child marriage and causing it, when a girl feels that it is her only option for escape from an untenable home life. While girls in Uganda are generally vulnerable to human rights abuses due to gender inequality, specific groups of girls are more greatly affected. These include girls with disabilities, girls in conflict affected areas, girls who are out of school, girls subject to child marriage, child mothers, orphans, girls subject to defilement and SGBV, girls affected by HIV/AIDS, girls who have been trafficked for exploitation, lesbian, gay, bisexual, transgender and/or intersex (LGBTI) children, and girls in street situations. Many of these vulnerabilities co-exist for an individual child, and vastly diminish the range of available coping strategies and livelihood options. Such girls are therefore pushed further along a path which forces them into conflict with the law (such as through sex work). The majority of children in case studies in a global Save the Children study on CICL had dropped out of school, either to work to support their families or themselves, or because their parents were unable to pay the costs of their education. A lack of education may be the result of other vulnerabilities/abuses (e.g. child marriage, children in street situations) and may lead to further forms of vulnerability/abuse (illiteracy, lack of access to higher education). Orphans and vulnerable children (OVCs) in Uganda are more likely to engage in child labour.. However, many such children end up resorting to theft and street begging. A child’s coping strategies and livelihood options on the street are likely to result in coming into conflict with the law as a child is forced to “choose” between stealing and going hungry; having sex with a policeman or being arrested. Children and girls in detention Girls in custody are more likely to self-harm than boys. 
International guidelines note that girls placed in an institution deserve special attention as to their personal needs, and that they are especially vulnerable due to their small numbers as well as their gender, including to abuse from officials such as the police. According to the Uganda Bureau of Statistics, of the 42,000 individuals in prison in 2014, 23,000 are on remand awaiting trial (55%). Women and girls may be disproportionately given pre-trail detention as most female offenders have a low income and may find it hard to provide a financial guarantee, or to evidence secure employment or secure accommodation. Female pre-trial detainees are more likely than their male counterparts to be held with convicted prisoners because there are fewer facilities for detaining women. Incarceration of children is proven to be ineffective. In the UK, 72% of children released from custody go on to re-offend within one year. In France, recidivism figures rise to 90% for children incarcerated a second time. Girls’ treatment within the community justice system UN agencies note a ”justice vacuum” in Uganda caused by inaccessibility of the formal justice system in much of the country. Informal justice mechanisms have filled a large space in this vacuum and can therefore be capitalised on as existing systems for encouraging diversion and other community responses to juvenile offending. However, informal justice systems, especially custom and religious-based, are likely to uphold rather than to challenge the harmful norms and values of the society around them, including attitudes and patterns of discrimination. It is important that rape and sexual violence should not be dealt with by community justice systems, but referred to ordinary courts. Girls may receive less support in the community or family, and be less willing to ask for help. In the UK, girls are less likely than boys to have the support of their family, leaving them isolated or dependant on the support of the local authority, their “corporate parent”. In addition, due to a lack of sufficient facilities for girls, their placement a significant distance from a child’s home area decreases the chances of maintaining family and community links. Responses A common theme for girls in conflict with the law, is their limited choices and survival strategies. Expanding the choices available to a child is therefore “the next logical step”. As there may be fewer choices available to girls than boys, particular efforts should be made to promote gender equality in programming. This may include supporting a girl’s choice to remain in education during and after a teenage pregnancy, or for the perpetrator of sexual violence in the home to be forced to leave through the application of the rule of law, instead of the girl child running away or becoming dependent on another abusive relationship as a way out. A key point is that girls and boys need to be empowered to make different and expanded choices made available to them, rather than having these choices made for them. Meanwhile research on the protective factors associated with females and offending, identifies factors such as high self-esteem, assertiveness, healthy lifestyles, supportive and enduring relationships with families and peers, access to services, positive female role models, and alternative education provision. Particularly important for girls is recognising the significance of relational ties for girls’ development. 
The fostering of positive relationships, including with family members, peers, romantic partners, therapists, and juvenile justice professionals, can play a significant role in helping girls heal from trauma and resist delinquency. Research on some of the most vulnerable young people in Uganda in conflict affected areas urges that development efforts should acknowledge their significant potential and seek to create substantive roles for youth to engage in peace-building and civic activities, allowing them to build confidence, leadership skills, and empowerment. Examples of best practice in diversion include the Uganda Fit Persons model which provides community support for children who have either been diverted, given a community sentence or reintegrated into their families and communities. Fit Persons are trained and respected individuals who support and follow the child in their reintegration process. In cases where families are unable or unwilling to be a guarantor for the child, the Fit Person is able to step in and even provide temporary foster care while searching for longer-term care options. The model recognises the significance of care and protection issues: that diversion and community-based alternatives need to be provided to children facing care issues; and that addressing care and protection issues is integral to solutions for children who have come into conflict with the law. An emphasis on diversion is prevalent within the juvenile justice sector worldwide, and awareness is raising on the issue in Uganda, with the drafting of National Diversion Guidelines for Juvenile Justice. However, diversionary measures can only be successful when part of a holistic system that addresses all the needs and vulnerabilities of the girl (and boy) child including a continuum of care and protection interventions that address multiple challenges and risks before a child comes into conflict with the law, and expanding the choices available to the child.
https://libertyandhumanity.com/themes/womens-rights/uganda-new-insights-to-support-girls-in-conflict-with-the-law/
New research on human trafficking finds that "prevention activities are still very limited and those that exist are neither co-ordinated nor properly evaluated." Barbara Limanowska, a consultant for the Office of the High Commissioner for Human Rights in Bosnia and Herzegovina, argues in her latest report that it is time for all involved in anti-trafficking measures to seriously examine the practices implemented and to focus on addressing the root causes of trafficking in an empowering way. Barbara's research findings are elucidated in a comprehensive report entitled "Trafficking in Human Beings in South Eastern Europe: Focus on Prevention". The report analyses current trends and highlights the challenges facing anti-trafficking strategies in SEE, concluding with the recommendation that success depends on the adoption of prevention as the new approach to trafficking.

AWID: The latest SEE Report highlights prevention as a core approach, including research on the demand side of trafficking. In your opinion, why do you think there has been a shortfall in demand-focused anti-trafficking strategies to date? How can we move forward on this important issue?

BL: The meaning of research on the demand side of trafficking depends very much on the approach to trafficking as such. There is a group of professionals, researchers and activists who tend to equate trafficking with prostitution. They understand demand for trafficking simply as the demand for prostitution and therefore focus their research and action exclusively on the clients of prostitutes. However, the definition of trafficking is broader and describes trafficking as the recruitment or transfer of persons (including men and children), by means of the threat or use of force or other forms of coercion, for the purpose of exploitation. Exploitation means here not only the exploitation of the prostitution of others or other forms of sexual exploitation, but also forced labour or services, slavery or practices similar to slavery, servitude or the removal of organs. Therefore, research on the demand for the services and labour of trafficked persons should also be seen in the broader context of labour exploitation and should encompass all kinds of research on demand for cheap, unprotected labour, especially the labour of migrant workers in the countries of destination. I do understand that some researchers might be more interested in researching the "exciting" issue of clients of prostitutes than the complicated and politically sensitive problem of the profit gained by destination countries in general, and by citizens of those countries in particular - for example, farmers or construction companies employing illegal migrants who are victims of trafficking. However, I do not understand the obsession of some governments with the subject of prostitution (especially Sweden and the US), combined with their reluctance to look at other forms of trafficking-related demand. It seems that moralistic debates are more convenient, and cheaper, for governments than a serious political discussion about the problem of international trafficking in the context of migration and migrant labour.

AWID: You differentiate between "repressive" and "empowering" anti-trafficking activities in the report. What are the main differences between them, and do you think that both strategies should run parallel, or should there be a greater focus on empowering strategies?
BL: In the report I state that the term "repressive strategies" relates to activities which focus on the suppression of negative (or perceived as negative) phenomena related to trafficking, such as illegal migration, labour migration, illegal and forced labour, prostitution, child labour or organized crime. Such strategies are designed to stop illegal or undesirable activities and are mainly enacted by law enforcement agencies that implement restrictive state policies and punish those who are found guilty of crimes related to trafficking. They are fully legitimate and necessary for the purpose of protecting state security but, at the same time, such actions often run counter to the protection of victims of trafficking. Moreover, actions against the state (illegal border crossing, smuggling, etc.) are often understood and presented as crimes so closely related to trafficking that these repressive strategies come to be referred to as anti-trafficking strategies, which are supposed to benefit the victims. "Empowering strategies", on the other hand, focus on enabling people, especially potential victims of trafficking, to protect themselves from trafficking by addressing the root causes of the crime. Such strategies might include measures to overcome poverty, addressing discrimination and marginalization in the process of seeking employment and/or labour migration, as well as measures to allow people to make informed decisions and choices that might help them to overcome problems and prevent trafficking. Activities may include supporting and empowering high-risk groups; providing educational activities for vulnerable young people to develop necessary life skills; adjusting education to the needs of the labour market; protecting the rights of migrant workers (including the distribution of information about safe/legal migration and supporting migrants' control over the process of migration); formalizing informal sectors in the countries of destination; addressing the issue of demand and providing information about labour laws in the countries of destination; and protecting, supporting and empowering victims of trafficking, including social inclusion and strengthening the protective environment for child victims of trafficking. For a number of years, it has been more common for State agencies and some international organizations to use repressive strategies, rarely incorporating empowering strategies into their actions. Therefore, the strategies used were, in the first place, of a legislative and prosecutorial nature, while long-term prevention and protection of the rights of the victims were seen as second, or distant, priorities. Empowering strategies have tended to be used by human rights organizations and values-based NGOs, as well as a limited number of State agencies. Organizations that are using empowerment strategies to prevent trafficking have been advocating for governments to adopt a human rights approach and to actively engage in meaningful dialogue with civil society actors. They have been stressing the need for inter-Ministerial and inter-agency cooperation and have been trying to ensure the presence of a human rights perspective in the law enforcement approach, as well as the inclusion of preventive measures in anti-trafficking strategies. The experience of the NGOs shows that strategies focusing only on repressive measures are not victim-centered and often result in further victimization of trafficked persons.
In order for anti-trafficking strategies to be effective and to protect the victims, there has to be a general understanding and acceptance of the empowerment approach to preventing trafficking that is firmly based on human rights principles.

AWID: Human trafficking has attracted a large number of donor agencies. What are some of the problems you have noted regarding the work of donor organizations in the region, and what should be their priority?

BL: There are several issues that have to be raised in relation to the role of the donor agencies. First of all, anti-trafficking activities are supported regardless of their effectiveness and the costs of the programs. Trafficking is the only issue I know of that donors are willing to finance without expecting any concrete results. Anti-trafficking programs are not properly monitored and evaluated; nobody is checking whether the programs are really necessary, whether they fit into a broader country or regional strategy, and whether they duplicate already existing projects. After many years of anti-trafficking work in the SEE region, there is still no knowledge about the best approaches and the effectiveness of the methods used. In some situations it seems that trafficking is used as an excuse to shift attention and use the funds of development or social change organizations to support anti-migration activities in the countries of origin. It happens quite often that resources that should go to civil society go instead to law enforcement agencies to build their capacity (databases of migrants, technical capacity, etc.) under the banner of anti-trafficking programs. Another problem is the lack of involvement of local NGOs in anti-trafficking work. Anti-trafficking programs are usually planned for a short time and do not include any capacity-building component. Funds are donated to the big international organizations, which subcontract local NGOs to implement concrete projects. Such a policy very often leads to a negative selection of local NGOs - there are fewer NGOs focused on protecting the rights of victims involved in anti-trafficking work now than a couple of years ago. Those that still work are less interested in human rights principles and women's rights, and more in good cooperation with international and governmental agencies. NGOs, due to the attitude of the international organizations, see anti-trafficking work more as an income-generating activity than as the implementation of a human rights-based mission.

AWID: Stronger political commitment from SEE governments hasn't necessarily translated into more effective anti-trafficking strategies. What are the main challenges and how can they be overcome?

BL: While the institutional response to trafficking in SEE is well developed, the work of the institutions involved is not very effective and not well coordinated. The main problem is the lack of clarity about the roles of existing structures and the unclear division of responsibilities among the institutions taking part in anti-trafficking work. Another serious problem is the lack of flexibility of established structures, which are not able to react to changes in trafficking trends and the needs of the changing anti-trafficking response. Lack of information about current trends in trafficking among the anti-trafficking institutions, lack of a proper identification system adjusted to the new trends in trafficking, and lack of a referral system for local victims of trafficking are the biggest problems of the existing system.
At the same time it seems that the responsible institutions are not fully aware of those problems. They seem not to be concerned about the lack of reliable information about current trafficking trends and the lack of knowledge about the changing scope of trafficking in the region. The structures are static and viewed as institutions established "once and for all" rather than flexible instruments that should be monitored, evaluated and adjusted as the trafficking situation changes. There is no self-regulatory mechanism included in the anti-trafficking system that could help in the process of adjustment and re-structuring, when necessary. For example, while almost all institutions engaged in anti-trafficking work acknowledge that the system of assistance to trafficked persons does not meet their needs and that many victims refuse assistance because of that, there are no plans to change this situation. The victims' perspective is not included in the anti-trafficking response. Regardless of the low effectiveness of this method, the prosecutor's perspective, with its focus on the role of victims as witnesses and on offering them deals depending on their usefulness for prosecution, is predominant in the assistance programs. There is a need to change the approach to combating trafficking in the SEE region: to recognize the new situation and develop a comprehensive human rights-based system for counter-trafficking activities (including prevention, protection and prosecution) relying on government-owned, flexible structures; to acknowledge the changing modalities of trafficking, the fact that current assessments are based on limited information, and the need to improve information gathering, research and dissemination systems; and to acknowledge the need to set up standards and procedures for anti-trafficking work, including the monitoring and evaluation of implemented programs and the accountability of the institutions involved.

AWID: An important undercurrent emerging from your research is the strong link between poverty, gender equality, development and trafficking. How do you think we can facilitate better cooperation between institutions so as to properly address the structural causes of human trafficking?

BL: In general, trafficking is still perceived and treated as an isolated social and criminal phenomenon that can be addressed separately from other problems. Although we know about the root causes of trafficking - poverty, unemployment, discrimination, violence in the family, and demand in the countries of destination - and understand that socio-economic factors are strongly linked to vulnerability to trafficking, this knowledge has not yet been translated into policies and strategies. The issue of trafficking remains largely ignored in the Poverty Reduction Strategies developed in the region. Plans of Action on gender equality, child rights, social support or HIV/AIDS rarely mention trafficking and do not integrate actions against trafficking into their programs. In addition, international organizations tasked to deal with economic development and poverty reduction, such as the World Bank and UNDP, while addressing employment, discrimination and the prevention of violence, do not perceive vulnerability to trafficking as a special issue and have not included anti-trafficking prevention in their development programs in any systematic way. "Mainstreaming" of trafficking into the development and gender agenda has not yet begun.
Although there has already been some discussion about the social and economic situation of high-risk groups and the need to address the root causes of trafficking, including the consequences of economic transition, privatization, structural adjustment programs and the planned changes in social welfare systems, there seems to be a lack of understanding and interest on the part of the development agencies in including the issue of trafficking in their programs. While a gender impact assessment is a mandatory component of all World Bank programs, it does not touch on trafficking and has not brought about any adjustments to the poverty reduction strategies or World Bank programs in the region. A theoretical framework for addressing the root causes of trafficking, developed by human rights organizations, does exist. There are also some prevention programs in the region focused on addressing the root causes of trafficking, such as empowerment, re-schooling, employment and job skills development programs for vulnerable groups in countries of origin. There is a need for a broader debate on trafficking within the context of poverty reduction strategies, sustainable development, policies on the prevention of different forms of discrimination, social policy models and, last but not least, migration policies. Structural reforms must take into account the trend of relegating women from the public sphere, including the economy, and the high levels of unemployment among young people. They should be accompanied by social policy measures to support vulnerable groups. There is also a need to develop new types of income-generating activities for high-risk groups that could form alternatives to migration. Prevention of trafficking is not prominent enough in the plans of action against trafficking and is not coordinated with other action plans that affect the same groups - child protection, gender, HIV/AIDS prevention, etc. Not enough attention is paid to the relationship between various related social issues, such as education and child rights, or gender discrimination and inequality in the labour market. There is generally a lack of close co-operation and co-ordination between different institutions and different governmental action plans, to the detriment of trafficking prevention work. There is also no connection made between trafficking, labour markets and forced labour. The enforcement of minimum labour standards in the countries of destination would reduce the economic incentive to employ irregular migrants and to exploit trafficked persons, while a reduction of unemployment among high-risk groups in the countries of origin would reduce irregular migration and, thus, trafficking. Labour market-oriented anti-trafficking strategies do not target trafficking directly but remove the economic incentive, and could be very effective as a long-term prevention strategy. It must be stressed that without a stronger emphasis on prevention and the involvement in anti-trafficking work of institutions that are able to address the root causes of trafficking, a successful attack on trafficking is not possible.

Notes:
*Download a copy of the full report from: www.seerights.org
*Barbara Limanowska works as a consultant on the issue of trafficking in human beings for various international agencies.
In the framework of the SEE RIGHTs project, a joint initiative of OHCHR, UNICEF and OSCE-ODIHR, she has written three reports about trafficking in human beings in the Balkan region: Trafficking in Human Beings in South Eastern Europe: Current situation and responses to trafficking in human beings in Albania, Bosnia and Herzegovina, Bulgaria, Croatia, the Federal Republic of Yugoslavia, the Former Yugoslav Republic of Macedonia, Moldova and Romania (UNICEF, UNOHCHR, OSCE/ODIHR, 2002); Trafficking in Human Beings in South Eastern Europe: 2003 Update on situation and responses to trafficking in human beings (UNICEF, UNOHCHR, OSCE/ODIHR, 2003); and Trafficking in Human Beings in South Eastern Europe, 2004: Focus on Prevention (UNICEF, UNOHCHR, OSCE/ODIHR, 2005). Currently she works as a consultant on anti-trafficking for the Office of the High Commissioner for Human Rights in Bosnia and Herzegovina.
*Research for this report was carried out in Albania, Bosnia and Herzegovina (BiH), Bulgaria, Croatia, the former Yugoslav Republic of Macedonia (FYR Macedonia), Moldova, Romania, Serbia and Montenegro, and the UN Administered Province of Kosovo between January 2004 and March 2004.
See: http://www.uncjin.org/Documents/Conventions/conventions.html. The UN Protocol to Prevent, Suppress and Punish Trafficking in Persons, especially Women and Children, supplementing the United Nations Convention against Transnational Organized Crime, p.2.
Published in: Jones, Rochelle, "Prevention as the New Approach to Human Trafficking," AWID, Resource Net Friday File Issue 240, 19 August 2005.
http://www.stopvaw.org/Prevention_as_the_new_approach_to_human_trafficking
There is a popular saying that "children are the hope and future of a nation and their overall growth must be the supreme concern of every country". Yet millions of children aged between five and 17 in Nepal are deprived of their right to quality education, food and health services. They are employed in hazardous work. Families living below the poverty line are forced to send their kids to perilous work to sustain their households. Large numbers of children are still out of school. Many of them suffer from exploitation and end up on the streets. Child labour refers to any work that deprives children of their childhood and their right to education, health, safety and moral development. The practice of child labour has been condemned as a violation of children's basic rights and also for its negative impact on their mental and physical health. Even though the constitution says children cannot be employed in factories, mines and other dangerous workplaces, many children are still employed at brick kilns, in restaurants and in the public transportation sector. It really feels terrible when I find a child working as a helper in a public vehicle, calling for passengers. It is sad that these children are often treated unfairly. They are exposed to highly risky situations and a polluted environment, which causes serious health problems. They are always prone to accidents. Similarly, in many rural areas of Nepal, lots of children, particularly from indigenous communities, drop out of school to work. When they are supposed to go to school, they have to go to the jungle to collect firewood. They return in the evening carrying loads of fodder and firewood on their shoulders where they should be hanging their school bags. The root cause of child labour is extreme poverty. Inadequate education and a lack of awareness among society and parents are other reasons. Eradication of poverty, together with education and awareness, is a must if we are to eliminate child labour. Laws and policies related to child labour should not remain on paper only; they must ensure children's right to education, food and health. There remains a huge gap between the various commitments made to eliminate child labour and the efforts to translate them into effective action. As per its international commitment, Nepal aims to eliminate all forms of child labour by 2020. Time is running out.
https://thehimalayantimes.com/opinion/child-labour-problem
The latest report from the World Bank paints a depressing picture of poverty in India, with nearly five percent of the country's population living on less than US$1.90 per day. In this blog, we look at the causes of poverty in India, such as a lack of affordable health care and medical assistance, poor education standards, and malnutrition.

Poverty In India Caused By Corruption

Despite efforts from the Indian government to eradicate poverty, India's impoverished population continues to grow. Rising debt, along with an increase in crime and other significant issues, has led to a growing demand for economic change by the lower classes in order to alleviate extreme poverty. A person is defined as living in poverty if he or she lives in a household that is unable to afford basic food or shelter. In India, 65% of the population lives below the poverty line, and 20% live in extreme poverty. This has been caused by corruption among officials and bribery.

Poverty In India Caused By A Lack Of Good Jobs

Poverty in India is also caused by a lack of good jobs and low wages, which in turn cause hunger, disease, and malnutrition. We recognize the difference between developed and developing countries; however, a global trend is that people are becoming poorer. As mentioned earlier, of the 1.3 billion people in India, almost a quarter live in poverty. Due to a lack of jobs, resources, access to education, and opportunity, they are unable to break away from this cycle of poverty. Many families live in single-room dwellings, which are very close together. Poverty is the lack of an adequate standard of living, which threatens a person's health and well-being and leads to social exclusion. The concept of poverty can be applied to an individual, a family, or an entire country. Its causes are often related to factors including unemployment, discrimination, poor governance, and policies.

Poverty In India Caused By Weather/Climate Change

This term refers to a condition of deprivation, which includes having limited access to basic human needs, such as food and shelter. Significant numbers of people in India, up to 67% according to some estimates, are living in poverty. The main factors that contribute to this are economic inequality, underdeveloped infrastructure, and increasing catastrophes caused by climate change. At present, the biggest poverty issue in India is not the pace of GDP growth but climate change, which adversely affects crop yields. Cold-stunned crops like rabi chickpea (gram) in Haryana and south Punjab, rice in Assam and Bengal, groundnut (peanut) in Maharashtra, and Bt cotton or paddy are all examples of such sensitivity. Since Indian agriculture is highly vulnerable to weather shocks, any rise in temperature due to rapid climate change could have an adverse impact. The growing season is getting prolonged for many crops, and the time taken for harvesting will increase, both of which will have multiple effects on the food supply.

Poverty In India Caused By Social Injustice

The poverty status in India has been a significant concern. According to the World Bank, a staggering 32.7 percent of people in India live below the poverty level. Social injustice and political factors in the country are two significant causes of widespread poverty in India. Poverty is an economic situation in which people lack the resources to obtain the basic necessities of life. Most developing nations have been facing significant problems of poverty.
According to the World Bank, 1.3 billion people live on less than USD 1.25 a day, and nearly 2 billion people live on less than USD 2 a day. Poverty in India is mainly due to social injustice, gender inequality, and a lack of employment opportunities for the rural masses.

Poverty In India Caused By A Lack Of Food And Water

India has one of the highest rates of poverty in the world, with a quarter of its population living on less than $0.5 per day. Poverty in India is primarily rural and more prevalent among women and children. India's widespread poverty is caused mainly by the lack of food, clean water, proper clothing, and shelter. Poverty is the lack of elements considered essential by basic standards of living. It includes access to safe drinking water, adequate health care and shelter, nutritional food, clothing, and energy services. The most extreme form of poverty is starvation. More than 1 billion people, or nearly one-sixth of the developing world, live below India's official poverty line of $1 a day at purchasing power parity. The causes of poverty are hard to understand, but we all need to work together to control them.

Poverty In India Caused By A Lack Of Government Support

The average monthly income for an average Indian citizen is $30, which is far from enough to cover basic living expenses. There are many types of poverty in India: poverty of education, health and nutrition, congestion, unemployment, illiteracy, and lack of capital. However, all of these root causes boil down to the lack of government support. In rural areas, farmers struggle to earn enough to pay for their children's education. In urban areas, too many people are packed into small spaces, causing disease and illness to spread quickly. In India, there are still vast areas of poverty due to a lack of government support. Over the last 50 years, India has seen a rise in people living below the poverty line, with 43.5% of people below it, according to a 2011-2012 survey. The average monthly income was calculated as 100 rupees (US$2 in rural areas and US$3 in urban areas). A majority of poor households earn less than 1,500 rupees per month (the equivalent of US$30), which is not enough to provide for basic needs. The causes of poverty are important to know so that we can control them.

Poverty In India Caused By Inequality

The poor in India suffer from the consequences of a powerful undercurrent that threatens to drown them: inequality. The extremely wealthy and the poor face different economic situations because of the differences in their wealth. The rich have excess money while the poor are left with just enough to survive. Oppressed by this system, the poor in India try to break free of it any way they can. Yet oppression weighs down on them, making their position worse and worse with time. The country is currently facing issues stemming from the brutally ineffective system of caste-based discrimination and social hierarchy going as far back as 500 B.C., known as the Caste System. This ancient practice has left thousands upon thousands of people in a poverty-stricken community of nearly bottomless depths. Based on their birth, or "caste ranking," everyone in this community was expected to follow rigid rules to make sure they remained where they were assigned to be forever, resulting in a poverty-stricken community that hasn't seen any progress since its creation.

Poverty In India Caused By Lack Of Education

A devastating 60% of India is living in poverty.
The immediate cause is a lack of education and, in some cases, of help to enter the job market once young people have completed their studies. Each year more than 4 million children finish primary school but never go on to secondary school. Many drop out and soon fall into poverty, marrying young and having many children. Poverty in India is primarily a result of illiteracy. Many people living in poverty do not have the skills and education needed to earn a good living. The cycle of poverty this perpetuates can often be seen in families living in villages without proper shelter, clothing, and food.
https://soulandland.com/poverty/main-causes-of-poverty-in-india/
COVID-19 Restrictions in the US: Wage Vulnerability by Education, Race and Gender
From the Abstract: "We study the wage vulnerability to the stay-at-home orders and social distancing measures imposed to prevent COVID-19 [coronavirus disease 2019] contagion in the US by education, race, gender, and state. Under 2 months of lockdown plus 10 months of partial functioning we find that both wage inequality and poverty increase in the US for all social groups and states. For the whole country, we estimate an increase in inequality of 4.1 Gini points and of 9.7 percentage points for poverty, with uneven increases by race, gender, and education. The restrictions imposed to curb the spread of the pandemic produce a double process of divergence: both inequality within and between social groups increase, with education accounting for the largest part of the rise in inequality between groups. We also find that education level differences impact wage poverty risk more than differences by race or gender, making lower-educated groups the most vulnerable while graduates of any race and gender are similarly less exposed. When measuring mobility as the percentile rank change, most women with secondary education or higher move up, while most men without higher education suffer downward mobility. Our findings can inform public policy aiming to address the disparities in vulnerability to pandemic-related shocks across different socioeconomic groups."
https://www.hsdl.org/?abstract&did=853859
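The abstract above reports its results in Gini points, percentage-point changes in the poverty headcount, and percentile-rank mobility. For readers unfamiliar with those measures, here is a minimal, purely illustrative Python sketch of how they are typically computed from pre- and post-shock wage vectors; the wage distribution, loss factors, and poverty line below are invented for demonstration and are not the paper's data or method.

```python
import numpy as np

def gini(wages):
    """Gini coefficient of a wage vector: 0 = perfect equality, 1 = maximal inequality."""
    w = np.sort(np.asarray(wages, dtype=float))
    n = w.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * w) / (n * w.sum()) - (n + 1.0) / n

def poverty_rate(wages, line):
    """Headcount poverty rate: share of workers earning below the poverty line."""
    return float(np.mean(np.asarray(wages, dtype=float) < line))

def percentile_rank(wages):
    """Mid-rank percentile (0-100) of each worker within the wage distribution."""
    w = np.asarray(wages, dtype=float)
    order = w.argsort().argsort()            # 0-based rank of each worker
    return 100.0 * (order + 0.5) / w.size

# Invented example: pre-lockdown wages and an assumed, uneven wage shock.
rng = np.random.default_rng(42)
pre = rng.lognormal(mean=3.0, sigma=0.5, size=1_000)   # hypothetical weekly wages
post = pre * rng.uniform(0.4, 1.0, size=pre.size)      # hypothetical losses of 0-60%
poverty_line = 15.0                                    # hypothetical poverty threshold

gini_change = 100 * (gini(post) - gini(pre))
poverty_change = 100 * (poverty_rate(post, poverty_line) - poverty_rate(pre, poverty_line))
down = np.mean(percentile_rank(post) < percentile_rank(pre))

print(f"Inequality change : {gini_change:+.1f} Gini points")
print(f"Poverty change    : {poverty_change:+.1f} percentage points")
print(f"Downward mobility : {100 * down:.1f}% of workers fall in percentile rank")
```

On real survey microdata one would additionally apply sampling weights and compute these measures separately by education, race, and gender to reproduce the kind of group-level comparison the abstract describes.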
IRNA – Iran presented a statement at the high-level meeting of the General Assembly on the Appraisal of the United Nations Global Plan of Action to Combat Trafficking in Persons, held from September 27-28. Read out by an Iranian envoy to the United Nations, the statement also reiterated Tehran's resolve to take the necessary measures to counter the 'horrible crime'. Deputy Permanent Representative to the United Nations Eshaq Al-e-Habib further noted that Iran's parliament has also taken up the Law on Combating Human Trafficking for debate to devise a strong domestic legal regime to that effect. The full text of the statement is as follows:

Mr. President, I would like to commend the distinguished Permanent Representatives of Qatar and Belgium for facilitating negotiations on the Political Declaration. The Islamic Republic of Iran is committed to preventing and fighting any manifestation of human trafficking and reiterates its resolve to take all necessary measures to counter this horrible crime. To this end, the Law on Combating Human Trafficking was adopted by the parliament in 2004 and has since been revised once to fill gaps and strengthen the domestic legal regime. We continue our efforts for effective enforcement of the legislation, including through training of judicial and law enforcement departments. In the legal fight against trafficking in persons, it is essential to address all the interrelated root causes that make people vulnerable to trafficking. Millions of people, women and girls as well as young men and boys, have fallen prey to exploitation and trafficking due to poverty and unemployment. In the meantime, foreign interventions and armed conflicts have seriously aggravated their vulnerability to trafficking. Interventionist and destabilizing policies around the world, particularly in Africa and the Middle East, have served as a breeding ground for criminal networks to engage in trafficking of people who are in the most vulnerable situations. Trafficking in persons follows the principle of supply and demand. The supply side cannot be stopped as long as uncontained demand for trafficked forced labor, prostitution or the removal of organs exists. The complex synergy between trafficking in persons and certain organized crimes, such as drug trafficking and the smuggling of migrants, calls for scaled-up international cooperation, including through better information sharing and the provision of capacity building and technical assistance for developing countries. Meanwhile, the importance of education and raising awareness on human trafficking in countries of origin, transit and destination cannot be overemphasized. End users of services provided by trafficked persons are in as much need of training as those who are at risk of being trafficked. Mr. President, in conclusion, I would like to underscore the importance of the availability of impartial and reliable data on trafficking in persons at different levels. Member States whose destructive foreign policy options have left millions of people at risk of exploitation and trafficking have no authority to produce politicized reports that disregard the responsibilities resting on them. We question their competency and integrity, and in the meantime recognize the work of the United Nations Office on Drugs and Crime (UNODC) in authoring the biennial Global Report on Trafficking in Persons as a follow-up to the Global Plan of Action.
We also reaffirm the central role of the UNODC in promoting partnership in support of the other pillars of the Plan of Action, namely prevention, protection and prosecution.
https://theiranproject.com/blog/2017/09/29/iran-hails-un-plan-combat-human-trafficking/
Three-Day Advocacy Training

The ANCTIP Secretariat "OFRD" conducted the second round of Advocacy Training - CTIP for the 14 members of ANCTIP from 24 to 27 April 2019. The main topics were the major components of the TIP Law, policy, the forms and levels of advocacy, models of advocacy, advocacy strategy, steps for strategy formulation, and preparing the advocacy action plan. The Advocacy Training was very useful for building the capacity of ANCTIP members for practical advocacy.

Combating Human Trafficking in Afghanistan

Definition of Human Trafficking
Human trafficking is the act of recruiting, harboring, transporting, providing or obtaining a person for compelled labor or commercial sex acts through the use of force, fraud or coercion. It's important to note, though, that human trafficking can include, but does not require, movement. You can be a victim of human trafficking in your hometown. At the heart of human trafficking is the traffickers' goal of exploitation and enslavement.

Types of Human Trafficking
Sexual exploitation and forced labor are the most commonly identified forms of human trafficking. More than half of the victims are female. Many other forms of exploitation are often thought to be under-reported. These include domestic servitude and forced marriage; organ removal; and the exploitation of children in begging, the sex trade and warfare.

Causes of Human Trafficking
The causes of human trafficking are complex and interlinked, and include economic, social and political factors. Poverty alone does not necessarily create vulnerability to trafficking, but when combined with other factors, it can lead to a higher risk of being trafficked. Some of those other factors include corruption, civil unrest, a weak government, lack of access to education or jobs, family disruption or dysfunction, lack of human rights, and economic disruptions.
http://anctip.org/
Age Group: Junior Cycle / Senior Cycle / Transition Year
Themes: Inequality, Poverty
Subjects: CSPE, English, Politics and Society, WWGS Global Citizenship Education

Root Causes Activity
Global Citizenship Education involves exploring the root causes of poverty, injustice and inequality. Every day we are exposed to the consequences but rarely to the root causes. WHY is always the most important question. Here are the printouts (in pptx) for an activity you can do with your students: Ranking Reasons-2
Instructions:
- Print out 6 sets of the Ranking Reasons (based on a class of 30).
- Give each group a set and ask them to rank the root causes, with the biggest contributor to inequality at the top and the smallest at the bottom. Give them a blank sheet and ask them to list any other root causes of inequality.
- Each group explains why they have ranked their reasons in that order.
https://www.worldwiseschools.ie/resource-item/root-causes/
Introduction to Brazil's High Murder Rate in 2020

The murder rate in Brazil is one of the highest in the world, and it has been increasing steadily in recent years. In 2020, the country saw a staggering 30,000 homicides, making it the deadliest year on record. This is an alarming statistic and one that should be taken seriously by the Brazilian government. Despite the shocking numbers, Brazil is still considered one of the safest countries in Latin America. Most homicides occur in specific areas and among specific demographic groups. Most victims are young black or mixed-race males living in the favelas (urban slums) or peri-urban areas of large cities. The causes of Brazil's high murder rate are multifaceted and complex. The country's high levels of inequality, poverty, inadequate access to health services and education, as well as its weak rule of law and inefficient criminal justice system, all contribute to the problem. The Brazilian government has taken steps to address the issue. In 2019, the government launched the National Public Security Plan to strengthen the rule of law and reduce violence. The plan includes initiatives such as increasing the number of police officers, creating specialized units to combat organized crime and investing in public safety programs. But despite these efforts, the murder rate in Brazil remains alarmingly high. Much more needs to be done to address the root causes of the problem and reduce the number of homicides in the country. Until then, Brazil will continue to suffer from one of the highest murder rates in the world.

Causes of Brazil's High Murder Rate

Brazil is one of the most dangerous countries in the world, experiencing incredibly high rates of violent crime and murder. The country's murder rate has been steadily increasing for the past decade, with a particularly sharp rise in 2019. This alarming trend is due to a complex mix of economic, social, and political factors. To begin with, Brazil's staggering economic inequality significantly contributes to its high murder rate. Brazil has experienced a rapid expansion of its middle class in recent years, but this has done little to address the country's longstanding issues of poverty and inequality. Economic disparities often lead to violence and crime, as people living in poverty are more likely to resort to violent crime to make ends meet. Furthermore, Brazil's weak law enforcement and judicial system have contributed to its high murder rate. Police forces are notoriously corrupt and underfunded, and many regions lack the resources to combat crime effectively. This lack of enforcement has led to a culture of impunity, where criminals are less likely to be brought to justice for their crimes. Finally, Brazil's political climate has significantly impacted its crime rate. The country has experienced a period of political unrest and instability over the past decade, leading to an increase in violence. Political tensions have been further exacerbated by the growing influence of powerful drug cartels, which have been linked to numerous murders in Brazil. These factors have resulted in Brazil's alarmingly high murder rate. To reduce the number of violent crimes in the country, the government must take steps to address the underlying causes of its crime problem. This includes investing in more vigorous law enforcement and judicial systems, addressing issues of inequality and poverty, and tackling the power of criminal organizations.
Effects of Brazil's High Murder Rate

The high murder rate in Brazil is a cause for alarm and a significant source of concern for the safety of those living within its borders. Brazil is the fifth most populated country in the world and has one of the highest murder rates. This troubling statistic has a wide-reaching impact on Brazilian society, from the economy to public safety. The high murder rate in Brazil has a significant economic impact. The cost of criminal justice, medical care, and social services associated with these murders is estimated to be billions of dollars. Additionally, the violence has discouraged many foreign investments, as businesses are often unwilling to put their money into a country with such a high and persistent risk of violence. The violence has also affected the public safety of Brazilians. Many citizens have limited trust in the police or government to protect them, leading to a culture of fear and insecurity in many parts of the country. This fear is particularly pronounced in areas with high levels of organized crime, leading to a lack of confidence in the justice system. The culture of violence has also had a profound effect on the education system in Brazil. Schools in areas with high murder rates tend to be underfunded and understaffed, leading to a deterioration of educational quality. Furthermore, school violence is frequent, with students carrying weapons to defend themselves. This has hurt academic achievement levels, as students cannot focus on their studies when they fear for their safety. Finally, the high murder rate in Brazil has had a devastating effect on the country's mental health. The trauma of living in an environment of violence can lead to long-term psychological damage and increase the risk of developing depression, anxiety, and other mental health disorders. Additionally, the prevalence of violence can lead to a culture of apathy and resignation among citizens, leading to a general decline in morale and a reduction in trust in the government and its institutions. The high murder rate in Brazil is a complex problem with far-reaching consequences. From the economy to public safety, education, and mental health, the effects of this violence are felt on multiple levels. It is essential that the government takes action to address this issue and works to create a safer and more prosperous future for all Brazilians.

International Response to Brazil's High Murder Rate

The high murder rate in Brazil is an issue that has been gaining international attention for some time. The country has one of the highest murder rates in the world, with over 50,000 homicides reported in 2019 alone. As a result, Brazil is often referred to as one of the "most dangerous places to live." The problem is further compounded by the fact that Brazil's justice system has a notoriously low conviction rate for those accused of murder, further perpetuating the cycle of violence. In response to this alarming trend, the international community has tried to reduce the violence in Brazil. Many countries, including the United States, have pledged financial and technical assistance to help Brazil combat its high murder rate. This assistance includes funding social programs, such as job training and education, and providing support for police and prosecutors through training and equipment. In addition, the United Nations has worked with Brazil to develop specific initiatives to prevent violence and strengthen the rule of law.
At the same time, there has been a focus on strengthening Brazil's criminal justice system. For example, the government has increased the resources and personnel available to prosecutors and has implemented initiatives to reduce judicial system delays. These measures have been designed to ensure that those accused of murder receive a fair trial and that perpetrators are brought to justice more quickly. Ultimately, the international community is committed to helping Brazil reduce its high murder rate. Through a combination of financial and technical assistance and initiatives designed to strengthen the justice system, countries worldwide are working together to ensure that Brazil is a safe place to live.

Potential Solutions to Lower Brazil's High Murder Rate

Brazil has one of the highest murder rates in the world, and the government is determined to do something about it. Fortunately, several potential solutions could help reduce the number of homicides in the country. One potential solution is to focus on prevention instead of punishment. Brazil already has very tough penalties for those convicted of homicide, but these laws are not enough to deter people from committing murder. Investing in preventative measures such as better education, vocational training, and job opportunities can help to reduce the risk factors that lead to violence and thus reduce the murder rate. Another potential solution is to increase the presence of law enforcement in communities with high violence rates. Having a visible police presence in these areas can help deter people from committing crimes and provide a sense of security to those who live there. Additionally, Brazil could look into ways to reduce the number of weapons in circulation. This could include tighter gun sales restrictions and increased funding for buy-back programs that allow people to turn in their firearms for cash. Finally, Brazil should also look into targeting the root causes of violence. This could include investing in mental health services, offering more comprehensive drug rehabilitation programs, and providing more support for victims of crime. While these solutions may not solve Brazil's high murder rate overnight, they could help to reduce it over time. If Brazil takes a holistic approach to reducing violence, it can make a real difference in the country's crime levels.
https://sambatasteofbrazil.com/unveilingtheunfortunatetruththerisingmurderrateinbrazilin2020/
Poverty is central to health and development in low-income and middle-income countries. State your definition of poverty prior to studying public health. Based on this definition, what would be the focus of poverty alleviation solutions? Based on the relational and spiritual definition of poverty, discuss how the focus of solutions would change to include a holistic approach. Identify an example of a health program or solution that integrates a relational definition of poverty.

Poverty can be defined in many ways by a person due to their lifestyle, morals, values, finances, education, etc. Poverty means to be without something, or a lack of it. Prior to studying public health, I believe my definition was much the same, but I was leaning more toward financial poverty and not having the basic needs to sustain a healthy and adequate life. In the article Poverty is a Lie (2015), poverty is defined as "a mindset that goes far beyond the tragic circumstances". I believe this means that a person's belief that they are in poverty can be changed by thinking and doing more to change it. As an African American woman raised in a church-going family, I know the importance of churches when it comes to alleviating poverty in the community. Our church was always taking donations and having bake sales to raise money for community programs for job assistance, education assistance, and even meals for individuals and their families. Being able to go to a church and let go of your pride to ask for help can be a life-changing moment for a lot of people. Psychological research is crucial to illuminating and interrupting the damaging consequences of economic hardship, understanding interpersonal and institutional responses to poverty and economic inequality, and building support for effective programs and policies (Bullock, 2019). Here in San Antonio, Texas, there are several health programs that help alleviate poverty. One program is the faith-based initiative. This program helps people in need of work, shelter, financial assistance, and even childcare assistance. The program implements religion and faith to help change the way people feel about themselves and the need for better lifestyle choices. Haven for Hope is another facility here in San Antonio that provides free medical care, shelter, food, and aftercare for those in desperate need. Haven for Hope and its partners address the root causes of homelessness by offering programming tailored to the specific needs of the individual (Haven for Hope, 2019). The goal is to meet individuals where they are and support them as they move toward self-sufficiency (Haven for Hope, 2019).

References:
Bullock, H. E., & Quinn, D. M. (2019). Psychology's Contributions to Understanding and Alleviating Poverty and Economic Inequality: Introduction to the Special Section. American Psychologist, 74(6), 635-640.

Examination of Poverty Post

Hi, your post was excellent since you not only clearly explain the concepts but also give personal examples that enable the reader to understand the ideas on a deeper level. In particular, you relate poverty to your background to demonstrate how the definition of poverty goes beyond the lack of basic needs. In the same way as you, reading about relational and spiritual poverty has expanded my view of poverty. I like how you take a psychological viewpoint to illustrate that relational and spiritual poverty stems from the lack of support systems and social connections (Feldman, 2019).
Based on this definition, the best way to solve this form of poverty is through social programs that connect people on a deeper level. You give the example of a faith-based program that assists people in various ways in your post. Excellent job!

References
Feldman, G. (2019). Towards a Relational Approach to Poverty in Social Work: Research and Practice Considerations. The British Journal of Social Work, 49(7), 1705-1722.
https://nursingwritingservice.com/examination-of-poverty-post/
CCAFS theme leader discusses the root causes of food insecurity in East Africa

In this newly released video interview, made by Francesco Fiondella at the International Research Institute for Climate and Society (IRI), CCAFS theme leader James Hansen discusses the causes of the current drought plaguing the Horn of Africa. He points out that even if the lack of rain is a root cause of the crisis, it is still only one of many factors that have led to the ongoing drought. Other factors are more long-term, such as poverty, which leads to vulnerability to climatic shocks, and population growth in areas where many depend on rainfed agriculture for their livelihoods. The environmental issues plaguing the area, namely water and soil degradation, also exacerbate the situation further. Another important factor is the lack of investment and an overall neglect of the agricultural sector by the international donor community, James Hansen adds. The decline in support for agricultural development has been noticeable since the beginning of the 1990s and seems to be based on shifts in ideology more than on actual research findings. The results have been disastrous for rural communities, which have become trapped in poverty and dependent on external support. This has further led to increased vulnerability to environmental shocks such as the current drought. Even though there is a resurgence of reinvestment in the agricultural sector, the efforts need to be sustainable and well directed in order to reverse the chronic poverty and fragility that contribute to vulnerability to climate shocks, James argues in his interview.

Somalia is hit harder than the other countries

Since the climate is changing, the frequency of extreme events will also change. We will therefore see more droughts in East Africa. Here policies can play an important role in managing the crisis. James Hansen mentions that Kenya, Ethiopia and Somalia are all experiencing roughly the same level of drought, but the crisis is much more severe in Somalia, with a huge number of lost lives, cattle and livelihoods. This can be explained by the weak and ineffective government in the country, where fewer policies are effective in mitigating the effects of the drought. In Ethiopia, for example, James Hansen explains, there are very strong safety net programs, and overall both Ethiopia and Kenya have a better organized humanitarian response community, drawing on lessons learned from previous droughts. This means that the food response teams can respond more effectively and proactively, thus preventing the crisis from turning into a humanitarian disaster.

What CCAFS is doing in East Africa

The CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS) is very active in East Africa, with research work being carried out in Kenya, Ethiopia, Uganda and Tanzania. In southern Ethiopia, CCAFS will be working with pastoralists on exploring index-based livestock insurance, which will enhance their ability to manage climate risks. In southern Kenya, CCAFS will enable rural farmers to use long-term rainfall forecasts for their upcoming growing season, meaning that they can plan ahead before planting new crops. CCAFS will also look into how best to package climate information and how to communicate it effectively to rural farmers and participants.
To get more information on climate information services to manage climate risks in East Africa please read the newly released working paper The State of Climate Information Services for Agriculture and Food Security in East African Countries. For West Africa CCAFS has also recently released The State of Climate Information Services for Agriculture and Food Security in West African Countries. James Hansen on Food Security in East Africa from IRI on Vimeo.
https://ccafs.cgiar.org/news/ccafs-theme-leader-discusses-root-causes-food-insecurity-east-africa
ANNA CHARTERIS, CHRISTINA GERMAN, ZACHARY HANSRANI, ERIN LI, JAINETRI MERCHANT

Human trafficking affects approximately 40.3 million people worldwide, and has come to be known as 'modern slavery' (Zimmerman & Kiss, 2017). It is defined by the United Nations (2000) as the recruitment, transport, transfer, harbouring, or receipt of persons by coercive means (such as force, abduction, fraud) for the purpose of exploitation. International and national policies driving globalization have facilitated the flow of goods, services, ideas, and people across and within borders. The resulting interconnectedness between people and nations has concurrently exacerbated the illicit trade of persons in the form of human trafficking and labour exploitation (Kavinya, 2014; Zimmerman & Kiss, 2017). The interdependence of immigration and globalization has been apparent in human history. However, growing economic demands have accelerated migration rates and the global scope of immigration in recent years (Ciariene & Kumpikaite, 2008). Research on globalization and migration has not sufficiently delved into how the pre-existing and post-trafficking vulnerability of immigrants makes them susceptible to exploitation, leading to adverse health outcomes. This article will examine the economic drivers of human trafficking, its impact on physical and mental health, the challenges with victim return and reintegration into society, and how future policies and governance could address the issue. We argue that human trafficking is a product of poverty and a response to changing economic and social dynamics due to globalization. It is crucial to understand the driving mechanisms behind human trafficking in a time when growing economies seek cheap labour and individuals stricken by poverty seek the prospect of a better life. Human trafficking has been called "one of the dark sides of globalization" (Cho, Dreher, & Neumayer, 2014). Economic globalization has led to an increase of international trade flow of goods and services, extending to the forcible trade of humans (Hall & Lawson, 2014; Heller, 2016). It has been argued that an increased demand for cheap labour is the main driving force of human trafficking, as companies in wealthy countries desire cheap labour to produce goods for consumers. However, research does not support the notion that increased economic power is directly correlated with increased human trafficking. In fact, evidence suggests that wealthy countries are more likely to enforce anti-human trafficking policies (Heller, 2016). The primary driver of human trafficking is global economic neoliberalism, which reinforces global inequality and economic hierarchies (Regilme Jr, 2014). Neoliberalism, understood as free-market economic policy, "makes wreckage of efforts at democratic sovereignty or economic self-direction" in the Global South (Brown, 2006, p. 691), and increases income inequality in Organization for Economic Co-operation and Development (OECD) countries (Regilme Jr, 2014, p. 283). In this sense, neoliberalism acts as a distal force behind the economic drivers of human trafficking. Impoverished individuals who are not the beneficiaries of neoliberalist policies are increasingly desperate for income to fulfill their basic needs, and are therefore more susceptible to exploitation (Brown, 2006). Exploited individuals are consequently at greater risk of multidimensional oppressions resulting in adverse health outcomes (Zimmerman & Kiss, 2017).
Physical Threats to Health

The literature on human trafficking centers around the idea that women and children are the most affected groups; however, it is important to recognize that the crime also affects men and boys. Trafficked individuals are involved in a wide array of occupations. It is common to think of sex trafficking, but it is important to understand the other industries that may use trafficked labour, such as food packaging, chemical manufacturing, slaughterhouses, and construction (Turner-Moss et al., 2014). This means that trafficking can cause a wide range of symptoms. Human trafficking affects every system of the body. For example, some of the most common physical symptoms reported by trafficked women returning to their home in Moldova include headaches, stomach pain, memory problems, back pain, lack of appetite, and tooth pain (Oram et al., 2012). A study that included male victims added symptoms such as fatigue and vision loss to the list (Turner-Moss et al., 2014). Many of these symptoms are associated with occupational hazards. Occupational hazards include not only close proximity to dangerous chemicals or contagious diseases but also physical violence. In fact, a study in the United Kingdom found that trafficked workers reported high rates of being hit or kicked intentionally, being hurt with a gun or a knife, and even being intentionally burned (Turner-Moss et al., 2014). These disturbing findings help create a realistic picture of how trafficked workers experience their day-to-day lives. While the working conditions of trafficked victims are unsafe, it is important to understand that the working conditions are not the only cause for concern: it is often the employers themselves who are a form of occupational hazard. Beyond abuse from superiors in the workplace, there is also limited access to healthcare and limited, if any, safety measures in place. The concealed identity and vulnerability of trafficked workers deprioritizes their needs (Richards, 2014). As a competitive global market has created power imbalances, human health is devalued to an afterthought while revenues and profits remain at the forefront. The physical health of trafficked individuals is compromised as their worth is placed in their productivity rather than their well-being.

Threats to Mental Health

Survivors of human trafficking commonly experience mental health disorders including anxiety, mood, and dissociative disorders. Post-traumatic stress disorder (PTSD) is the most prevalent disorder amongst trafficked individuals (Pascual-Leone et al., 2016). Ostrovschi et al. (2011) observed that around 48% of Moldovan women who returned to Moldova after being trafficked were diagnosed with PTSD within one to five days after their return. They experience symptoms such as recurrent thoughts and memories of terrifying events, difficulty sleeping, feeling detached or withdrawn, guardedness, and hopelessness. Another relatively undiscussed condition is Stockholm Syndrome, a condition arising from captors instilling extreme terror in their victims, rendering them powerless with no means of escape (Adorjan et al., 2012). Victims often develop positive feelings towards their captors, and negative feelings towards the police (Adorjan et al., 2012). Due to the nature of trafficking, this condition is difficult to assess and remains largely undocumented in qualitative or quantitative research.
Nevertheless, it is possible to draw parallels between trafficking victims and hostages, since victims in both situations are held against their will. Individuals may also develop mental illness prior to their trafficking experience (Pascual-Leone et al., 2016). Individuals who suffer from pre-trafficking mental illness have often experienced other forms of violence, such as war or domestic disputes (Pascual-Leone et al., 2016). These individuals were initially in a vulnerable position and were therefore susceptible to human trafficking and exploitation. If individuals develop mental illness during trafficking, they likely do not have the ability to access or afford counselling and mental health services (especially in cases where victims of trafficking were already financially strained). Furthermore, victims may not have the ability to improve their socioeconomic status after being trafficked. This is especially pertinent with regard to perpetuating a cycle of re-trafficking, given that many victims become trafficked out of desperation and poverty (Adams, 2011).

Return and Reintegration of Trafficked Migrants

The return and reintegration of victims is perhaps the least understood phase of trafficking, as this transition is mediated by many interacting factors. The health implications and trauma associated with one's trafficking experience persist even if victims return to their countries of origin. Additionally, the process and conditions under which a victim returns to their country significantly affect their overall well-being. Although reintegration facilities offer immediate support, their inability to provide long-term care leaves victims of trafficking susceptible to re-trafficking. The lack of research on the causes of re-trafficking and poor reintegration programs supports the need for a deeper understanding of the influence of poverty and inequality on victims, the gaps in current guidelines and action plans, and the structure of anti-trafficking policies. As the need to curtail the re-trafficking cycle in vulnerable populations is urgent, countries with high trafficking rates must alter their migration and reintegration policies by adopting valuable lessons from anti-trafficking pilot projects.

Moving Forward: Preventing and Eliminating Human Trafficking

To end human trafficking, efforts need to be made to address the root causes of the crime. Here, we lay out a framework for potential policies and interventions that can reduce, and potentially eliminate, all forms of human trafficking. Addressing poverty, poor education, high unemployment, and political instability could reduce human trafficking (United Nations General Assembly, 2000). Most human trafficking involves a flow of migrants from poorer, less developed nations to wealthier nations (Adams, 2011). Governments of 'source' countries should therefore prioritize providing their citizens with opportunities for prosperity in their own nations. A focus on sustainable development and the sustainable use of resources could help to create more employment opportunities. Additionally, investments in education can give youth the skills to obtain jobs and break cycles of poverty. Financial desperation is a key component of how victims of human trafficking find themselves being exploited (Adams, 2011); addressing poverty is an effective way of preventing human traffickers from capitalizing on an individual's low socioeconomic status.
Moving Towards Human Rights-Based Policies and Interventions

An international problem requires international cooperation and governance. International policies, such as the United Nations Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children (commonly referred to as the Palermo Protocol), and regional policies, such as the Council of Europe's Convention on Action Against Trafficking in Human Beings, emphasize a human rights-based approach (Adams, 2011). This approach places the primary focus on successful reintegration into society by providing access to physical and mental health services, such as healthcare and counselling. Many state policies, however, focus on criminal justice and punishing the trafficker, with only a secondary focus on helping the victim (Adams, 2011). In order for policies and interventions to effectively reduce human trafficking, they should change to emphasize human rights, putting the victim's wellbeing and recovery first and criminal justice second.

Advanced International Anti-Money Laundering Techniques

Human trafficking is a lucrative crime, with estimated annual proceeds of $32 billion (Adams, 2011). In order for the proceeds of crime to be useful, they must be "cleaned": the origins of criminal proceeds are masked to make the funds appear legitimate, a process called money laundering. Therefore, an effective way to prevent human trafficking is to remove the ability to "clean" criminal proceeds, rendering the funds of human trafficking virtually useless. This can be accomplished through stricter, more advanced anti-money laundering (AML) procedures internationally, across all financial institutions and higher-risk sectors, including real estate and casinos. Currently, AML procedures can be slow, inefficient, and costly, with an estimated $50 billion spent each year to detect less than 1% of laundered funds (Barefoot, 2018). Advances in modern technology, such as machine learning and artificial intelligence, could facilitate more advanced detection of money laundering, including predictive models to trace the origin of funds, while substantially reducing these costs (Barefoot, 2018). In addition to the implementation of advanced technologies to predict, detect, and prevent money laundering, more open system-to-system collaboration between financial institutions, higher-risk sectors such as real estate, and law enforcement is required. Currently, communication and collaboration between financial institutions and law enforcement regarding money launderers is siloed, with linear reporting channels that hinder the ability to prevent money laundering in real time (Barefoot, 2018). Moving forward, financial institutions and law enforcement need real-time reporting with system-to-system collaboration, so that law enforcement officials can keep up with the rapid flow of laundered funds and ever-evolving fund concealment schemes (Barefoot, 2018). New AML processes, including greater collaboration, real-time reporting, and the implementation of advanced technologies, can facilitate greater detection of laundered funds while simultaneously reducing the costs associated with detection, thereby removing the financial incentive of human trafficking.

Conclusion

The link between globalization, migration, and human trafficking is apparent. The transnational flow of goods, services, and people has made it easier for perpetrators of human trafficking to transport victims across borders.
The search for better economic opportunities among impoverished populations makes these populations vulnerable to trafficking. Trafficking victims with compromised physical and mental health continue to face obstacles even if they manage to return to their country of origin. The subsequent cycle of re-trafficking depends largely on the lack of economic, educational, and social support. This has raised an important question among researchers: will tighter border controls and migration policies curtail trafficking rates, or do policies need to address illiteracy, poor economic opportunities, corruption, and poor government infrastructure as the root causes of trafficking? Overall, more research is needed to effectively evaluate the ways in which neoliberal political agendas drive national and international policy, creating the conditions for human exploitation.

References

Adams, C. (2011). Re-Trafficked Victims: How a Human Rights Approach Can Stop the Cycle of Re-Victimization of Sex Trafficking Victims. The George Washington International Law Review, 43, 201-234.
Adorjan, M., et al. (2012). Stockholm Syndrome as Vernacular Resource. The Sociological Quarterly, 53(3), 454-474. doi:10.1111/j.1533-8525.2012.01241.x
Barefoot, J. A. (2018). Regtech could help stop human trafficking. American Banker.
Bick, D., Howard, L. M., Oram, S., & Zimmerman, C. (2017). Maternity care for trafficked women: Survivor experiences and clinicians' perspectives in the United Kingdom's National Health Service. PLoS One, 12(11). https://doi.org/10.1371/journal.pone.0187856
Canadian Medical Association. (n.d.). Psychiatry – A Recent Profile of the Profession. Retrieved April 7, 2018, from https://www.cma.ca/Assets/assets-library/document/en/advocacy/25-Psychiatry.pdf
Cary, M., Oram, S., Howard, L. M., Trevillion, K., & Byford, S. (2016). Human trafficking and severe mental illness: An economic analysis of survivors' use of psychiatric services. BMC Health Services Research, 16(1). doi:10.1186/s12913-016-1541-0
Ciariene, R., & Kumpikaite, V. (2008). The impact of globalization on migration process. Social Research, 13(3), 42-48. https://doi.org/10.1007/978-3-319-01125-7_4
Corfee, F. A. R. (2016). Transplant tourism and organ trafficking: Ethical implications for the nursing profession. Nursing Ethics, 23(7), 754-760.
Cho, S., Dreher, A., & Neumayer, E. (2014). Determinants of Anti-Trafficking Policies: Evidence from a New Index. The Scandinavian Journal of Economics, 116(2), 429-454. doi:10.1111/sjoe.12055
Dahal, P., Kumar Joshi, S., & Swahnberg, K. (2015). "We are looked down upon and rejected socially": A qualitative study on the experiences of trafficking survivors in Nepal. Global Health Action, 8(1), 1-9. https://doi.org/10.3402/gha.v8.29267
Danna, D. (2012). Client-Only Criminalization in the City of Stockholm: A Local Research on the Application of the "Swedish Model" of Prostitution Policy. Sexuality Research and Social Policy, 9, 80-93.
Davy, D. (2016). Anti-Human Trafficking Interventions: How Do We Know if They Are Working? American Journal of Evaluation, 37(4), 486-504.
Efrat, A. (2016). Global efforts against human trafficking: the misguided conflation of sex, labor, and organ trafficking. International Studies Perspectives, 17, 34-54.
Gupta, J., Reed, E., Kershaw, T., & Blankenship, K. M. (2011). History of sex trafficking, recent experiences of violence, and HIV vulnerability among female sex workers in coastal Andhra Pradesh, India. International Journal of Gynecology and Obstetrics, 114, 101-105. doi:10.1016/j.ijgo.2011.03.005
Hall, J. C., & Lawson, R. A. (2013). Economic Freedom of the World: An Accounting of the Literature. Contemporary Economic Policy, 32(1), 1-19. doi:10.1111/coep.12010
Heller, L. R., Lawson, R. A., Murphy, R. H., & Williamson, C. R. (2016). Is human trafficking the dark side of economic freedom? Defence and Peace Economics, 1-28. doi:10.1080/10242694.2016.1160604
International Labour Organization. (n.d.). Economic and social empowerment of returned victims of trafficking in Thailand and the Philippines. Retrieved April 6, 2018, from http://www.ilo.org/global/topics/forced-labour/WCMS_082047/lang–en/index.htm
IOM. (2010). The Causes and Consequences of Re-trafficking: Evidence from the IOM Human Trafficking Database. Geneva.
IOM. (2015). Enhancing the Safety and Sustainability of the Return and Reintegration of Victims of Trafficking. Paris. Retrieved from https://ec.europa.eu/anti-trafficking/sites/antitraffickingfilesenhancing_the_safety_and_sustainability_of_the_return_And_reintegration_of_vots.pdf
Jordan, A. (2004). Human Trafficking and Globalization. Center for American Progress (Vol. 21). https://doi.org/10.1111/j.1751-9020.2008.00178.x
Le, P. D. (2017). "Reconstructing a Sense of Self": Trauma and Coping among Returned Women Survivors of Human Trafficking in Vietnam. Qualitative Health Research, 27(4), 509-519. https://doi.org/10.1177/1049732316646157
Legislation. (2016). Justice.gc.ca. Retrieved 20 March 2018, from http://www.justice.gc.ca/eng/cj-jp/tp/legis-loi.html
Kavinya, T. (2014). Globalization and its effects on the overall health situation of Malawi. Malawi Medical Journal: The Journal of Medical Association of Malawi, 26(1), 27.
McDonald, W. F. (2014). Explaining the under-performance of the anti-human-trafficking campaign: experience from the United States and Europe. Crime, Law and Social Change, 61, 125-138.
Office of Women in Development, USAID. (2007). The rehabilitation of victims of trafficking in group residential facilities in foreign countries: A study conducted pursuant to the Trafficking Victim Protection. Development.
Ontario Passes Legislation to Protect Human Trafficking Survivors. (2017). news.ontario.ca. Retrieved 20 March 2018, from https://news.ontario.ca/owd/en/2017/05/ontario-passes-legislation-to-protect-human-trafficking-survivors.html
Ostrovschi, N. V., Prince, M. J., Zimmerman, C., Hotineanu, M. A., Gorceag, L. T., Gorceag, V. I., . . . Abas, M. A. (2011). Women in post-trafficking services in Moldova: Diagnostic interviews over two time periods to assess returning women's mental health. BMC Public Health, 11(1). doi:10.1186/1471-2458-11-232
Oram, S., et al. (2012). Physical health symptoms reported by trafficked women receiving post-trafficking support in Moldova: prevalence, severity, and associated factors. BMC Women's Health, 12(20). Retrieved from https://bmcwomenshealth.biomedcentral.com/articles/10.1186/1472-6874-12-20
Pascual-Leone, A., Kim, J., & Morrison, O. (2016). Working with Victims of Human Trafficking. Journal of Contemporary Psychotherapy, 47(1), 51-59. doi:10.1007/s10879-016-9338-3
Psychotherapy Fees. (n.d.). Retrieved April 06, 2018, from http://therapytoronto.ca/fees.phtml
Richards, T. A. (2014). Health Implications of Human Trafficking. Nursing for Women's Health. doi:10.1111/1751-486X.12112
Regilme, S. S. (2013). Bringing the Global Political Economy Back In: Neoliberalism, Globalization, and Democratic Consolidation. International Studies Perspectives, 15(3), 277-296. doi:10.1111/insp.12020
Scheper-Hughes, N. (2005). The last commodity: post-human ethics and the global traffic in "fresh" organs. In A. Ong & S. Collier (Eds.), Global Assemblages: Technology, Politics and Ethics as Anthropological Problems (pp. 145-167). Malden, MA: Blackwell Publishing.
https://juxtamagazine.org/2018/06/26/health-at-risk-health-implications-of-human-trafficking-in-the-context-of-globalization-and-migration/
Interior design is the process of shaping the experience of interior space through the manipulation of spatial volume and surface. It is a multi-faceted profession involving the planning, designing and management of the design of interior spaces. Interior design is a professional service that combines creativity, technical knowledge, and business skills to create an environment that is both functional and appealing. Interior designers must be able to think spatially, be aware of the latest trends in design and have excellent visualization skills. They must also be able to understand the needs of their clients and work within the constraints of a project budget. Interior design services can be used for a variety of projects, including homes, offices, retail stores, restaurants and other commercial spaces. The interior design process typically begins with a client consultation, during which the designer will discuss the client's needs and requirements. The designer will then create a concept design, which will be presented to the client for approval. Once the concept is approved, the designer will create a detailed design package, which will include floor plans, elevations, furniture and color schemes, and lighting plans. The final step in the interior design process is the installation of the finished design; interior designers often work with contractors and other tradespeople to complete the installation. There are no formal qualifications required to become an interior designer, but most designers have a degree in interior design or a related field. There are also several professional associations and certifications that interior designers can obtain to demonstrate their competence and professionalism.
https://antonioxpacheco.com/category/interior-design/
Principles of Interior Design

Like any other bestselling novel writer, a good interior designer adheres to important interior design principles that, when applied properly, can evoke emotions only a top-calibre design can deliver. There are six basic principles of interior designing, which are as follows.

Principle of Balance
Balance is all about achieving equilibrium that pleases the eye. Traditional interior design generally incorporates symmetrical balance, a technique that attempts to make both sides of the room mirror one another. The current trend that interior designers in Miami Beach use is asymmetry, a deviation from symmetrical balance in which the room is designed with different fixtures and arrangements that still maintain the same visual weight.

Principle of Scale
The principle of scale deals with the harmonious proportion of a room to its decorative fixtures. The relationship between the fixtures and the space in terms of size is managed to obtain the desired design output, such as making the room appear bigger or smaller.

Principle of Rhythm
Rhythm is the visual movement set by the room's design. Repetition is a basic technique of the rhythm principle that utilizes the same aesthetic elements, such as maintaining a fixture's or the design's characteristics and quality.

Principle of Contrast
Contrast is the design principle that helps your room's focus 'pop out' in a visually appealing manner. Contrast also keeps your design from looking monotonous.
https://interior-design-daily.com/2021/08/01/principles-of-interior-design/
Hufft is an architecture, interior design and fabrication studio with offices in Kansas City, Mo., and Bentonville, Ark. We strive to bring meaning to everyday life through work that strengthens the connection between people and place. Hufft was founded on the belief that an open dialogue between designers and builders leads to innovative design and projects that are ultimately more valuable for the people they serve. Throughout our history, we have developed a rigorous approach to design that seeks to identify, understand and use our clients' biggest challenges as a catalyst for innovation. From our roots in residential design, Hufft has built an expanding portfolio of commercial, institutional and hospitality work. Today, we are a nationally recognized, award-winning design firm that employs an interdisciplinary staff of architects, interior designers, artists and craftsmen. Our studio is organized to foster multi-disciplinary collaboration that expands traditional notions of the architectural practice, seamlessly integrating designers and craftsmen.

People, Places, Concept

We work in diverse locations, across a broad range of project types and scales. However, we approach each project through a consistent framework: People, Places, and Concept. This simple structure negotiates the complex relationships between building and site, program and budget, users and the environment into a cohesive whole that transcends mere problem-solving. This approach is tailored to the needs of each project, helping our clients identify and realize their goals and aspirations, while simultaneously uncovering the specific characteristics that make each project unique.

People

Buildings play an integral role in creating strong human connections. We believe that all people deserve access to well designed, healthy, and inspiring environments that are specific to the needs and values of their community. Starting with a deep understanding of client, community, culture, and program, this human-centered approach drives the physical development of our projects. The result is buildings that are appropriately scaled and spaces that are comfortable and delightful. We consider how people interact with buildings at all scales, from site planning to the details and interiors we use every day. Stair railings, door handles, tabletops, material connections…these elements bring richness to everyday experiences. The result is details that are functional, elegant and playful, bringing utility and joy to the people who use them.

Places

We believe that buildings have a unique ability to strengthen the connection between people and the landscape, creating a sense of place that adds to our collective identity. Our approach explores the distinct characteristics of a place, highlighting the uniqueness of a geographic region or the beauty of a specific view. From rural sites to urban environments, a thoughtful choreography between building and context creates meaningful, connected experiences between people and places. We strive to make buildings that are not only culturally but also environmentally sensitive, by responsibly utilizing our natural resources and thoughtfully considering the impact each project has on the environment.

Concept

Our process brings the exploration of people and places into a cohesive, meaningful concept for each project.
This concept gives the project clarity and establishes a formal and material logic that drives design decisions, from organizing principles to details and connections. The concept for a project is informed by a collaborative exchange of ideas between the client, architect and builder. The concept is the catalyst for design, leading to the innovative use of form, materials, color, texture and pattern to create rich and unexpected environments that bring joy to the people who use them. At the core of this process is exploring the potential of digital design and fabrication. We utilize a range of design tools, both analog and digital, two- and three-dimensional. Project concepts are explored in 3D massing studies, computer-generated environmental analysis programs, and simple hand drawings and sketches. This allows us to test multiple concepts at varying scales, uncovering the unique formal, spatial and programmatic relationships of a given project.
https://hufft.com/who-we-are/
This course exposes students to the basics of design and the fundamentals of design theory. Students will learn to understand and appreciate design by exploring and applying the various elements and principles of design, including colour and colour theory.

Introduction to the Scope of Interior Design
The aim of this subject is for students to understand the relationship of space and function and get introduced to the concept of space and its aesthetic qualities. The course also includes lessons on the design process and various theories of design.

Material and Construction Techniques
This course helps familiarize students with the various materials and products applied in interior spaces, such as natural stones, bricks, clay, bamboo, timber, wood and wood products, along with their physical and behavioral properties, manufacturing processes etc. The course also covers various construction techniques, market trends etc.

Product Workshop
The aim of this course is to let the students have hands-on experience working on various materials such as clay, bamboo, mount board, foam board, cardboard etc. to create scale models, artifacts and accessories for interior spaces.

Graphics
This course equips students with the essential skill of manual drafting, where they can present their drawings through sketches, technical drawings, views etc. It includes explanations of drafting tools and their various uses, and then covers how to complete increasingly more difficult drafting conventions such as symbols, annotations, architectural lettering, scales etc.

Communication Skills
The aim of this subject is to enhance verbal presentation skills and inculcate public speaking techniques and life skills among students. Seminar presentation techniques, methods of communication and application, book reviews, evaluative research, articles and reports will form the main parts of this subject.

Evolution of Design II
The aim of the subject is to understand the progression of historical development in art, architecture and interiors in western and Indian contexts. Topics include an introduction to furniture history, the study of Egyptian, Greek, Roman, Medieval, Renaissance and Baroque works, and the design ideology of international master architects and designers belonging to various schools of thought. Also covered is the impact of socio-cultural, religious and environmental factors on design. The evolution of art and architecture in an Indian context, with emphasis on the use of natural light, features, elements, motifs, construction technology and materials, is also studied.

Second Year

Interior Design II
In this course students design complex residential spaces and develop problem-solving strategies for the interior space. In addition, students are taught idea generation and concept building techniques with an emphasis on daylight and illumination. Application and usage of contemporary materials and market trends will also be part of the course.

Furniture Design I
The aim of this subject is to develop a scientific understanding of furniture, its joinery and various material possibilities. Structure, ergonomics and anthropometry applied to furniture and systems in wood and wood derivatives, along with compatible materials. Furniture terminology, hardware, joinery, fixing details. Idea generation by designing simple objects like stools, chairs, coffee tables etc.
Material and Construction Techniques II
The aim of this subject is to impart detailed knowledge of materials, their characteristics, techniques of using materials and making working drawings. Application of wood to staircases and special types of doors. Introduction to frame structures with columns, beams, slabs and cantilevers. Introduction to types of glass and their various applications.

Interior Working Drawing I
The aim of this subject is to enable students to prepare working details of interior projects with specifications.

Interior Services I
The aim of this subject is to understand the basic principles of drainage and water supply in buildings and to learn about the sources of water supply. Hot and cold-water distribution systems. Types of pipes and their joints and fixing details. Fixtures and fittings. Basic principles of sanitation and disposal of waste materials from buildings. Standard sanitary fittings, traps, pipes and their joints.

Interior Environment Control
The aim of this subject is to acquaint students with various interior elements which affect human comfort. Effect of climate on human comfort: definition of climate/weather and effects on structure. Sun control, shading devices, material, color and texture choices for interior spaces. Solar passive designs. Daylight factor, size of opening with respect to daylight and its sources, lighting criteria.

Third Year

Interior Design III
The aim of this subject is applying knowledge of various streams like culture, art and craft, building technology, services, furniture detail, use of contemporary materials etc. A need-based approach to design for creating spaces relevant to contemporary society. Integrating assorted commercial activities, developing display systems and communication mechanisms. Use of environmentally friendly materials, services and contextual environmental knowledge for designing small and large commercial activities like shops, banks, showrooms etc.

Research Project
Here students undertake an in-depth study of a particular topic related to interior design, giving them an opportunity to dig deeper and develop their skills in a particular subject.

Estimation, Costing & Specification Writing
This course introduces students to the necessary knowledge and skills required for estimation, costing, rate analysis etc. required for handling residential and commercial interiors.

Interior Working Drawing II
This course enables students to prepare working details of interior projects using computer aids. Students will learn to make sheets showing service layouts: electrical layout, illumination layout, AC layout with specifications, fire fighting layout, computer networking layout etc.

Interior Services II
The aim of this subject is to understand the basic principles of thermal and acoustical insulation of interior spaces. Use of thermal and acoustical insulation materials and their properties. Behaviour of sound. Acoustic considerations for conference rooms, meeting rooms, ceilings. Fire safety systems: fire-retarding materials, fire-rated doors.

Professional Practice
The objective of this course is to make students aware of the responsibilities of a designer, the professional code of conduct and other technicalities of the profession.
https://academyofinteriors.com/3-year-specialization-program/
An interior designer is someone who helps create the overall look and feel of a space. They work with clients to determine their needs and wants, and then create a plan to achieve those goals. Home architects in Toronto often have a background in art or design, and they use their creativity and knowledge to create functional and beautiful spaces. There are many benefits to hiring an interior designer. One of the most obvious is that they can save you time and money. A designer can help you avoid making costly mistakes, and they will have access to resources that you may not be able to find on your own. In addition, a designer can provide you with expert advice and guidance throughout the entire process, from concept to completion. When you're ready to take the plunge and hire an interior designer in the GTA, keep the following in mind:
- Establish a clear budget and timeline for your project. This will help you and your designer avoid any misunderstandings later on.
- Be clear about your style. This will help you find a designer whose aesthetic is compatible with your own.
- Communicate your needs and expectations clearly. This will help ensure that your designer understands your vision for the space.
An interior designer is someone who plans, researches, coordinates, and manages projects that create interior environments. They work with clients to determine their needs, develop concepts, select finishes, and oversee construction. Interior designers are present in all aspects of the design process, from initial feasibility studies to project completion.
https://solosabores.com/tag/interior-design-firms-gta/
The complexity of the design process requires that at various points along the way designers communicate aspects and outcomes of the process to clients and consultants. Like professionals, students must present in-process projects to team members, instructors, and guest critics. Visual presentations must vary to accommodate the process of design and to communicate both process and outcome. In Interior Design Illustrated, Francis Ching identifies three basic stages of the design process: analysis, synthesis, and evaluation. According to Ching, analysis involves defining and understanding the problem; synthesis involves the formulation of possible solutions; and evaluation involves a critical review of the strengths and weaknesses of the proposed solutions. Interestingly, these three basic stages of the design process are used by design practitioners in a variety of disciplines. Industrial designers, graphic designers, exhibition designers, and others often engage in a similar process. Of course, the design disciplines vary a great deal in terms of professional practice and final outcome. For this reason, actual interior design process and project phases are quite distinct and are more elaborate than the three basic stages may indicate. For purposes of contractual organization, the process of design engaged in by architects and interior designers in the United States has been divided into five basic project phases: (1) Programming, (2) Schematic Design, (3) Design Development, (4) Construction Documentation, and (5) Contract Administration. These phases are derived from the American Institute of Architects (AIA) Owner-Architect Agreement for Interior Design Services and the American Society of Interior Designers (ASID) Interior Design Services Agreement. Both of these documents serve as contracts for design services and reflect the current design process and project management in the United States. Figure 2-1 is a description of design phases and related visual presentation methods. Peña, Parshall, and Kelly, writing in Problem Seeking, identify the actual design process as taking place in the first three project phases. They state that "programming is part of the total design process but is separate from schematic design." The authors go on to link schematic design and design development as the second and third phases of the total design process. This chapter is intended as an exploration of the three phases of the design process identified by Peña, Parshall, Kelly, and others and as a study of the drawings and graphics used to communicate, document, inform, and clarify the work done during these phases.

PROGRAMMING

The experienced, creative designer withholds judgment, resists preconceived solutions and the pressure to synthesize until all the information is in. He refuses to make sketches until he knows the client's problem....Programming is the prelude to good design.
(Peña, Parshall, and Kelly, 1987)

Programming, also known as predesign or strategic planning, involves detailed analysis of the client's (or end user's) needs, requirements, goals, budgetary factors, and assets, as well as analysis of architectural or site parameters and constraints. Information gathered about the user's needs and requirements is often documented in written form, whereas architectural or site parameters are often communicated graphically through orthographic projection. These two distinct forms of communication, verbal and graphic, must be brought together in the early stages of design. Some firms employ professionals to work as programmers and then hand the project over to designers. It is also common for project managers and/or designers to work on project programming and then continue to work on the design or management of the project. It could be said that programmers and designers are separate specialists, given the distinctions between programming (analysis) and design (synthesis). However, many firms and designers choose not to separate these specialties or do so only on very large or programming-intensive projects. In practice, programming varies greatly from project to project. This is due to variation in project type and size and to the quantity and quality of information supplied by the client (or end user). In some cases clients provide designers with highly detailed written programs. In other situations clients begin with little more than general information or simply exclaim, "We need more space, we are growing very fast" or "Help, we are out of control." In situations such as the latter, research and detective work must be done to create programming information that will allow for the creation of successful design solutions. It is difficult to distill the programming process used in a variety of projects into a brief summary. Clearly the programming required for a major metropolitan public library is very different from that required in a small-scale residential renovation. It is important, therefore, to consider what all projects relating to interior environments share in terms of programming. All projects require careful analysis of space requirements for current and future needs, as well as analysis of work processes, adjacency requirements, and organizational structure (or life-style and needs-assessment factors in residential design). Physical inventories and asset assessments are required to evaluate existing furniture and equipment as well as to plan for future needs. Building code, accessibility, and health/safety factors must also be researched as part of the programming process. In addition to this primarily quantitative information, there are aesthetic requirements. Cultural and sociological aspects of the project must also be identified by the designers. All of these should be researched and can be documented in a programming report that is reviewed by the client and used by the project design team. When possible, it is important to include a problem statement with the programming report. The problem statement is a concise identification of key issues, limitations, objectives, and goals that provide a clearer understanding of the project. With the programming report complete, the designers can begin the job of synthesis and continue the design process. Residential projects generally require less intensive programming graphics.
Programming is a significant element of the residential design process; however, the relationships, adjacencies, and organization of the space are often simplified in relation to large commercial and public spaces. For this reason the following discussion focuses primarily on commercial design, where a significant amount of visual communication of programming information is often required. Clients, consultants, and designers require graphic analysis as a way of understanding programming data and information. Diagrams, charts, matrices, and visual imagery are comprehended with greater ease than pages of written documentation. It is useful to develop ways of sorting and simplifying programming information so that it can be easily assimilated. Successful graphic communication of both the programming process and the programming report can help to create useful information from overwhelming mounds of raw data. A sample project created to illustrate the drawings and graphics used in the various phases of the design project is referenced throughout this chapter. Figure 2-2a contains written programming information regarding the sample project. Figure 2-2b is a floor plan indicating the given architectural parameters of the project.

PROGRAMMING ANALYSIS GRAPHIC

Many designers find it useful to obtain early programming data and incorporate it into graphic worksheets. Using a flip-chart pad, brown kraft paper, or other heavy paper, the programmers can create large, easy-to-read graphic documents. These sheets are created so that they may be understood easily by the client and can therefore be approved or commented on. Often the eventual project designers find these sheets useful as a means of project documentation. The book Problem Seeking (Peña, Parshall, and Kelly, 1987) provides an additional technique for the graphic recording of information generated in the early stages of programming, using a device known as analysis cards. Analysis cards allow for easy comprehension, discussion, clarification, and feedback. The cards are drawn from interview notes and early programming data. Based on the notion that visual information is more easily comprehended than verbal, the cards contain simple graphic imagery with few words and concise messages. The cards are most successful if they are large enough for use in a wall display or presentation and if they are reduced to very simple but specific information. Figure 2-3 illustrates program analysis graphics for the sample project. See Figure C-6 for a color version of a programming analysis graphic.

PROGRAMMING MATRICES

Matrices are extremely useful tools in programming, incorporating a wealth of information into an easily comprehended visual tool. An adjacency matrix is commonly used as a means of visually documenting spatial proximity, identifying related activities and services, and establishing priorities. Adjacency matrices vary in complexity in relation to project requirements. Large-scale, complex projects often require highly detailed adjacency matrices. Figures 2-4 and 2-5 illustrate two types of adjacency matrix. A criteria matrix can distill project issues such as needs for privacy, natural light, and security into a concise, consistent format. Large-scale, complex design projects may require numerous detailed, complex matrices, whereas smaller, less complex projects require more simplified matrices. Criteria matrices are used in residential design projects and in the programming of public spaces.
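The same adjacency and criteria information can also be recorded in a small data table before it is drawn up graphically. The short Python sketch below is purely illustrative and is not taken from the text: the space names, the 0-2 rating scale, and the daylight criterion are invented for the example, and a real project would use whatever rating scale and criteria its programming report defines.

# Hypothetical illustration: tabulating a simple adjacency/criteria matrix
# for a small office programming exercise. All names and ratings are invented.

spaces = ["Reception", "Conference", "Open Office", "Break Room"]

# Adjacency ratings: 2 = adjacency required, 1 = adjacency desirable, 0 = no requirement
adjacency = {
    ("Reception", "Conference"): 2,
    ("Reception", "Open Office"): 1,
    ("Conference", "Open Office"): 1,
    ("Open Office", "Break Room"): 2,
}

# A single programming criterion folded into the same table (is natural light needed?)
needs_daylight = {"Reception": True, "Conference": False,
                  "Open Office": True, "Break Room": True}

def rating(a, b):
    """Return the adjacency rating regardless of the order in which the pair was recorded."""
    return adjacency.get((a, b), adjacency.get((b, a), 0))

# Print a square matrix: each cell is the adjacency rating between two spaces,
# with "-" on the diagonal and the daylight criterion appended to each row.
header = " " * 14 + "".join(f"{name:>12}" for name in spaces)
print(header)
for row in spaces:
    cells = "".join(f"{'-' if col == row else rating(row, col):>12}" for col in spaces)
    daylight = "yes" if needs_daylight[row] else "no"
    print(f"{row:<14}{cells}  daylight: {daylight}")

Run as a script, this prints a simple square matrix of ratings with the daylight criterion at the end of each row, which is essentially the information a drawn adjacency or criteria matrix communicates.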
Smaller projects allow for criteria matrices to be combined with adjacency matrices. Figure 2-6 illustrates a criteria matrix that includes adjacency information. Special types of matrix are used by designers on particular projects. Programming graphics, such as project worksheets, analysis cards, and a variety of matrices, are widely used in interior design practice. These are presented to the client or end user for comment, clarification, and approval. Many of these graphics are refined, corrected, and improved upon during the programming process and are eventually included in the final programming report.

SCHEMATIC DESIGN

With the programming phase completed, designers may begin the work of synthesis. Another way of stating this is that with the problem clearly stated, problem solving can begin. The creation of relationship diagrams is often a first step in the schematic design of a project. Relationship diagrams serve a variety of functions that allow the designers to digest and internalize the programming information. Relationship diagrams also allow the designer to begin to use graphics to come to terms with the physical qualities of the project. One type of relationship diagram explores the relationship of functional areas to one another and uses information compiled in the criteria and adjacency matrices. This type of one-step diagram can be adequate for smaller commercial and residential projects. Larger-scale, complex projects often require a series of relationship diagrams. Diagrams of this type do not generally relate to architectural or site parameters and are not drawn to scale. Most specialized or complex projects require additional diagrams that explore issues such as personal interaction, flexibility, and privacy requirements.
https://www.parcintro.com/2016/05/the-design-process-and-related-graphics_29.html
There is no single J. Schwartz Design "look." The innate design abilities of every J. Schwartz Interior Design associate are complemented by a series of life experiences and an appreciation for the arts, resulting in an ongoing quest for creativity and function in the home interior. As full partners in the design process, the J. Schwartz Interior Design team asks questions and listens to your answers – then we discuss all interior design ideas thoroughly, from budget to scheduling. Every member of the J. Schwartz Design team cares personally about understanding how our clients live – and want to live – in their personal environments through room design.

Step 1: Programming
We maintain highly professional standards by consistently delivering on collaboration, creativity and value. When it comes to reflecting your personal style with room decor, we begin with questions:
- How do you live in your home and how do you want to live in your home? What works and what doesn't work for your lifestyle needs?
- How long will you be staying in your home, or new home?
- Who are the family members who will be sharing your home?
- Do you have photos or idea books to share images of homes and spaces that appeal to you?
- What is on your "perfect home" wish list?
- What is your budget?
- What are the priorities within your budget?
- What is your project scope?
- What is your time frame?
- Have you ever worked with an interior designer before?
- What are you looking for in a designer? What do you expect?

Step 2: Existing Conditions and Furnishings Inventory Documentation
As experienced interior designers, the goal of the J. Schwartz Interior Design team is to enhance the function, safety, and aesthetics of interior spaces. Our goal at this stage is to get a handle on how different colors, textures, furniture, lighting, and space will work together to meet the functional needs of your lifestyle alongside your vision for a comfortable, beautiful space.
- Complete photographic documentation of interior Existing Conditions
- Complete and exacting measurement of the entire house, apartment or project-specific areas
- Electronic documentation of Existing Conditions
- Inventory of existing furniture (if any) to be reused, reupholstered or repurposed
- Existing Conditions as they appear in the plan (to scale)

Step 3: Schematic Design Concepts and Preliminary Budgets
At J. Schwartz Design, we're not about showcasing the latest interior design trends – we're about personal and unique room designs that exude timeless beauty and function. In this phase, we typically review:
- Inspirational montage images
- Preliminary design concepts in the plan (to scale)
- Pros and cons associated with each concept
- Preliminary line-item project budget ranges associated with each concept (specific to both furniture and remodeling ranges)
- The work required in this phase
- Client sign-off

Step 4: Design Development and More Refined Budgets
With an agreed-upon design concept for guidance, your J. Schwartz interior designer will move forward with a finished version of the plan that will bring your interior decorating and room design ideas to life. The following steps may specify final selections on materials, finishes and furnishings, including lighting, flooring, wall coverings, furniture and artwork.
- Further develop design concepts in plan (to scale)
- Relevant interior elevations
- Selected three-dimensional images as appropriate
- Pros and cons associated with each concept
- More refined budget ranges associated with each concept
- The work required in this phase
- Client sign off

Step 5: Interior Renovation / Remodeling Documentation and Contractor Identification
Experienced interior designers understand what builders need to provide the highest quality professional services and create an interior space for your personal style – and J. Schwartz Design's interior designers are no exception. It's important that together, we establish home design priorities and a project scope consistent with your budget. We prepare:
- Construction drawings and project scope document
- Contractor Pricing Set
- Assistance with determining project budget

Step 6: Construction Contract Administration
Creating a plan for a functional and stylish interior space is the first half of the project; partnering with the right contractor is key to ensuring you get as much or more joy out of your home or room design as possible. Our services ensure your confidence in the successful completion of your exciting new home interior project:
- Serve as a liaison between client and builder
- Administer the Contract for Construction through to successful project completion
- Interpret our drawings for design intent and assist with site-specific details
- Periodically visit site at appropriate milestone intervals and inspections

Step 7: Interior Furnishings Specification, Sourcing, Installation and Administration
We bring it all together. When it comes to transforming new or existing spaces, our talented interior designers are passionate about creative and inventive home decorating – resulting in functional living spaces that reflect your individual taste yet will never become trendy or out of style. Every unique home interior project calls for our team to design, specify, locate, fabricate, source or install any combination of the following elements:
http://www.jschwartzdesign.net/services/interior-design/
Short Bio: Incoming Interior Design Educators Council (IDEC) President Ankerson has been Department Head and Professor of Interior Architecture & Product Design in the College of Architecture, Planning and Design at Kansas State University since 2011. She previously served as associate dean and professor at the University of Nebraska-Lincoln, and as an architectural and interior design practitioner. Ankerson is passionate about the value of design to improve conditions for all, and about teaching, learning and investigation in the design disciplines. She is the author of the comprehensive digital works "Illustrated Codes for Designers: Residential" and "Illustrated Codes for Designers: Non-Residential" and leader of the collaborative 20th Anniversary Nuckolls Lighting Fund Grant award, Lighting Across the [Design] Curriculum.

Title of Presentation: Design + Make: Process and Product Intertwined

Synopsis of Presentation: Design education today must prepare global design citizens who foster synergy, embrace successful collaboration, and recognize interconnectedness; who are aware of the responsibility of individual and collective actions in personal, social, and environmental arenas; and who have the critical collaborative leadership skills to serve them throughout their careers. In an increasingly complex world, designers today and into the future must possess a variety of abilities and the confidence for an intentionally rigorous pursuit of design quality. We are faced with preparing emerging professionals who are poised to recognize and address challenges in creative ways; who embrace change and are change agents; who are leaders infused with both the confidence that comes from embedded and discovered knowledge and the wisdom to apply it; who are prepared and fluent in both traditional and digital processes; and who are passionate proponents of the impact of design. As well, we are preparing learners who will examine, question, explore, articulate, and create; who embrace design thinking in all aspects of life.

Description of Facility/Program: Interior Architecture & Product Design is a five-plus-year professional master's degree at Kansas State University, intertwining the processes of inquiry, design, and the making of space and form; rigorously pursuing excellence; and contributing to the betterment of the human condition. We educate based on the belief that the act of "making" is integral to developing a process of understanding; that design inquiry through evidence, design research and analysis is crucial; that haptic experiences foster depth in consideration and design; that insights and opportunities presented through richly investigated circumstances are fruitful; and that learning from and sharing knowledge throughout society and across cultures is imperative in addressing significant issues and the betterment of human life.
http://innovatekansas.org/2014/04/22/kathy-ankerson-bio/
5 Mistakes Designers Make While Sourcing Materials, Furnishings & Products

Learning how to improve your design by re-evaluating well-known common mistakes is paramount to the evolution of a designer's prowess. Changes to interior spaces should avoid these basic errors without exception to create a strong foundation of good spatial design.

- Impractical budget
A project's budget, usually established by clients or by stakeholders' feasibility analyses, should be the most prominent guiding factor for a project. Impractical, unreasonable, or fantastical budgetary constraints remain the largest contributors to project delivery failures. A proper and feasible budget should be created and wholly vetted in order to avoid wasting money, time and resources throughout the project's delivery and beyond. Project planning during the design process can maintain strict controls on budgets and prices through material selections and contractor requirements. Calculation of costs throughout the design process, be it schematic design(s) or detailed design(s), creates a multi-tier checks-and-balances system for budget and cost control. A review of the budgetary costing at each stage with the clients or relevant stakeholders can allow for quick course correction on the associated design. Supplier prices and vendor catalogs should be regularly updated with the latest products and price lists. It is also prudent to maintain a contingency provision for overruns, exigent circumstances, or unexpected disruptions in availability or supply chains.

- Importance of client feedback
Communication with the client is paramount during the design process. The design is purely for the client and not for anyone else's personal preference. It's important to ask the client about even the smallest details, even if the questions may sound silly at times. Clarification early on can avoid expensive errors later, when a pivot or rework becomes impossible. Communicating with the client after the design process for additional comments and feedback is also essential to confirm the client's requirements and avoid further adjustments or changes that might delay the overall project. This period of constructive feedback should be a learning experience to grow and get better at designing.

- Unestablished focal point
Every room needs a focal point. Creating one such focal point per space draws our attention and attracts the eye almost instantaneously. A focal point also pulls elements together to create a holistic story of harmonious design. The layout of the space is built around the focal point as it slowly flows and blends into the remainder of the space. This focal point not only balances the room but also pulls attention away from the general monotony introduced by layouts that cannot be changed.

- Scale in design
Designers often fail to properly integrate scale and proportion into a space by selecting wrongly sized furniture and pieces. For a balanced and well laid out look, it is important that actual usable dimensions and the functional use of the areas are understood and implemented into the design when furniture and fixtures are chosen. Symmetry or asymmetry, whichever design style is chosen, should remain consistent throughout the design story. Scale is yet another source of common errors, wherein designers ignore the effect of overly bulky or undersized items or fixtures on the overall thematic approach of the design.
Although variety in design, and occasionally daring and experimental designs, is welcome, it should not come at the expense of functionality for the end-user.
- Overdoing design trends Design has to have a personal approach. Whether it is a residence, hotel, café, or school, design works by playing with human emotions. Personalization based on the type of human interaction remains the pivotal concern of any design approach. A good designer first and foremost asks what the space means to the client and what can be done to enhance that space. Overdoing design, be it through a variety of trends or ideologies, detracts from the personal and functional approach of the design intent. Trends also change from time to time and often from one generation to another. The personality of the client, his/her use of the space, and prevalent trends should govern an astute designer’s approach to provide the client what he/she wants but may not be able to express.
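The stage-by-stage budget review described under the first mistake can be illustrated with a small sketch. The Python below is a minimal, hypothetical example, not taken from the article: the stage names, contingency rate, and cost figures are invented for illustration. It compares the running cost estimate at each design stage against the approved budget plus a contingency provision and flags when a course correction is needed.

```python
# Minimal sketch: stage-by-stage budget check with a contingency provision.
# All figures and stage names are illustrative assumptions, not real project data.

APPROVED_BUDGET = 100_000   # approved client budget
CONTINGENCY_RATE = 0.10     # 10% provision for overruns or supply disruptions

# Running cost estimates gathered at each design stage (hypothetical values)
stage_estimates = {
    "schematic design": 92_000,
    "detailed design": 108_000,
    "procurement": 115_000,
}

ceiling = APPROVED_BUDGET * (1 + CONTINGENCY_RATE)

for stage, estimate in stage_estimates.items():
    headroom = ceiling - estimate
    if estimate <= APPROVED_BUDGET:
        status = "within budget"
    elif estimate <= ceiling:
        status = "using contingency"
    else:
        status = "over ceiling - course correction needed"
    print(f"{stage}: estimate {estimate:,} ({status}, headroom {headroom:,.0f})")
```

Run at each review, a check like this makes it obvious which stage pushed the estimate past the contingency ceiling, which is exactly the point at which the article recommends a quick course correction with the client.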
https://wecarvespaces.com/2021/07/13/5-common-mistakes-designers-make-while-sourcing-materials-furnishing-and-products/
Lists of the PhD students and of the PhD graduates who have earned the degree at the Department of Design, organized by cycle. PhD students & graduates Cycle 37 November 2021 – October 2024
The research focuses on sustainability in the high jewelry sector. In particular, it investigates the relationship between the high jewelry brand Bulgari and sustainability issues. The aim of the research is to provide guidelines to facilitate sustainable practices in Bulgari’s jewelry supply chain. The research involves environmental, social and economic sustainability analysis and the identification of critical issues and opportunities for Bulgari.
Within the current participatory media landscape fostering the infodemic, the research explores the ways information visualizations can, with or without the intention to, mislead and deceive during social crises. The focus is on information visualizations circulating on social media platforms about Covid-19 and the Russian invasion of Ukraine. The research encourages the discussion around the un-neutrality of information visualizations by developing a theoretical framework which defines and classifies the types of “visualization disorders”.
What is a ‘political space’? What could it be? And what role does a spatial designer play in defining and envisioning such a place? Trying to answer these questions, the research aims to generate a definition of ‘political space’ in Western urban cosmo-local contexts and to open a discussion around the role of spatial design practitioners in the field.
The PhD project aims to explore and frame the emerging themes and initiatives related to the city of proximity, based on the spread of new hybrid communities and enabling platforms that are rapidly impacting the urban environment and defining new ways of living in cities by enhancing new relational hybrid proximities. The PhD aims to investigate transformative social innovations through Systemic Service Design and Relational Design by supporting and guiding collaborations and relationships.
Disruptive socio-political, cultural, and technological changes are transforming everything we know. Within this scenario, are fashion consumption cultures still surviving? In which direction has the relationship between fashion products, services, consumers and territories evolved? What are the drivers of change and innovation? How does design relate to innovation processes in fashion retail? Who are its interlocutors? What skills and competencies does the designer need to take part in meaningful social, cultural, economic and technological innovation processes in fashion retail?
The scope of the research is to design innovative products from a technological and ethical point of view by investigating problems of our present that will shape our future. The first goal, focused on privacy, is twofold: to protect facial biometric data and to create awareness of the improper use of facial recognition technology, a problem which, if neglected, could freeze the rights of the individual. We are working on a textile woven with a protective algorithm to prevent citizens' biometric data from being collected.
The research focuses on enabling design practitioners and entrepreneurs in accelerating and scaling up “emerging materials for the ecological transition” (understood as biobased, waste-based and biofabricated), from their ideation, design and development to their meaningful use in sustainable product design applications.
Furthermore, through a Material-Driven Design approach, it explores their materials experience, their features and how these all influence the degree of acceptance by users.
The research topic is answering the need for people-centered chronic care by enhancing the capacity of care ecosystem providers for resource integration and activation in the context of service design. The research aim is to use and bring service design practices to define and integrate resources so as to provide a change for and within the care ecosystem.
The scope of this research is to demonstrate that, through design strategies, the textile industry can intercept the needs of users interested in green themes who also demand performance from textiles; in this way, it will also be possible to involve “non-green consumers”. Durability is not only a matter of materials and technologies; we consider the value of design as a product life extender also based on the “non-material properties” of clothes. The intervention areas will be material strategies, such as product life-cycle, or non-material ones, such as emotional durability, to encourage behavioural change.
With the development of the Connected Autonomous Vehicle (CAV), a revolutionary technology considered a driver of social sustainability, humans will become less involved in driving activities, while in-car human behavior will become more diverse. As such, the human-car relationship is bound to change disruptively, and emotion is a fascinating topic within this change. This PhD research aims to explore UX design strategies to achieve a meaningful human-vehicle emotional connection.
Thanks to the media revolution, time has become portable and elastic. In a culture dominated by the use of smartphones, time dissolves the boundaries between day and night, work and leisure, and space and time, and reshapes the way people shop and socialise. This research aims to investigate a time-based approach to developing strategies for retail space design under emerging smart city and digital scenarios, taking human-centric concerns and social impact into account to improve the quality of human-spatial interaction.
The research explores how sense-based performance in interior spaces could be understood, measured, and evaluated, and how the field of Design could create tools for measuring and customizing the experience of the senses in interior spaces. These tools will be represented as design KPIs for human behavior performance in interiors, based on the human senses, that designers and companies could follow and use from the beginning of the design process and not only in the user experience phase. Her research fills the gap left by the lack of interior design KPIs built on sense-based performance, compared to the existing KPIs related to sustainability, energy saving, etc.
The PhD core is the humanisation of technology, positioning the user as the protagonist, to promote sustainable behaviour when interacting with AI-infused objects, considering the UX aspects of the object. Ubiquitous computing elicits an endless range of internet-enabled devices, offering the user the potential to be more diligent in energy use, except that it creates an ever-growing web of data-consuming objects that stay on forever. The PhD aims to set design guidelines for sustainable connected objects.
Visions of a future led by modernity standards are embedded with determinations of gender, race, and technology.
This one-sided pursuit of innovation consolidates the status quo by imposing its values systemically. Trend research sets the basis for any project in most design methodologies, and is also based on neo-colonial ideals. Decolonizing this practice is imperative to obtain a plural future that represents multiple realities, breaking the imperial predominance that this discipline holds today.
Her research topic is “Service Design to promote a systemic and transformational perspective of well-being,” especially multi-level (individual and collective) and sustainable (long-term impact) well-being in the context of public services. Her research aims to use Service Design to adopt and reach toward a systemic perspective of well-being in order to facilitate the transformation of public services.
The research topic concerns the theme of “improving users’ experience of museum exhibitions through digital technologies”. To date, the main focus of her research is the possibility of defining guidelines and implementing tools suitable for supporting co-design, between different actors, of digital, interactive, immersive, and multisensory experiences in the context of GLAM exhibitions.
In relation to the needs that emerged during the Covid-19 pandemic, the research aims to analyze and define a taxonomy of future workplace patterns and their role in the urban context as places of collaboration and collision. The doctoral research focuses on the development of a sense of community between local citizens, companies, and employees from a spatial design perspective.
Nowadays, digital technology in museums and temporary exhibitions is one of the most important aspects for improving the narrative and engaging the audience. In particular, this research proposal focuses on the design of sound, intended both as content (museums and temporary exhibitions on sound, music or film subjects) and as a tool of narrative (sound as a way to express content). This research aims to study how sound systems in museums can be an interesting field of experimentation for exhibition designers and interaction designers in envisioning new models of immersive cultural experiences.
The research project aims to achieve better hospice care by building a new collaborative service framework. It also attempts to use this collaborative caregiving process as a service encounter to build closer interpersonal relationships and strengthen community proximity.
Cycle 36 November 2020 – October 2023
The historical framework of Italian design is not only linked to icons and masters, but is made up of a plurality of professionals, projects and companies that are united by the polytechnic culture, that complex cultural milieu between humanistic knowledge, science, technology and business. The aim of my research project is to study the polytechnic culture by highlighting the work of designers who have been part of it and who have not been sufficiently studied until now.
This research aims to create an “organic waste network/platform” able to supply waste and biodegradable products that could be turned into “raw materials”. In this regard, the research will focus on the identification, classification and mapping of a series of waste products with properties that enable them to be reintegrated in scalable production processes and to create new sustainable materials suitable for design and consumption products.
Efficient design can be used to solve problems in our society, and technology, as a ubiquitous part of that society, is a cultural expression embedded in it. The PhD focuses on how user interfaces and data have a discursive role in portraying and interpreting society, while constructing a system in which designers don’t have the right tools to consider the political power of the platforms they design, and not all users have the right knowledge to act on what they inadvertently produce.
The research proposal explores design for Social Innovation and Spatial Design, aiming to increase social well-being in heterogeneous city-frames made up of different users and communities, and to support creative temporary actions for the long-term regeneration of commons. The expected result is the development of guidelines for policy makers and for citizens to promote new city-making scenarios, with the legacy of actions as a distinctive quality.
The research fits into the area of intersection between Communication Design and Gender Studies, with a particular focus on the role of designers as socio-cultural actors. In a framework characterised by growing attention to gender issues, the PhD research aims to offer long-term responses to the appeals and policies that the European Union has issued in recent years (2017/2210(INI); 2008/2038(INI)) to fight gender stereotypes in the media.
The doctoral research is aimed at studying orientation processes in general terms and, in particular, in urban public places. The research question stems from the observation of the inadequacy of current orientation systems, mostly entrusted to signage artifacts that do not always respond efficiently to the needs of users.
The research project aims to explore how the circular economy framework is used by community-led projects on the local scale, and to identify its mechanisms for activating new participation systems and, thus, establishing long-lasting behaviors in activism. The goal of the research is to design and implement a participation model for the facilitation and support of community-led projects, as well as to build a theoretical framework for the social dimension of the circular economy in the urban context.
Different scholars affirm that we are living through the transition from the Information Age to the Imagination Age, in which creativity and imagination are going to become the most relevant drivers and skills for every human activity, in order to address a shrinking landscape of wicked issues and to live in a fast-changing technological society. The research will investigate the role of imagination in shaping design models and approaches suitable to achieve a sustainable transition towards responsible futures.
The main goal of the PhD is to develop a material selection method for the packaging sector capable of meeting both the needs and the requests of different stakeholders (such as producers, the environment and final users) and to use Strategic Design theories and tools to foresee, already at the selection stage, long-term strategies for the company related to the introduction and use of a new sustainable material.
This doctoral research aims to propose a framework for the theory and practice of Speculative Services by combining Speculative Design and Service Design and integrating System Oriented Design.
In the context of the transformation to an inclusive society, the Speculative Services approach enables policymakers and civics to understand, explore, discuss and reflect on the topic of social exclusion to promote the inclusive development of society.
To develop a complementary methodology to support knitwear’s latest technological innovations, including predictive modelling software.
The research proposes to reconcile the contributions of the design, systems and complexity science, and public policy fields to articulate a non-linear framework of policy-making that addresses the limitations of existing ‘stage-based’ models. Subsequently, the research seeks to understand the elements of the ‘policy craft’ and the organisational transformation necessary to systematically embody and enact the framework.
Federico’s PhD research explores the potential of design capabilities as catalysts for generative power within urban environments, with the intention of creating social value. The aim is to address the role of design in fostering productive and bilateral collaborations between public administration and citizens. The research objective is to frame citizen engagement within the design discipline and empower urban networks with novel capabilities to co-create socially sustainable interventions.
Although digital technologies and sustainability are assuming a key role in the design of new artifacts, their integration into design practice is still challenging. My PhD research aims to investigate the link between design, materials and additive manufacturing for sustainability. Thanks to a design engineering experimental approach, new materials from waste and emerging design strategies for 3D printing will be analyzed to develop new tools for their integration into design practice.
The research project is part of a study context that analyzes the relationships between objects and the body. It wants to respond to the growing tendency towards hybridization between the body and artifacts that amplify its performance, monitor its conditions or, in general, interact with it. Particular attention is paid to those devices that aim at the well-being of the human body and, specifically, the research investigates the role of the fashion designer in their design.
My research is co-funded by the Museo Nazionale Scienza e Tecnologia Leonardo da Vinci in Milan and it bridges Science and Technology Studies, Museum Studies, and Design. It aims at understanding how narratives about media objects in the collections are constructed and negotiated in and out of the museum, in relation to the objects’ multiple roles in technoscientific heritage, as design objects and historical artefacts, and in communities of people, companies, and institutions.
The research aims at understanding how the digital sphere could be experienced and exploited to augment the cultural value of fashion heritage, investigating how the diffusion of culture could be promoted and fostered in the fashion sector in light of the digital transformation and the pervasiveness of networks and social media platforms.
The application of Digital Technologies and Virtual Reality to the valorisation of architectural heritage will break through the limitations of traditional protection methods and make architectural heritage live on through digital preservation. My research will take Chinese architectural heritage as an example to find a new way for the protection and development of architectural heritage.
With the improvement of automation, and due to the lack of communication with human road users (HRU), people are worried about sharing streets with AVs, which hinders users' acceptance of this technology and leads to safety risks. Therefore, this research aims to explore the design potential of HMI in AVs to increase user acceptance of the technology.
Cycle 35 November 2019 – October 2022
Many organizations in recent years have used design thinking as competitive leverage. However, very often this adoption amounts to a partial absorption of design within the company. There are still many doubts about how to really integrate it, avoiding it becoming a temporary adoption with short-term impact. The PhD research will investigate the inertial factors that keep design thinking from getting off the ground and prevent it from being fully integrated into organisational processes.
How smart technologies are radically changing the spatial design of new cruise ships: the significant premise of the research is the transition in the cruise sector from 'Fun ships' to 'Smart ships'. The general aim of the doctoral thesis is to investigate the development of the interior design of the common spaces in cruise vessels, with a focus on the supply of digital technologies and smart materials in these areas and their contribution to the information and entertainment of the customers, in the perspective of a more sustainable process in vessel design.
Within a digitalized global ecosystem, artificial intelligence and other emerging technologies transform cities into metabolic service systems that can be controlled by a data-driven technological infrastructure. The design discipline is fundamental to materialize new collaborative city scenarios in which to rethink the human-artificial relation. Within this emerging scenario, the doctoral research has the aim of imagining a new experimental service design model of action.
Further research is needed to determine if and how the identified factors are interrelated and how the factors can be improved in practice. This study will explore the relationship between the multiple factors that affect the sense of home of the elderly with dementia from the perspective of environmental factors, and develop a more comprehensive design strategy to shape the sense of home in long-term care institutions based on environmental factors.
Emergent technologies are generating profound transformations in the fashion industry, driving a paradigm shift. Fashion SMEs are struggling to adopt such technologies, reducing their opportunities to develop more competitive and customer-driven actions. The PhD research will identify the skill gaps and barriers that are limiting fashion SMEs in catching digital revolution opportunities and, accordingly, will develop methods and approaches to support innovation.
I work on Design for Sustainability, mainly focusing on sustainable furniture. My research centers on product Life Cycle Design (LCD) and Sustainable Product-Service Systems (S.PSS), trying to find a solution to the unsustainable development model in the traditional furniture industry. The research is based on Chinese and Italian backgrounds because of the representative role of the two countries both in development models and in the motivation for sustainability.
Speculative Design researchers have used Design Fiction proposals and methodologies in an attempt to provoke discursive debate about future technological and socioeconomic challenges.
Still, there is room for development in the area of Design Fiction application from a participatory, Human-Centred paradigm. The aim of the PhD is to investigate Design Fiction practice, as well as to explore and propose an alternative framework for design futures using Design Fiction as a tool for innovation.
The aim of this study is to clarify how design can lead innovation in complex world settings. Bridging business and innovation management and humanities and social-scientific knowledge with design research, he will explore the fundamentals of design for leading innovation. He also tries to give clearer explanations of designers’ new functions as mediators between the different knowledge cores that produce innovation.
Vanessa’s PhD research regards the topic of 'Civic Design', intending to build a relevant theoretical framework to be translated into tools and/or procedures to be applied in real projects. In particular, the research is exploring the meanings of the concept of 'civic', what it means to 'design civically' and how to do it, especially from the point of view of public administrations.
The choice of material is a key factor in the environmental impact of both products and services; for the same reason it can also become the turning point in terms of innovation and sustainability for future production. My PhD research is based on the redefinition of the concept of sustainability through the lens of materials and will investigate the frontiers that the project applied to the material can achieve in the definition of a new materiality.
In a world where Artificial Intelligence is becoming tangible, primarily according to technology-driven principles, my research aims to look for solutions to meaningfully integrate AI into our ordinary life. The approach will be cross-disciplinary, taking into account design pillars as well as innovatively intersecting Artificial Intelligence, Environmental Psychology and Emotional Design theories, to allow our domestic environment to bring positive emotions instead of frustration.
Her PhD research investigates both the innate and learned processes of knowledge, drawing on the study of neuroscience and cybernetics, with the desire to identify in which phase of the process narrative can be inserted. The goal is to create a theoretical model of learning through narration that allows artificial intelligence (AI) to fit into a specific social context through the construction of evolutionary experiential knowledge.
The aim of the doctoral research is to investigate the emerging phenomenon of training in service design. In order to do so, the research will inquire into how the discipline is being taught, the implications of the growing demand for training in Service Design on the competences required, and how changes in the education field (such as digital technologies) could benefit the process of learning. Thus, it will explore future developments in training in Service Design.
The relationship between human beings and non-human agents is still extremely precarious, especially in those human settlements where urbanization and anthropization have reduced the possibility of interactions with different living entities. The PhD research investigates the role of Design and its methods, approaches and tools to foster people’s awareness in perceiving human and non-human agents as an interdependent collective based on mutual help and care.
Cycle 34 November 2018 – October 2021
The general aim of my research is to investigate the relationship between the aesthetics of interaction and users’ awareness in the interaction between intelligent systems and users. In particular, I aim to study how the sensory language of interactive systems can be designed to foster users’ critical faculties, with special regard to ethical judgement and behaviour change in the face of socially sensitive issues (e.g., energy consumption).
Experimenting with the role of design in the development of a Sustainable Apparel Supply Chain Model (SASCM). The focus of the proposed doctoral research is the role of design in guiding the possible directions for the future development of the FDfS sector: the passage to a holistic paradigm that, considering the supply chain as a continuum, can imagine a transformation of designers' mindsets to guide the development of an alternative, circular, and sustainable apparel supply chain model within the fashion context.
The research will develop a methodological framework aimed at raising awareness among non-expert users of biased machine learning models. The research will explore the relationship between communication design and explainable machine learning applied to the automatic classification of text or pictures. Since machine learning models for text and image classification are frequently biased by the mental models and personal experience of the experts who train them, the research will investigate how data visualisation and communication design can channel the perception of reliance and doubt.
My research proposes the development of a Colour Design Training Itinerary as a complete educational framework (intended learning outcomes, contents, methodology, teaching and learning activities, and assessment strategies) that sets out different levels of action for the teaching and learning of colour in the design discipline. This is being done with special attention to observation and direct experience as ways to inspire the consideration of colour phenomena within design practice.
Her doctoral research aims to understand and decode ‘atmosphere’ as a particular spatial condition and to elaborate an interpretation of the atmospheric phenomenon in the specific field of temporary exhibition spaces. The project objective is to set up a codified design methodology and approach that contributes to the integration of the concept of atmosphere within exhibition environment design and to build a reference lexicon to establish a better understanding of the exhibition space.
Communication Design can provide new answers to the search for a conceptual resolution between the ideas of 'places of memory' and 'memory of places', focusing on the notion of the mnemotope. The term lexically resolves this source of tension, and mnemotopes can be considered culturalized objects of territorial interpretation. Design for mnemotopic communication, founded on map-based systems, can interpret the memory of places and succeed in translating and reactivating territorial stratifications.
Touchpoints are one of the essential aspects of service design. In our daily life, services are delivered via multiple touchpoints. The turning point is the time at which a situation starts to change significantly, and it could happen through innovation. Nowadays, cities operate on a ‘take-make-dispose’ system, and waste is hastily disposed of in landfill or by incineration.
Urgent action is needed to implement a model that fosters technological, social and organizational innovation for sustainability. In light of this, the circular economy is seen as a potential solution for designing out waste. My doctoral research will focus on how we can use service design to disrupt sustainable business models and system structures.
Modern design, no longer the design of “objects”, has transformed into a kind of design strategy. China is confronted with imbalanced urban-rural development, but modern design has not yet studied China’s rural society or the urban-rural relationship in depth. The monotonous design practice lacks its due audience in the countryside, and the existing theories and analyses of rural design are far from satisfactory. His PhD research focuses on studying the possibility of a sustainable interactive strategy for China’s urban-rural resources from the perspective of design; more specifically, on inclusive and sustainable space design that strengthens the interaction between urban and rural areas.
The research will develop a methodological framework aimed at innovating processes of understanding, analysis and development for policy making in urban ecosystems. In order to do so, the research will investigate the emerging field of “design for policy”, specifically by looking at the role of digital data and technologies for their interpretation and visual representation.
The research project intends to investigate and redefine the role of design as a central element for the development of a renewed Italian fashion system, thanks not only to new technologies but also to a broader political vision. Design therefore acts as a motor to redefine a set of processes capable of giving new life to the fashion sector, thus favouring the rebirth of Made in Italy epicentres capable of generating new ecosystems (linked to the educational, production and cultural systems) and realizing a renewed Italian creative economy.
By electing urban space as a metaphor for the space of the mind, it becomes clear how, for many authors of the psychoanalytic world, such as Ernest Hartmann, the processes of representation and metaphorization are in themselves processes of growth, development and change. Hence, the internalization of the points of view of other authors (e.g. Massimo Recalcati, Massimo Schinco) can contribute to the introduction of new visions and to the evolution of tools and methods in the design of public spaces.
The doctoral research contributes to the discussion about the relationships between Spatial and Service Design and how these two disciplines can interact to achieve more complexity within the context of Public Interiors: empowering the spatially contained environments inside civic buildings, institutions, cultural buildings and mobility infrastructures; thresholds between the urban public and the private context; enhancing them through actions of spatial design and building relations to services and programs.
I am researching the topic of collaborative design-based learning in culturally plural Higher Design Education settings because I want to find out how this formative experience fosters students’ acquisition of intercultural collaborative competences. The ultimate aim of my research is to help design teachers and instructors understand how the implementation of teaching strategies could contribute to integrating these competences within design-based learning courses.
The material selection process affects design decisions from the very beginning. The use of a specific material instead of another has consequences at every level of the production flow. Proper material selection is not as simple as it may seem, and it is often considered a tedious and time-consuming task to execute. Moreover, it becomes even more complex when the theme of material substitution occurs.
The PhD research investigates the importance of design as an enabler of technology in the development of digital products for behavioural change and in envisioning scenarios able to face new societal challenges. A research-through-design approach based on design fiction will foresee the use of persuasive technologies for the investigation of future scenarios. These scenarios will be explored with users through digital artefacts developed by exploiting new manufacturing processes.
Even though the mass digitization of museum content is useful for creating digital libraries and making the content publicly available, it is still not serving the real purpose of Cultural Heritage (CH) preservation and promotion. The aim of this project is to efficiently use digitized museum artefacts to design new modes of interaction between people (visitors, scholars and curators) and cultural assets, by linking the latter with interdisciplinary information and implementing augmented reality (AR) as a tool to promote them.
While the design work on public services for the implementation of policies is now considered a matter of design, scholars advocate for a deeper study of the design work involved in informing, formulating and reframing policies. This research seeks to understand the role of design in the early stages of the policy-making process by depicting the design work and methods used in this process by public sector innovation teams from different continents.
The fundamental goal of interior design education is to prepare students for the profession of interior design by teaching skills and knowledge. In recent years, the interior design profession has changed significantly. Because interior designers take many different approaches to meet the demands of a rapidly changing society and the diverse requirements of clients, interior design education has been specialized in diverse ways (Interior Design, 2004). The purpose of his doctoral research is to explore ways of comparing interior design education in Italy and China. The study may make significant contributions to both countries and to the people associated with interior design fields, such as professors, students, professional designers and their employers. It could also be a good pilot study for testing different methods for comparing educational programs across different cultural backgrounds.
Serious games are “games that do not have entertainment, enjoyment or fun as their primary purpose”. Serious game designers use people’s interest in video games to capture their attention for a variety of purposes that go beyond pure entertainment. Serious games give rise to playful experience, which is recognized as a way of achieving innovation and creativity. They help people see things differently or achieve unexpected results. A playful approach can be applied to even the most serious or difficult subjects. The research now focuses on bridging the gap between serious intention and game-play experience through a game design method called “purpose shifting”.
http://phd.design.polimi.it/overview/phd-students-and-graduates/
Construction Drawings and Details for Interiors has become a must-have guide for students of interior design. It covers the essentials of traditional and computer-aided drafting with a uniquely design-oriented perspective. No other text provides this kind of attention to detail. Inside, you'll find specialty drawings, a sensitivity to aesthetic concerns, and real-world guidance from leaders in the field of interior design. Updated content is presented here in a highly visual format, making it easy to learn the basics of drawing for each phase of the design process. This new Third Edition includes access to a full suite of online resources. Students and designers studying for the National Council for Interior Design Qualification (NCIDQ) will especially appreciate these new materials. This revision also keeps pace with evolving construction standards and design conventions. Two new chapters, 'Concept Development and the Design Process' and 'Structural Systems for Buildings,' along with expanded coverage of building information modeling (BIM), address the latest design trends. - Includes online access to all-new resources for students and instructors - Provides real-world perspective using countless example drawings and photos - Focuses on interior design-specific aspects of construction documentation - Serves as a perfect reference for the contract documents section of the NCIDQ exam Written by designers, for designers, Construction Drawings and Details for Interiors remains a standout choice for the fields of interior design, technical drawing, and construction documentation. From schematics through to working drawings, learn to communicate your vision every step of the way. ROSEMARY KILMER, ASID, IDEC, LEED® AP, and W. OTIE KILMER, AIA, are Professors Emeritus of interior design at Purdue University, Indiana, and lead their own design practice, Kilmer and Associates. Rosemary Kilmer has served on the Board of Directors for the NCIDQ exam. She and W. Otie Kilmer each have over three decades of teaching and design experience.
https://www.pubmatch.com/book/197681.html
Night Fever 6 unveils outstanding and inspirational destinations that are setting the direction of contemporary hospitality design. Divided into chapters illustrating key trends in the field, the book showcases 100 hospitality interiors from across the world, on a total of 500 pages. Projects are selected based on their original concept, creativity, innovative approach or the project's unmistakable wow-factor. Each interior is presented in two to six pages, through an engaging explanatory text about the design and a curated selection of stunning photography and elucidatory drawings. History of Interior Design is a comprehensive survey covering the design history of architecture, interiors, and furniture in civilizations all over the world, from ancient times to the present. Each chapter begins with background information about the social and cultural context and technical innovations of the period and place, and illustrates their impact on interior design motifs. Throughout the text, cross-cultural influences of styles and design solutions are highlighted, demonstrating how interior design has evolved as a continuing exchange of ideas. Interiors Beyond Architecture proposes an expanded impact for interior design that transcends the inside of buildings, analysing significant interiors that engage space outside of the disciplinary boundaries of architecture. It presents contemporary case studies from a historically nuanced and theoretically informed perspective, presenting a series of often-radical propositions about the nature of the interior itself. Internationally renowned contributors from the UK, USA and New Zealand present ten typologically specific chapters including: Interiors Formed with Nature, Adaptively Reused Structures, Mobile Interiors, Inhabitable art, Interiors for Display and On Display, Film Sets, Infrastructural Interiors, Interiors for Extreme Environments, Interior Landscapes, and Exterior Interiors. Goods 3 features fifty iconic interior products from well-known designers and manufacturers from around the world. Each product is shown on eight pages from initial design sketch to its use in recently-designed interiors. Step-by-step descriptions with many drawings and photographs show how the furniture and products are designed and made. The built environment affects our physical, mental and social wellbeing. Here renowned professionals from practice and academia explore the evidence from basic research as well as case studies to test this belief. They show that many elements in the built environment contribute to establishing a milieu which helps people to be healthier and have the energy to concentrate whilst being free to be creative. Businesses and schools today are looking for ways to spur the kind of creative thinking that leads employees and students to generate innovative ideas. Many are finding that the physical spaces in which people work and learn can provide a strong impetus to follow a creative train of thought. Space for Creative Thinking puts this trend into the knowledge-work context, discussing the underlying design concepts that factor into making a space that stimulates original thinking. The book follows this outline of theory with twenty compelling examples, which range from offices and schools to research facilities. Each case study is presented through photographs, as well as interviews with both designers and users. 
Smart Spaces showcases interiors where brilliant solutions have been found that justify the adage “a place for everything and everything in its place”: clever storage options and accessories for every room of the house, whether living room, bedroom, kitchen or bathroom. Some are artfully hidden behind walls or staircases, others are in plain sight and multipurpose; all are notable for resolving space problems. The exposed lightbulb is the must-have design item of the 21st-century interior. Be inspired by the host of cutting-edge ideas to use this modern product in your own home. With this essential guide, Charlotte and Peter Fiell, 21st-century design experts and authors of such Taschen classics as Design for the 21st Century, 1000 Chairs and 1000 Lights, walk you through lighting with bare bulbs to achieve and complement a range of styles, from opulent to industrial, rustic to minimalist. Portfolio Design for Interiors teaches the aspiring interior designer how to create a professional portfolio. Using real examples of outstanding student portfolios, authors Harold Linton and William Engel demonstrate how to analyze, organize, problem-solve, and convey diverse types of visual and text information in various forms of historic, contemporary, and innovative styles. Best of German Interior Design features the fifty most renowned German designers and fifty leading German manufacturers of home interior products. This opulently illustrated coffee-table book presents numerous iconic products that demonstrate interior design "Made in Germany." The raw charm of rustic farmhouses, the inviting ease of country homes: New Romance features romantic interiors inspired by modernity. From country house to chalet, New Romance highlights the charm and grace of interiors. Soft classic tones and unfinished woods provide the look and feel of dreamy antiquity. Mudrooms and breezeways bring the bright airiness of rustic outdoor spaces within the walls of beautiful homes and residences. The classic lines and traditional textures nestled within a palette of pale greys and rosé tones add to the visual storyline: soft and sophisticated, nostalgic and contemporary. New Romance presents the impeccably stylized and the casually comfortable whilst providing creative insight and inspiration for established interior designers, quixotic stylists, and those undertaking their own DIY projects. In keeping with Studio O+A's unique approach to design, Twelve True Tales reimagines the traditional designer's monograph as a company self-portrait with quirky design proverbs, behind-the-scenes photos and a graphic novel insert that presents the firm's history with characteristic irreverence. Small Homes, Grand Living's assortment of projects and homes pays homage to the iconic innovation within modest living areas and shows the creative usage of space in continually expanding urban areas. As more people across the globe move into cities, living space becomes a precious commodity. A collection of cozy cocoons shows the personality and innovation of those living inside: a home is both shelter and a welcoming reflection of the residents. Small Homes, Grand Living offers real interior design solutions directly from the occupants' imaginations. Small Lofts features over 30 lofts, almost all under 1000 square feet/100 square meters, with floor plans and high-quality photography exploring interiors from every angle.
All demonstrate unique architectural features, innovative furniture solutions, and tasteful, space-saving designs that maximize both utility and style. Showcasing projects from Tokyo to Madrid, New York to Paris, Rome to Bratislava, Small Lofts provides ample inspiration for creating a sense of boundless space in compact quarters. Best of Residential is a beautiful deep dive into the most cutting-edge residential design being done in the industry today. This book is a must have for the design community interested in the culture and those looking for a valuable resource on the best residential design. Every room in the house presents its own specific challenges as well as presenting a plethora of choices when it comes to choosing important (and expensive) elements such as flooring and lighting. Whether it's a stylish but practical kitchen floor or plenty of hardworking bedroom storage, in Space Works hard-working expert advice is on hand from Caroline Clifton-Mogg, Joanna Simmons, and Rebecca Tanqueray. Every room is discussed in detail and the topics covered range from galley kitchens and bedroom lighting to choosing sanitary ware for bathrooms and creating children's work spaces for homework and crafting. The last half of the twentieth century saw the emergence, evolution and consolidation of a distinct interior design practice and profession. This book is invaluable for students and practitioners, providing a detailed specialist, contemporary historical analysis of their profession and is beautifully illustrated, with over 200 photos and images from the 1950s through to the present day. Swatch Reference Guide for Interior Design is a complete learning tool for interior fabrics. An all-in-one text and swatch book, it is replete with 145 contemporary swatches relevant to the field of interior design. This reference offers all the pertinent information needed for fabric identification, analysis, acquisition, and usage. London-based designer Tara Bernerd is known for creating interiors that have a very special sense of place. Committed to the utility of good design, Bernerd works on an increasingly global platform with projects around the world. From the interiors of stunning beachside villas to chic urban apartments, her intelligent use of spatial planning, keen eye for composition and detailing, and remarkable flair for color and texture made her one of the most sought-after interior designers in the world. This book captures Bernerd's intuitive ability to create luxurious interiors that possess a remarkable feeling of character and warmth. Fabric for the Designed Interior is a comprehensive text addressing both residential and commercial interiors. The book begins by placing fabric in a historic context, examining its connection to the growth of civilization. Later chapters take a practical approach to provide readers with the tools they need for successfully specifying fabric, dealing with environmental and safety concerns, understanding fabric and carpet-care issues, working with bids and contracts, and learning strategies for navigating showrooms and fabricating facilities. Leading designers, fabric manufacturers, and suppliers weigh in with their experiences, giving readers a clear idea of real-world expectations. Night Fever 5 takes a grand global tour of the best in hospitality design. It showcases 130 recent and extraordinary interiors and delves into the design concepts for some of the world's top destinations to drink, dine, and dream. 
Readers discover how initial design ideas transform and develop as the tantalizing spaces are realized. In House of Hoppen, Kelly Hoppen, MBE takes a look back over her stellar career, which began when, as a sixteen-year-old full of drive and ideas, she was commissioned to design the kitchen of a family friend. Packed with previously unseen imagery and letters from Kelly's personal and professional archives, the book charts the course of her career and celebrates some of the many highlights. House of Hoppen features a rich mix of imagery, from unseen archive material, including personal photographs and letters, to illustrations by contemporary artists, to interior photography by acclaimed photographers such as Tom Stewart, Bill Batten, Vincent Knapp, Mel Yates and Simon Upton. Award-winning and internationally renowned textile designer Weitzner brings her signature aesthetic and sophisticated color sense to the page in Ode To Color: The Ten Essential Palettes for Living and Design. In ten thematic chapters, she employs her expert insight in essays, literary quotations, pop culture anecdotes, and dazzling visuals to explore the role of color in our lives and homes. The wide range of projects presented in this volume constitutes a comprehensive study of the latest trends in the architecture and design of children's spaces. Each project presents an innovative use of materials, color, lighting and texture to form a space that is stimulating, educational and safe for young people.
https://www.dexigner.com/directory/cat/Interior-Design/Books
Arcvisa Studio embraces all forms of architecture with true passion, providing a wide range of integrated design. We offer full or partial architectural services to best suit each client’s requirements. Full Stage – Professional Architectural Design Process & Services:
- Stage 1: Inception Initial meet and greet. Receive and report on the client’s requirements regarding the client's brief; site location and orientation, rights and constraints; budgetary constraints; and the need for consultants.
- Stage 2: Concept and Viability (Architectural Design & Concept Sketches) Prepare an initial design and advise on: the intended space provisions and planning relationships; proposed materials and intended building services; and the functional characteristics of the design; and check the conformity of the concept with the rights to the use of land.
- Stage 3: Architectural Design Development Develop the design by confirming the scope and complexity; reviewing the design and consulting with local and statutory authorities; developing the design construction systems, materials, and components; incorporating services and the work of the consultants; and reviewing the design and costings with consultants.
- Stage 4: Documentation & Procurement 4.1 Prepare documentation needed for local authority/council submission, which includes: coordinating technical documentation with the relevant consultants; preparing specifications for the works; reviewing the costing and program with the consultants; and obtaining the client's authority and submitting documentation to local authorities for approval. 4.2 Complete construction/working drawing documentation and proceed to call for tenders, which includes: obtaining the client’s authority to prepare documents for the execution of the works; obtaining offers for the execution of the works; evaluating offers and recommending the use of a building contract; and preparing the contract documentation and making arrangements for the signing of the building contract.
- Stage 5: Construction Stages 5 and 6 are optional and are sometimes excluded from the architectural service provided. The architectural service provided for this stage comprises the administration of the building contract and includes: handing over the site to the appointed contractor; issuing a set of construction documentation; monitoring sub-contractors' design and documentation; monitoring works on site according to the design and contract documentation; performing the duties and obligations assigned to the principal agent, as set out in the JBCC building agreements or a similar approved contract; issuing the certificate of practical completion; and assisting the client to obtain the occupation certificate from the relevant local authority.
- Stage 6: Close Out The final stage of this process is the close-out of the project, which includes: the preparation of the necessary documentation for the completion and handover of the project; the issue by the architect of the certificate related to contract completion after the contractor’s obligations under the building contract are fulfilled; making operating instruction manuals available to facilitate a smooth handover and operation of all systems in place; and providing the client with as-built drawings and the relevant technical and contractual undertakings and guarantees by the contractor and sub-contractors.
Interior Architecture, Interior Design & Space Planning Although Interior Design is often seen as a separate “specialised” sector of the overall architectural design, we at Arcvisa Studio feel that the interior finishes, textures and look-and-feel should be developed simultaneously throughout the architectural design process. The architect often creates unique architectural design elements in such a way as to marry the internal environment with the exterior building facades. Interior design forms a large part of our design process at Arcvisa Studio and this is what brings about beautifully integrated design envelopes between our indoor and outdoor spaces. Architectural 3D Modelling, Photo Realistic 3D Rendering & Perspectives Photo-realistic 3D modeling & rendering is Arcvisa Studio’s pride and joy. We have dedicated staff members who completed training with international render artists, and our imagery speaks for itself. Taking time to perfect realistic design details in each and every render is what helps our clients visualise their dream before construction even begins. The more realistic the render, the more we feel the space comes to life. Pierre, our 3D render & CG director, refines every piece of imagery our firm creates, and his specialised skill in modeling and rendering is enhanced and developed on every new project he is involved in. Arcvisa Studio provides 3D modeling & rendering services to many South African and international architecture firms, interior designers, development companies, and marketing associates. Interior & Exterior Video Animation We have recently found that clients want to move around in their new design space, which has not yet been built. Video animation makes this possible. Arcvisa Studio can build up a project at any scale, from a newly renovated bathroom to an entire square kilometre property development and its surroundings. Video animation as well as 360-degree swivel renders allow you to understand and imagine the environment and atmosphere created within your newly designed area. Technology advances daily when it comes to CG video animation, and we are always excited to find new creative ways of allowing people to experience a design they are investing in. Arcvisa Studio provides Architectural Design, Interior Design, 3D Modelling & Rendering services to both South African and international clientele.
https://www.arcvisastudio.com/arcitectual-services
The Warner Group Architects, Inc. is a full service architecture and interior design firm with an interior portfolio as diverse and accomplished as the structures themselves. The ability to have architects and designers under the same roof affords a high level of continuity on all projects. Interior Design goes far beyond traditional decorative elements. The firm’s philosophy is that a home is one of the greatest forms of self-expression, inspiring its occupants and mirroring their lifestyle. Interior Design begins with the shell of a space and the selection of core elements such as flooring, countertops, millwork, fireplaces, plumbing, appliances, hardware, and acoustic treatments. Senior Interior Designer, Jamie Hallows, specializes in high-end residential and hospitality projects with an emphasis on unique and innovative design. Her education and experience span all aspects of the design and building process, giving her a valuable and well-rounded approach to interiors. She believes it is essential to create a dialogue between the architecture and interiors, and that the key component of any successful project is open communication with the client in order to achieve an outcome that is tailored specifically to them. Whether collaborating on architectural elements, or working on interiors-centered projects, Jamie and her team of designers are committed to the success of each phase of the project from concept to completion.
http://wgarch.com/profile/interiors
This paper presents key findings on user expectations of an augmented reality interior design service, a service which applies features of social media, augmented reality (AR) and 3D modeling to ambient home design. Our study uniquely bridges all actors of the value chain into the user-centered design of an augmented reality interior design service: consumers, interior design enthusiasts and professional interior designers, but also furniture retailers and digital service providers. We examine the benefits and challenges of applying user-centered methods with three different target user groups: consumers, pro-users and professionals. This paper also describes the desired features of an AR interior design service for the different target groups, and discusses their technical and practical feasibility. User expectations for AR interior design services were studied with a scenario-based survey, co-design sessions and focused interviews. Altogether, 242 consumers and pro-users responded to the survey on the ambient home design service. Thereafter, co-creation sessions with five consumers and two professional interior designers were conducted to develop the concept further with the target groups. In addition, we interviewed four different commercial actors in the field to deepen insights into product expectations. The combination of different user-centered methods proved valuable in the early phases of concept design. It appears that there is demand for easy-to-use design tools both for consumers and for professional users. However, our research confirms that consumers and professionals have different needs and expectations for a home design system, and therefore target users should be included in the design process with appropriate methods. Commercial actors see a viable business model as the most important attribute of the service, which must be taken into account in service design as well.
https://cris.vtt.fi/en/publications/user-centered-design-of-augmented-reality-interior-design-service
Famous Quotes About Interior Design Ideas
Last updated on July 13, 2022 by SampleBoard. Check out our list of design quotes below to kickstart your interior design journey! Here are some quotes by interior designers we love:
"I describe the design process as like the tip of the…"
"Decorating is not about making stage sets, it's not about making pretty pictures for the magazines, it's really about creating a quality of life, a beauty that nourishes the soul."
"An interior designer must be able to clarify his intent keeping ever in mind that decorating is not a look, it's a point of view." – Albert Hadley
"To create an interior, the designer must develop an overall concept and stick to it." It is with this fun take on a popular quote that designer Petrula Vontrikis reminds us that the best way to begin is by always having a concept in mind.
"Everything we design is a response to the specific climate and culture of a particular place."
"Architecture is an expression of values." This quote shows how important design is, and how it isn't just an academic discipline but a way we live our lives.
"Design is a plan for arranging elements in such a way as best to accomplish a particular purpose."
"A house that is 100% perfect is rarely full of charm." – Rebecca de Ravenel, New York interior designer
"The question of what you want to own is actually the question of how you want to live your life." – Marie Kondo, professional organiser
"Less is more."
"Design is knowing which ones to keep."
"I'm a designer, which includes interiors, architecture, fashion, furniture, and lifestyle."
"Live with what you love." – Unknown
"No longer shall I paint interiors with men reading and women knitting."
Smoke and mirrors: one quote suggests that bad design is misleading and obscures the truth, while good design is reflective and truthful. We are so obsessed with the design itself that we tend to forget nurturing our inner artist, who desperately craves it.
https://www.ancoti.com/2022/09/famous-quotes-about-interior-design.html
# The designer is expected to demonstrate a thorough understanding of the overarching critical elements of environmental and exhibition planning and design. The designer will be required to handle multiple projects at once.
Responsibilities
- Imagine and execute dynamic spaces.
- Effectively create mood boards and lookbooks to convey the atmosphere and feel of the environment and provide visual details of the preliminary project vision.
- Diagram out major spatial program areas and their relationships to each other.
- Communicate design intent and experiential intent through development of 3D computer-generated concept renderings, photo-composites, hand-drawn sketches, and graphics, and be able to direct others to do so.
- Create comprehensive documentation packages for review by the internal team and the external client, highlighting all design features of the space. Documents to include the following:
  - Detailed layouts, including arrangement of furniture, rental pieces, custom elements, and other activation components to optimize space, integrating all parts effectively to ensure optimal flow throughout the space.
  - Scaled floor plans and elevations
  - Polished renderings (at multiple stages throughout the design process)
  - Furniture and decor boards including all material and finish selections
  - Graphics and branding previews, key-coded to elevations and floor plans
- Work with a team (project manager, project coordinator, creative director, art director, senior designers, client, etc.) to successfully achieve project goals.
- Work within and manage a specified creative budget.
- Maintain accuracy and ownership of all details in the design process from concept to completion.
Required Skills
- BA required in architecture, interior design, exhibition design, graphic design, or a related field, with a strong design portfolio
- Ability to comprehend floor plans and architectural drawings
- Knowledge of graphic design and mechanical production
- Environmental rendering ability
- Knowledge of basic techniques of exhibition evaluation within the development process (preferred)
- Software and programs:
  - Adobe Creative Suite (InDesign, Illustrator and Photoshop)
  - SketchUp and/or other 3D modeling/visualization software (SketchUp preferred)
  - Microsoft Office Suite
Basic Criteria
- Extremely detail oriented
- Organized
- Flexible
- Able to multitask
- Comfortable talking to people (great personality!)
- Willing to work outside the standard work week
- Team oriented
What happens next? If you are interested in this opportunity, please apply below and we will review your application as soon as possible. Please note that due to the high level of applications we receive, it is not always possible for us to respond to each applicant in person. Should your profile fit this open position, we will contact you within approximately 2 weeks. You can update your resume or upload a cover letter at any time by accessing your candidate profile.
# Salary is commensurate with experience
# BRAVE is an equal opportunity workplace
# Promotion opportunities
BRAVE is an Interior Design & Decoration Co., Ltd based in Yangon. It was established in 2015 and we have a strong team of experts who have a decade or more of experience in their fields. After several projects involving architects and designers, we can guarantee quality and speed for all types of projects.
https://www.jobnet.com.mm/job-details/interior-decoration-designer-brave-interior-design-and-decoration-co-ltd/36835
Established in 1989, J. Jilich Design & Associates is well known as a preeminent interior design firm in Charleston, South Carolina. We focus on timeless, classic design rather than the popular trends that tend to go out of fashion quickly. We believe that the ideal home or office should still be a stylish space years from now. With the primary goal of translating our clients' individual ideas into a unique and cohesive design, we work diligently to accommodate these ideas within the existing or new space. J. Jilich Design & Associates offers an exceptional level of service from the beginning concept stages to the finished design, which allows our clients to work with us as we bring their visions to life in a personalized and unified space. Our interior designers are involved with each individual aspect of the design, working closely with architects and builders, which helps to keep each project running smoothly.
https://www.jjilichdesign.com/about
JKL Design Group offers our customers access to a full suite of interior design services from concept to completion, including project management and construction. Our design solutions result from a collaborative process that encourages multidisciplinary professional teams to think outside the box, create ideas, develop vision, and share knowledge in order to solve problems and create exceptional space. The firm is driven by the dynamic Kurt Lucas and his staff of architectural and interior designers, who are creative thinkers, space makers, and humanitarians. The team is skilled, experienced, and passionate about design, space, and you. These traits have seen JKLDG become nationally sought after, receiving numerous awards and recommendations from some of the most respected institutions and people.
http://jkldesign.com/services/
There are 5 essential elements of metaspace design that you need to pay close attention to: Radiation, Repetition, Form, Shape, and Harmony. Each one plays a vital role in creating a cohesive design. Without each, your metaspace will feel like an unfinished painting. If you follow these guidelines, you will create a metaspace that draws the eye. Read on to learn more. And remember, every element has its purpose and importance.
Radiation
This article presents a method based on a genetic algorithm (GA) that allows the optimal layout of metasurfaces. We have also presented the corresponding numerical simulations, which show that the optimum layout can modulate wave spectra in the far field. We have shown that the radiation patterns produced by our optimized metasurfaces match the simulation results, which confirms our method's effectiveness.
Repetition
When it comes to the design of a metaspace, repetition can be an important element. Repetition means repeating visual elements throughout a piece, bringing them together into one cohesive design. This is especially useful on one-page pieces, but it is equally important in multi-page documents. Repetition can make your design appear more interesting, which will increase your chances of getting people to read your content.
Form
Metaspace can grow and shrink dynamically, unlike PermGen, which has a set size. Form is an important element of metaspace design, as it is one of the key elements of effective metaspace development. It is an ideal place to store metadata, as it will be able to grow and shrink dynamically. It should be easy to use, with little effort. Here are some ways to apply the concept of form to metaspace.
Shape
We create interactions with buildings in the real world by opening and closing doors, getting in and out of lifts, and turning on switches. Metaspace design is much the same, only without the constraints of the real world. Instead, we design interactions with metaspace, where each element must be considered carefully. Shape is one of the key elements of metaspace design.
Harmony
Harmony is a sense of balance, and it is created when different design elements are repeated in a single space. They should have similar themes, aesthetic styles, or moods. In addition, they should blend to create a harmonious feeling. The architect uses color, shape, and pattern to achieve this balance in the design. The result is a harmonious space that feels right and is pleasing to look at.
https://www.baanthaihousesf.com/general/5-key-elements-of-metaspace-design/
# Coleoptile
The coleoptile is the pointed protective sheath covering the emerging shoot in monocotyledons such as grasses, in which a few leaf primordia and the shoot apex of the monocot embryo remain enclosed. The coleoptile protects the first leaf as well as the growing stem in seedlings and eventually allows the first leaf to emerge. Coleoptiles have two vascular bundles, one on either side. Unlike the flag leaves rolled up within, the pre-emergent coleoptile does not accumulate significant protochlorophyll or carotenoids, and so it is generally very pale. Some pre-emergent coleoptiles do, however, accumulate purple anthocyanin pigments. Coleoptiles consist of very similar cells that are all specialised for fast stretch growth. They do not divide, but increase in size as they accumulate more water. Coleoptiles also have water vessels (frequently two) along the axis to provide a water supply. When a coleoptile reaches the surface, it stops growing and the flag leaves penetrate its top, continuing to grow. The wheat coleoptile is most developed on the third day of germination (if kept in darkness).
## Tropisms
Early experiments on phototropism using coleoptiles suggested that plants grow towards light because plant cells on the darker side elongate more than those on the lighter side. In 1880 Charles Darwin and his son Francis found that coleoptiles only bend towards the light when their tips are exposed. Therefore, the tips must contain the photoreceptor cells, although the bending takes place lower down on the shoot. A chemical messenger or hormone called auxin moves down the dark side of the shoot and stimulates growth on that side. The natural plant hormone responsible for phototropism is now known to be indoleacetic acid (IAA). The Cholodny–Went model is named after Frits Warmolt Went of the California Institute of Technology and the Russian scientist Nikolai Cholodny, who reached the same conclusion independently in 1927. It describes the phototropic and gravitropic properties of emerging shoots of monocotyledons. The model proposes that auxin, a plant growth hormone, is synthesized in the coleoptile tip, which senses light or gravity and sends the auxin down the appropriate side of the shoot. This causes asymmetric growth of one side of the plant. As a result, the plant shoot will begin to bend toward a light source or toward the surface. Coleoptiles also exhibit a strong geotropic reaction, always growing upward and correcting direction after reorientation. The geotropic reaction is regulated by light (more exactly, by phytochrome action).
## Physiology
The coleoptile acts as a hollow organ with stiff walls, surrounding the young plantlet, and is the primary source of the gravitropic response. It is ephemeral, undergoing rapid senescence after the shoot emerges. This process resembles the creation of aerenchyma in roots and other parts of the plant. The coleoptile emerges first, appearing yellowish-white, from an imbibed seed before developing chlorophyll the next day. By the seventh day, it will have withered following programmed cell death. The coleoptile grows and produces chlorophyll only for the first day, followed by degradation and growth driven by water potential. The two vascular bundles are organized longitudinally, parallel to one another, with a crack forming perpendicularly. Greening mesophyll cells with chlorophyll are present 2 to 3 cell layers from the epidermis on the outer region of the crack, while non-greening cells are present everywhere else. 
The inner region contains cells with large amyloplasts supporting germination, as well as the innermost cells, which die to form aerenchyma. The length of the coleoptile can be divided into an irreversible fraction (the length at turgor pressure 0) and a reversible fraction (elastic shrinking). Changes induced by white light increase the water potential in epidermal cells and decrease osmotic pressure, which results in an increase in the length of the coleoptile. The presence of the expanding coleoptile has also been shown to support developing tissues in the seedling as a hydrostatic tube prior to their emergence through the coleoptile tip. Adventitious roots initially derive from the coleoptile node and quickly overtake the seminal root by volume. In addition to being more numerous, these roots will be thicker (0.3–0.7 mm) than the seminal root (0.2–0.4 mm). These roots grow faster than the shoots at low temperatures and slower at high temperatures.
## Anaerobic germination
In a small number of plants, such as rice, anaerobic germination can occur in waterlogged conditions. The seed uses the coleoptile as a 'snorkel', providing the seed with access to oxygen.
https://en.wikipedia.org/wiki/Coleoptiles
Chapter 39 PLANT RESPONSES TO INTERNAL AND EXTERNAL SIGNALS
Plants respond to environmental signals. Since plants are fixed in place for life, they respond to environmental signals by adjusting their growth and development.
SIGNAL TRANSDUCTION AND PLANT RESPONSES
General model for a signal-transduction pathway. Receptors are located in the plasma membrane of the target cells. When reception occurs at the plasma membrane, a pathway of several steps is initiated, which brings a change in a molecule, which in turn causes a change in an adjacent molecule, and so on. The last molecule in the sequence brings about the response. 1) Reception: the signal molecule binds to an integral protein in the plasma membrane. 2) Transduction: the binding of the signal causes a configurational change in the membrane protein, which initiates the process. Transduction can occur in one step or several steps. The intermediate molecules in the transduction pathway are called relay molecules. 3) Response: in the final stage, an enzyme is activated that causes a response. The response could be the catalysis of a reaction, the rearrangement of the cytoskeleton or the activation of a gene.
Second messenger [See fig. 39.3, page 805]
Certain small molecules and ions are involved in the transduction pathway and are called second messengers. The extracellular signal is the "first messenger." The first messenger (signal) combines with receptors on the plasma membrane of the target cell. The plasma membrane has G-protein-linked receptors. A G-protein-linked receptor activates a G protein. The G protein releases GDP and then binds GTP, which activates it. The active G protein binds to a receptor, Ca2+ channels open and calcium ions move into the cell. Certain receptors are linked by a G protein to calcium ion channels. Calcium in the cell binds to the protein calmodulin, which changes conformation. The activated calmodulin then activates certain enzymes, which activate genes, and transcription starts. Transcription of mRNA leads to the translation of proteins and the response, e.g. greening of leaves. Transcriptional regulation: the resulting proteins are very often modified after translation by the addition of a phosphate group (phosphorylation). Phosphorylation of proteins is catalyzed by protein kinases.
PLANT RESPONSES TO HORMONES
The tissues that sense environmental change are not necessarily those that respond to the change. Plant hormones are chemical messengers. They are produced in one part of the plant, transported to another part of the plant, and cause a physiological response: they regulate growth and development. Each hormone type causes several responses, and the responses of different hormones overlap. There are five classes of plant hormones. Tropism is a growth response to an external stimulus from a specific direction. The changes are permanent and irreversible. Tropisms may be positive (the plant grows toward the stimulus) or negative (it grows away from it). The Darwins published their hypothesis about chemical signals and phototropism in 1881. Peter Boysen-Jensen concluded in 1913, after conducting experiments, that the signal was indeed a chemical and that it could diffuse from one part of the plant to another. In 1925, Frits Went conducted his classical experiment with oat coleoptiles. Went was able to collect the phototropic chemical in blocks of agar and to produce a phototropic-like response without the stimulus of light. Cholodny and Went proposed independently that the response is caused by an asymmetrical distribution of the hormone.
Went named the hormone auxin from the Greek auxein, to increase. The Cholodny-Went Hypothesis: auxin is produced in the tips of the coleoptiles. The word auxin is used for any substance that causes the elongation of coleoptiles, and auxins may have multiple effects. The auxin is transported from one side of the coleoptile to the other in response to light. Cells on the side with the greater concentration of auxin elongate more, causing the entire stem to bend towards the light. Other scientists have proposed that the auxin is destroyed on the side where the light strikes, causing a difference in auxin concentration along the stem. Kögl and Thimann independently isolated the auxin hormone. It turned out to be indole acetic acid, or IAA.
1. Auxins
The natural auxin found in plants is indole-acetic acid, IAA. Other natural and synthetic substances have auxin activity. Auxin is made from the amino acid tryptophan in the shoot tip of plants. The concentration of IAA is about 50 nanograms for every 50 grams of fresh tissue. 1 ng = 1 billionth of a gram, or 0.000 000 001 g. When IAA arrives at a target cell, its message must be received and transduced to produce the appropriate response. Researchers found that IAA binds to a receptor protein called ABP1.
The Chemiosmotic Model. The speed at which auxin is transported down the stem from the shoot apex is about 10 mm per hour: too fast for diffusion but slower than translocation in the phloem. This model attempts to explain how polar transport takes place. The auxin in an acidic cell wall (pH 5.5) accepts a proton, H+, and becomes neutral. There are influx carrier proteins located only on the upper side of the cell membrane. Auxin is taken into the cell via these influx carrier proteins, which take the auxin in with the attached proton. Inside the cell the pH is neutral (pH = 7), and the auxin loses the proton and becomes negative, anionic. There are carrier proteins, efflux carrier proteins, located only on the cell membrane at the base of the cell. Auxin leaves the cell through these carrier proteins, following an electrochemical gradient. These events repeat and the auxin is transported from the top of the cell to the bottom and out, to be picked up again by the influx carriers of the cell below.
The Acid-Growth Hypothesis. This hypothesis attempts to explain the role of auxin in cell elongation. It proposes that... 1. IAA produces or activates additional proton pumps. 2. The pumping of protons into the extracellular matrix causes K+ and other positive ions to enter the cell. 3. This increase in solutes brings an influx of water into the cell. 4. There is then an increase in turgor pressure that makes cell expansion possible. Hager and colleagues found that cells treated with additional IAA increased the number of proton pumps by 80% relative to untreated control cells. They also found that the acidity of the cell wall changed from a pH of 5.5 to one of 4.5. The cell wall is rigid, so how does the cell wall expand? Cosgrove found two classes of cell wall proteins that actively increase cell length when the pH in the cell wall drops below 4.5. These proteins are called expansins. Expansins have been found in many species and tissues, but how they work is not known yet. One hypothesis proposes that these proteins break the bonds between cellulose fibers and pectin fibers or other wall components, allowing for stretching and expansion of the wall.
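The chemiosmotic model above is essentially a directional pump-and-leak scheme, so its net effect can be illustrated with a toy simulation. The sketch below is a minimal, hypothetical discretisation (the cell count, rate constants and loss term are invented for illustration and are not taken from the notes): each cell in a file passes a fixed fraction of its auxin to the cell beneath it through basal efflux carriers, which is enough to reproduce the tip-to-base (polar) flow described above.

```python
# Toy model of polar auxin transport down a file of cells.
# Assumptions (illustrative only): 10 cells, auxin produced in the top cell,
# a fixed fraction moved basally per step via efflux carriers, and a small
# loss term standing in for conjugation/degradation.

def simulate_polar_transport(n_cells=10, steps=50,
                             production=1.0,   # auxin made in the apical cell per step
                             efflux=0.3,       # fraction exported to the cell below per step
                             loss=0.02):       # fraction degraded per step
    auxin = [0.0] * n_cells
    for _ in range(steps):
        auxin[0] += production                      # synthesis at the shoot tip
        exported = [a * efflux for a in auxin]      # basal efflux from every cell
        auxin = [a - e for a, e in zip(auxin, exported)]
        for i, e in enumerate(exported[:-1]):       # imported by the next cell down
            auxin[i + 1] += e
        auxin = [a * (1 - loss) for a in auxin]     # slow turnover
    return auxin

if __name__ == "__main__":
    profile = simulate_polar_transport()
    for i, amount in enumerate(profile):
        print(f"cell {i:2d} (tip -> base): auxin = {amount:6.2f}")
```

Running it gives a steady tip-to-base gradient; reversing the direction of the efflux step destroys that gradient, which is the intuition behind the basal PIN localisation discussed in the Ghent University article later in this document.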
An overview of auxin action. It is produced in the apical meristem of shoots, in young leaves and in seeds. It is transported downward in parenchyma cells. It causes cell elongation, promotes xylem and phloem differentiation, inhibits lateral bud development, stimulates fruit development but delays ripening, and inhibits leaf abscission. Auxin is also the root-growth hormone sold in nurseries: it promotes root growth on cut-off shoots. It helps to determine the overall shape of the plant in response to changes in light availability, wind strength, etc. Auxin concentration signals how tissues should respond.
2. Cytokinins
Cytokinins are modified forms of adenine. Neither the enzyme that produces cytokinins nor the gene that encodes it has been found. Plant cells have cytokinin receptors. There is evidence that two kinds of receptors exist, one on the cell membrane and another inside the cell. Ca2+ ion channels are stimulated by cytokinins and increase the ion concentration in the cytosol. The direct inhibition hypothesis proposes that auxin and cytokinins act antagonistically in regulating lateral bud growth. In apical dominance, the majority of stem growth takes place in the apical meristem of the shoot, which inhibits the growth of other meristems (e.g. lateral buds) located down the stem of the plant. The cytokinins entering the stem from the roots counter the action of the auxin and promote lateral bud development. Not all facts are known about these interactions. An overview of cytokinin action: cytokinins are produced in actively growing tissues like roots, embryos and fruits, and travel upward in the xylem. Cytokinins work together with auxin in the promotion of cell division. At lower concentrations, cytokinins promote cell division while the cells remain undifferentiated; if the concentration is raised, the cells differentiate. Cytokinins promote cell division and differentiation (in which unspecialized cells become specialized), promote chloroplast development, stimulate lateral bud development, inhibit abscission and delay senescence. Zeatin was the first naturally occurring cytokinin to be isolated.
3. Gibberellins (GA)
Japanese scientists isolated a substance in the 1930s that causes rice seedlings to elongate abnormally and fall over before harvest. These rice plants were infected with the fungus Gibberella fujikuroi, and treating seedlings with an extract of the fungus produced abnormally long plants. The Japanese scientists named the chemical signal gibberellin. By the 1950s, scientists found that plants, not only fungi, produce gibberellins. Gibberellins cause cell wall loosening, but not by acidifying the cell wall. One theory proposes that gibberellins facilitate the penetration of expansin proteins into the cell wall. Auxin acidifies the wall and activates expansins, while gibberellins facilitate the penetration of expansins; both hormones work in concert. An overview of gibberellin action: gibberellin is produced in young leaves, roots, the shoot apical meristem and the seed embryo. Its method of transport in the plant is unknown. It promotes seed germination, cell division and elongation, fruit development and flowering in some plants, and breaks seed dormancy and winter dormancy. Gibberellins have little effect on root growth.
4. Abscisic acid (ABA)
Abscisic acid was isolated in the 1960s. ABA slows down growth and acts antagonistically to the growth-promoting hormones. The ratio of ABA to the other hormones determines the final outcome. Preliminary data suggest that ABA activates transcription repressors.
Both activators and repressors compete for the same site in the gene's promoter. If ABA is in higher concentration, repression dominates and dormancy occurs. If GA is in higher concentration, activators dominate and germination proceeds. ABA allows the plant to withstand drought. ABA, through its effect on second messengers, causes an increase in the opening of outwardly directed potassium channels in the plasma membrane of guard cells, leading to a massive loss of potassium from them, a reduction in turgor and the closing of the stomata. An overview of ABA activity: it is produced in older leaves, the root cap and stems. Stressed plants produce abscisic acid. It travels in the vascular tissue. It inhibits seed germination and promotes winter and seed dormancy and the formation of bud scales. It causes the closing of stomata in plants under water stress.
5. Ethylene
Plants produce ethylene in response to stresses such as drought, flooding, mechanical pressure, injury and infection. Auxin may induce ethylene to cause some physiological effects. It is a gaseous hormone produced in stem nodes, aging tissues and ripening fruits. It probably diffuses out of the tissue that produces it. It promotes fruit ripening, senescence and abscission, inhibits cell elongation, stimulates germination of seeds, and is involved in responses to wounds and infections by microorganisms. Ethylene causes seedlings to undergo the triple response when they encounter a solid object blocking their path to the surface, allowing the seedling to circumvent the obstacle: 1. slowing of stem elongation, 2. thickening of the stem, 3. curving. Apoptosis is programmed cell death. A burst of ethylene accompanies the programmed destruction of organs, cells and the entire plant. During apoptosis, enzymes break down DNA, RNA, proteins and membrane lipids. The plant may salvage these products. Abscission of leaves is controlled by a change in the balance of ethylene and auxin. The abscission layer is located at the base of the petiole. This layer is made of small parenchyma cells with very thin cell walls; there are no fiber cells in the layer. Enzymes hydrolyze the polysaccharides in the cell walls. A layer of cork cells is formed on the stem side of the layer before the leaf falls. As the leaf ages, it produces less auxin; eventually the ethylene concentration prevails and the cells produce the hydrolytic enzymes that digest the cellulose. The ripening of fruit is triggered by a burst of ethylene. Enzymatic breakdown of the cell wall softens the fruit, and starch and acids are converted to sugars. The signal spreads from fruit to fruit because ethylene is a gas. Brassinosteroids are steroids chemically similar to cholesterol and the sex hormones of animals. Their effects are similar to those of auxin. Brassinosteroids promote cell elongation and cell division, and may retard leaf abscission and promote xylem differentiation.
PLANT RESPONSE TO LIGHT
Light triggers many events in the development and growth of plants. These effects are called photomorphogenesis. Plants detect the presence of light, its direction, intensity and wavelength. Phototropism is a response to the direction of light. Through these means, plants measure the passage of days and seasons. Blue light is the most effective in initiating phototropism, the light-induced slowing of hypocotyl elongation when a seedling breaks ground during germination, and the light-induced opening of stomata.
Different types of pigments detect blue light: cryptochromes for the inhibition of hypocotyl elongation, phototropin for phototropism, and zeaxanthin for stomatal opening. The photoreceptor for red light is a phytochrome. It consists of a protein covalently bonded to a nonprotein part that functions as a chromophore, the light-absorbing part of the molecule. The photoreceptor is a group of five blue-green pigments, each coded by a different gene, collectively called phytochrome and found in the cells of all vascular plants. Phytochrome occurs in two forms: one form, Pr, absorbs red light at 660 nm and the other form, Pfr, absorbs far-red light at 730 nm. When either form absorbs its preferred wavelength, it changes to the other form; this phenomenon is called photoreversibility. Pfr is considered to be the active form and Pr the inactive form of the phytochrome. Phytochrome is involved in the germination of seeds: exposure to red light converts Pr to Pfr and germination occurs. Other physiological responses influenced by phytochrome include leaf abscission, pigment formation in flowers and fruits, sleep movements, stem elongation, shade avoidance and shoot dormancy. Phytochromes monitor the amount of shade a plant receives.
Biological clocks control circadian rhythms in plants and other eukaryotes. These internal timers are innate in all living organisms except bacteria and have a strong genetic component; they are not learned from or imprinted upon the organism by the environment. Circadian rhythms are alternating patterns of activity that occur at regular intervals, with an approximately 24-hour period (20-30 hour periods in practice). They are independent of temperature and light cycles and are reset by the sun every day. Examples include the opening and closing of stomata, sleep movements and the opening of flowers. The rapid conversion of Pr to Pfr after dawn automatically resets the circadian rhythm. In the absence of external cues, circadian rhythms repeat every 20 to 30 hours. Sunrise resets the clock and prevents the rhythm drifting into the wrong times of day.
Photoperiod is the length of daylight in a 24-hour day. A physiological response to a photoperiod is called photoperiodism. The length of the night, or continuous darkness, controls flowering and other responses to photoperiod. Short-day plants (long-night plants) flower when the night length is equal to or greater than some critical period; the plant detects the shortening of the day or lengthening of the night. The minimum critical night length varies with the species. Fall flowers like poinsettias and chrysanthemums are examples; in these plants Pfr inhibits flowering, so they need long nights in order to flower. Long-day plants (short-night plants) flower when the night length is equal to or less than some critical period; the plant detects the lengthening of the day and shortening of the night. The maximum critical night length varies with the species. Spring flowers are examples; in these plants Pfr induces flowering. Day-neutral plants do not respond to photoperiod. Many originated in the tropics, where there is little difference in day length throughout the year: tomato, beans, corn, cucumber, etc. Phytochrome detects the varying periods of day length. Some plants measure the length of the night very accurately, not flowering if the night is one minute shorter than the critical length. Some plants flower after a single exposure to the required photoperiod, others require several days of exposure, and others still require a previous exposure to another environmental stimulus before they respond to the photoperiod.
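Since the notes above reduce photoperiodism to a comparison between night length and a species-specific critical value, that rule can be written out directly. The function below is a minimal sketch of the comparison; the plant categories follow the notes, but the example critical night length and the numbers in the usage lines are placeholders for illustration, not measured values.

```python
# Minimal sketch of the photoperiodism rule described above:
# short-day (long-night) plants flower when the night is at least as long as
# the critical value, long-day (short-night) plants flower when it is no longer
# than the critical value, and day-neutral plants ignore night length.

def will_flower(plant_type: str, night_hours: float, critical_hours: float) -> bool:
    if plant_type == "short-day":
        return night_hours >= critical_hours
    if plant_type == "long-day":
        return night_hours <= critical_hours
    if plant_type == "day-neutral":
        return True
    raise ValueError(f"unknown plant type: {plant_type}")

if __name__ == "__main__":
    # Hypothetical examples: a chrysanthemum-like short-day plant and a
    # spring-flowering long-day plant, both with an assumed 12 h critical night.
    print(will_flower("short-day", night_hours=13.5, critical_hours=12))    # True
    print(will_flower("long-day", night_hours=13.5, critical_hours=12))     # False
    print(will_flower("day-neutral", night_hours=13.5, critical_hours=12))  # True
```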
There is evidence of hormonal regulation of flowering, but the hormone(s) involved have not been found.
PLANT RESPONSES TO OTHER ENVIRONMENTAL STIMULI
Tropism is a growth response to an external stimulus from a specific direction. The changes are permanent and irreversible. Tropisms may be positive (growth toward the stimulus) or negative (growth away from it).
Response to gravity. Gravitropism (syn. geotropism) is a response to gravity. Gravitropism functions as soon as the seed germinates, ensuring that the root grows into the soil and the shoot reaches sunlight regardless of how the seed happens to be oriented in the soil. Gravitropism may be positive (toward) or negative (away from). The curvature that occurs in reaction to gravity is due to differences in cell elongation on the opposite sides of a root or shoot. The molecule called auxin promotes cell elongation in shoots and inhibits it in roots. Statoliths made of starch accumulate at the bottom of cells in the root cap in response to gravity. Statoliths at the low point trigger a redistribution and accumulation of Ca2+ and auxin on the lower side of the root's zone of elongation. The side of the cell opposite to the statoliths elongates.
Response to mechanical stimuli. Thigmomorphogenesis refers to the morphological changes that result from mechanical stress. Mechanical stress due to the action of wind, rain, etc. in exposed places causes plants to grow shorter and stockier. Mechanical stress activates a transduction pathway that increases the Ca2+ concentration, which in turn contributes to the activation of genes involved in regulating the quality of the cell wall. Thigmotropism is a response to contact with a solid object. The interior of plant cells has a negative charge relative to the exterior. This occurs because proton pumps are active in many cells, creating a charge separation across the membrane. This charge separation is called polarization. The separation creates potential energy in the form of a voltage; potential energy is a tendency to move. Most plant cells therefore have a membrane voltage, or membrane potential. Plants like the Venus flytrap can send messages similar to nerve impulses. This impulse is a drastic voltage change across the membrane due to a rapid flow of charges, in the form of ions, from the outside of the cell to the inside. This rapid, temporary voltage change is called an action potential. The action potential is a rapid change of the inside of the cell from negative to positive and then back to negative. Depolarization occurs when positive charges begin to flow into the cell, lowering the membrane potential by making the two sides more alike in charge. The mechanical signal of pulling or touching causes the depolarization of the hair cells at the base of the trap leaves of the Venus flytrap. These cells swell with water and their pH increases dramatically. The mechanism involved in this change in size is not well understood.
Responses to drought. Drought causes stomata to lose turgor and close to minimize transpiration, stimulates the production of abscisic acid (which causes leaves to drop), inhibits the growth of young leaves and inhibits the growth of shallow roots.
Responses to flooding. The air spaces of flooded soil lack the oxygen needed for roots to live. Oxygen deprivation causes the production of ethylene, which causes the cells in the root cortex to undergo apoptosis. This creates air tubes that allow oxygen to reach the flooded roots.
Response to salt stress. A salty soil causes the roots to lose water. A high concentration of certain ions can be harmful to the plant. The semipermeable membrane prevents these ions from getting into the root cells, but this creates problems in obtaining enough water from the hypertonic surrounding soil. Some plants produce organic compounds that maintain a more negative water potential inside the cell. This, however, cannot be maintained for long.
Response to heat stress. Excessive heat can denature enzymes and disrupt metabolism. Evaporation may lower the temperature of leaves 3-10°C below ambient temperature. Above a certain temperature (e.g. 40°C in most temperate plants), plants begin to synthesize large quantities of special proteins called heat-shock proteins. It is suspected that these heat-shock proteins, like chaperone proteins, help to prevent the denaturing of enzymes by creating a scaffold around the enzyme.
Response to cold stress. Plants respond to cold stress by altering the lipid composition of their plasma membranes; e.g., more unsaturated fatty acids are incorporated into the membrane to maintain fluidity. The water in the cell wall and intercellular spaces freezes. This lowers the water potential in these areas, and more water leaves the cells, resulting in an increase in the concentration of solutes and a lowering of the freezing point of the cytosol. Plants in cold regions increase the concentration of sugars in their cells before winter. Sugars are tolerated in larger concentrations than many ionic salts.
PLANT DEFENSE: RESPONSE TO HERBIVORES AND PATHOGENS
Protective chemicals are called secondary compounds since they are not essential for the metabolic processes of the plant. They are substances not produced as part of primary metabolism in the plant, frequently with an uncertain function. More than 20,000 different secondary compounds have been identified. Plant poisons, or allelochemics, are constantly produced in plants; there is no need for a stimulus. Allelochemics are secondary substances capable of modifying the growth, behavior or population dynamics of other species through inhibitory or regulatory processes. These compounds cover a wide range of organic chemicals: toxic proteins, terpenes, alkaloids, phenolics, resins, steroidal, cyanogenic and mustard oil glycosides, and tannins (which contain aromatic rings; some are glycosides). Tannins bind to the digestive enzymes of insects, sickening the insect. They also interfere with protein breakdown. Phenolics are very common amino acid derivatives found in seed-producing plants; they are the burning substances in poison ivy and poison oak. Alkaloids are also amino acid derivatives, found in thousands of species of plants. Cyanogenic glycosides are found in a few hundred species. Glycosides are oligosaccharides bound to alcohols, phenols or amino groups; they usually interfere with the formation of ATP. Nicotine, caffeine, cocaine and morphine are alkaloids. Alkaloids are found in about 20% of plant species. Alkaloids are highly toxic to herbivores and parasites; they disrupt several cell mechanisms: enzyme poisoning, inhibition of protein synthesis, disruption of a membrane transport system, etc. Some plants increase the production of their secondary metabolites in wounded tissues. Other products mimic insect hormones that disrupt the growth and development of the larva.
Responding to pathogens. Plants can respond to pathogens and herbivores after they are attacked. Proteinase inhibitors inhibit the enzymes responsible for the digestion of proteins.
Herbivores detect proteinase inhibitors by taste and avoid plants with large concentrations of these substances. Parasitoids lay their eggs in the larvae of insects, and their offspring devour the host larva slowly as they grow and develop. By the time the host larva dies, the parasitoid larvae are ready to emerge as adults. Caterpillar saliva has a substance called volicitin that induces damaged leaves to produce volatile substances that attract wasps. These wasps are parasitoids and lay their eggs in the caterpillars that have damaged the plant. In this way plants recruit parasitoids to infect the herbivores that are eating them. Pathogens against which a plant has little defense are said to be virulent: infectious and able to overcome the host's defenses. Avirulent pathogens can infect the host without severely damaging it.
Gene-for-gene hypothesis. Resistance occurs when gene products from the plant and the pathogen match and interact. The plant has a dominant resistance allele R, which recognizes pathogens with a complementary dominant avirulence Avr allele. Infected cells respond by dying; this is called a hypersensitive response, or HR. Pathogens infect the plant via a wound or some other means and release their own proteins in the plant tissues. These proteins cause the plant to react and produce its own proteins, which may or may not inactivate the pathogen's proteins. Binding between the plant and pathogen proteins causes the hypersensitive response: the plant cell dies, and the pathogen with it. If there is no binding (no match) between the plant and pathogen proteins, no HR occurs, the plant becomes seriously infected and it eventually succumbs to disease. Experiments from around the world have confirmed the gene-for-gene hypothesis through the synthesis of R (plant gene) and Avr (virulent/avirulent pathogen gene) gene products that interact. The hypothesis was confirmed in 1996 by an experiment designed and carried out by Scofield and colleagues. Non-resistant plants can mount localized responses when infected by pathogens: the infected cells release molecular signals. Molecules called elicitors induce the production of antimicrobial compounds called phytoalexins. Elicitors are often cellulose fragments called oligosaccharins; they are released by the damaged cell wall. Antimicrobial molecules attack the cell wall of the bacterium; others function as signals that spread to other cells and organs. Infection also causes an increase in cross-linking of the cell wall molecules and an increase in lignin deposition, which act as a barricade.
Phytoalexin production. Plants can produce certain antibiotic compounds called phytoalexins. A phytoalexin is a small molecule that is induced by infection and poisons the pathogen; production is triggered by recognition through the R-Avr gene-for-gene pathway. Plants make these antibiotics when infected by a pathogen. Phytoalexins act at the point of infection, but a slower and more widespread reaction also occurs: systemic acquired resistance (SAR). Salicylic acid concentration increases dramatically in infected plants, and experiments have shown that addition of SA triggers an SAR response. It is not clear if SA is the hormone that causes SAR or is only a local signal that causes the expression of genes involved in the SAR response.
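The gene-for-gene hypothesis summarised above is, at its core, a two-input decision table: a dominant R allele in the plant and a matching dominant Avr allele in the pathogen trigger the hypersensitive response, and any other combination lets the infection proceed. The sketch below simply encodes that table; the function name and the outcome strings are illustrative, not taken from the source.

```python
# Gene-for-gene interaction as a decision table (illustrative sketch).
# HR = hypersensitive response (infected cells die, containing the pathogen).

def infection_outcome(plant_has_R: bool, pathogen_has_Avr: bool) -> str:
    if plant_has_R and pathogen_has_Avr:
        # The R gene product recognises the matching Avr gene product.
        return "hypersensitive response: local cell death, infection contained"
    # No recognition: the pathogen is effectively virulent on this host.
    return "no HR: infection spreads, plant may succumb to disease"

if __name__ == "__main__":
    for r in (True, False):
        for avr in (True, False):
            print(f"R={r!s:5} Avr={avr!s:5} -> {infection_outcome(r, avr)}")
```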
https://studyres.com/doc/10294972/chapter39---facstaff-home-page-for-cbu
VIB researchers at Ghent University, Belgium, discovered how the transport of an important plant hormone is organized so that the plant knows in which direction its roots and leaves have to grow. They discovered how the required transport protein turns up at the underside of plant cells. The discovery helps us to understand how plants grow, and how they organize themselves in order to grow. The scientific journal Nature published the news in advance on its Web site. It has been known for a long time that the plant hormone auxin is transmitted from the top to the bottom of a plant, and that the local concentration of auxin is important for the growth direction of stems, the growth of roots and the sprouting of shoots, to name a few things; auxin is also relevant to, for instance, the ripening of fruit, the clinging of climbers and a series of other processes. Thousands of researchers try to understand the different roles of auxin. In many instances the distribution of auxin in the plant plays a key role, and thus the transport from cell to cell. At the bottom of plant cells, so-called PIN proteins are located on the cell membrane, helping auxin to flow through to the lower cell. However, no one thoroughly understood why the PIN proteins only showed up at the bottom of a cell. An international group of scientists from labs in five countries, headed by Jirí Friml of the VIB-department Plant Systems Biology at Ghent University, revealed a rather unusual mechanism. PIN proteins are made in the protein factories of the cell and are transported all over the cell membrane. Subsequently they are engulfed by the cell membrane, a process called endocytosis. The invagination closes into a vesicle, which disconnects and moves back into the cell. Thus the PIN proteins are recycled and subsequently transported to the bottom of the cell, where they are again incorporated in the cell membrane. It is unclear why plants use such a complex mechanism, but a plausible explanation is that this mechanism enables a quick reaction when plant cells sense a change in the direction of gravity, giving them a new 'underside'. To see the path of the protein, the researchers used gene technology to make cells in which the PIN protein was linked to fluorescent proteins. (This technology was recognized with the 2008 Nobel Prize in Chemistry.) Subsequently they produced cells in which the endocytosis was disrupted in two different ways. The PIN proteins then showed up all over the cell membrane. When the researchers proceeded from single cells to plant embryos, the embryos developed deformations, because the pattern of auxin concentrations in the embryo was distorted. When these plants with disrupted endocytosis grew further, roots developed where the first leaflet should have been.
https://phys.org/news/2008-10-scientists-unveil-mechanism.html
University of Washington (UW) researchers have developed a new toolkit based on modified yeast cells to tease out how plant genes and proteins respond to the plant hormone auxin. The yeast-based tool allowed them to decode auxin's basic effects on the diverse family of genes that plants use to detect and interpret auxin-driven messages. Auxin is the most widespread plant hormone, affecting nearly every aspect of plant biology, including growth, development, and stress response. Auxin acts on promoters to turn nearby genes on or off. Some genes turn on, others are switched off. Plant proteins mediate these responses by binding to auxin and then to promoters. "There is a large amount of cross-communication between proteins, and plants have a huge number of genes that are targets for auxin," said UW biology professor Jennifer Nemhauser. "That makes it incredibly difficult to decipher the basic auxin ‘code' in plant cells." The research team switched from plant cells to budding yeast and engineered yeast cells to express proteins that responded to auxin, so they could measure how auxin modified the on/off state of key plant genes that they also inserted into the cells. Their experiments revealed the basic code of auxin signaling, and shed light on the complex interplay within cells that produces clear auxin-mediated messages. For more details, read the news release at the UW website.
http://iasvn.org/en/homepage/Researchers-Modify-Yeast-to-Show-How-Plants-Respond-to-Auxin-4918.html
On this page, you will find Control and Coordination Class 10 Notes Science Chapter 7 Pdf free download. CBSE NCERT Class 10 Science Notes Chapter 7 Control and Coordination will help students revise the important concepts in less time. CBSE Class 10 Science Chapter 7 Notes Control and Coordination. Control and Coordination Class 10 Notes Understanding the Lesson
1. Growth-related movements: A seed germinates and grows, and the seedling comes out by pushing the soil aside. Such a movement is related to growth, as these movements would not happen if the growth of the seedling were stopped.
2. Growth-unrelated movements: A cat running, children playing on swings, buffaloes chewing cud – these are not movements caused by growth. These are growth-independent movements. When we touch the leaves of a chhui-mui (the 'sensitive' or 'touch-me-not' plant of the Mimosa family), they begin to fold up and droop. This movement of its leaves is independent of growth.
3. Movement is an attempt by living organisms to use changes in their environment to their advantage: plants grow to get sunshine, buffaloes chew cud to enable digestion of tough food, swinging gives pleasure to children. We try to protect ourselves by detecting a change in the environment and showing movement.
4. Control and coordination in animals is regulated by two systems: the nervous system and the hormonal system.
5. Animals – Nervous System: The specialised tips of some nerve cells detect all information from our environment with the help of receptors, usually located in our sense organs, such as the inner ear, the nose, the tongue, etc. Gustatory receptors: detect taste (present on the tongue).
6. Olfactory receptors: detect smell (present in the nose).
7. Stimulus: Any agent, factor, chemical or change in the external or internal environment which elicits a reaction in an organism.
8. Response: A change in an organism (an action) resulting from a stimulus.
9. Mode of transmission of nerve impulse:
- An electrical impulse is generated when information is acquired at the end of the dendritic tip of a nerve cell.
- This impulse travels from the dendrite to the cell body, and then along the axon to its end.
- At the end of the axon, the electrical impulse sets off the release of chemicals called neurotransmitters at the synapse. The synapse is the junction between two neurons, where the axon ending of one neuron is placed close to the dendrites of the next neuron.
- These chemicals (neurotransmitters) cross the synapse and start a similar electrical impulse in a dendrite of the next neuron.
- A similar synapse finally allows delivery of such impulses from neurons to effectors.
10. Effectors are muscles, glands, tissues, cells, etc., which respond to the stimulus received from the nervous system. Nervous tissue is made up of an organised network of nerve cells or neurons, and is specialised for conducting information via electrical impulses from one part of the body to another. The neuron (nerve cell) is the structural and functional unit of the nervous system.
11. Parts of a neuron: dendrites, where information is acquired; the axon, through which information travels as an electrical impulse; and the synapse, where the impulse is converted into a chemical signal for onward transmission.
12. Reflex Action and Reflex Arc: (i) A reflex action is a spontaneous, automatic and mechanical response to a stimulus controlled by the spinal cord without the involvement of the brain. (ii) In such reactions we do something without thinking about it, or without feeling in control of our reactions. 
Reflex actions are very fast responses of the nervous system to dangerous situations. Example: we withdraw our hand immediately if we prick our finger or touch a hot object. (iii) Reflex actions are involuntary actions, as they cannot be controlled by our will; they occur automatically. (iv) The stimulus received by receptors present on a sense organ is sent through a sensory neuron to the spinal cord. The spinal cord sends messages through a motor neuron to the muscles (effectors) to cause movement of the part and avoid damage. The pathway formed in such a case is called the reflex arc.
13. Human Brain
- The brain is the main coordinating centre of the body.
- The brain and spinal cord constitute the central nervous system and are composed of nerves.
- They receive information from all parts of the body and integrate it.
- The communication between the central nervous system and the other parts of the body is facilitated by the peripheral nervous system.
- The nerves arising from the brain (cranial nerves) and the nerves arising from the spinal cord (spinal nerves) constitute the peripheral nervous system.
- The brain allows us to think and take actions based on that thinking.
- The actions based on our will are called voluntary actions. Example: writing, talking, clapping at the end of a programme.
- The brain also sends messages to muscles. This is the second way in which the nervous system communicates with the muscles.
14. Parts of the Brain:
- The brain has three major parts/regions: fore-brain, mid-brain and hind-brain.
- The fore-brain is the main thinking part of the brain. It has regions (sensory areas) which receive sensory impulses from various receptors. Separate areas of the fore-brain are specialised for hearing (auditory area), smell (olfactory area), sight (optic area) and so on.
- There are separate association areas where this sensory information is interpreted by putting it together with information from other receptors as well as with information that is already stored in the brain.
- A separate part of the fore-brain, associated with hunger, gives the sensation of feeling full.
- Many involuntary actions are controlled by the mid-brain and hind-brain.
- The hind-brain comprises the cerebellum, pons and medulla oblongata.
- The cerebellum is responsible for the precision of voluntary actions and for maintaining the posture and balance of the body, as in activities like walking in a straight line, riding a bicycle or picking up a pencil.
- The pons connects the cerebellum and medulla oblongata and helps regulate the respiration rate.
- The medulla controls involuntary actions like blood pressure, salivation and vomiting.
15. Protection of the Brain and Spinal Cord
- The human brain is present inside a bony box called the skull or cranium.
- A fluid inside the skull, called cerebrospinal fluid, helps in shock absorption.
- The spinal cord is protected by the vertebral column.
16. How does Nervous Tissue cause Action? Muscle cells have contractile proteins, actin and myosin, which change both their shape (by shortening) and their arrangement in the cell in response to the nervous electrical impulses they receive. This results in movement of that part of the body.
17. Coordination in Plants: The touch-me-not plant moves its leaves in response to touch, as its cells change shape by changing the amount of water in them, resulting in swelling or shrinking. Such movement is a growth-independent movement.
18. Movement Due to Growth: The pea plant climbs up by means of tendrils, which are sensitive to touch. 
The part of the tendril in contact with the object does not grow as rapidly as the part of the tendril away from the object. This causes the tendril to circle around the object and thus cling to it.
19. Tropism/Tropic movements: Movements in plants which occur in the direction of the stimulus. They are directional movements. These directional, or tropic, movements can be either towards the stimulus or away from it.
20. Phototropism: Growth of a plant in response to light. Shoots respond by bending towards light, while roots respond by bending away from it.
21. Geotropism: Growth of a plant in response to gravity. The roots of a plant always grow downwards, while the shoots usually grow upwards and away from the Earth.
22. Hydrotropism: Growth of a plant in response to water. Roots always grow towards water and show hydrotropism.
23. Chemotropism: Growth of a plant in response to chemicals. Example: growth of pollen tubes towards ovules.
24. Thigmotropism: Growth of a plant in response to touch. Example: climbers coil around a support.
25. Limitations to the use of electrical impulses: Firstly, they do not reach each and every cell in the animal body; they reach only those cells that are connected by nervous tissue. Secondly, a cell takes some time to reset its mechanisms before it can generate and transmit a new impulse; cells cannot continually create and transmit electrical impulses.
26. Way to overcome limitations to the use of electrical impulses: Most multicellular organisms use chemical communication to overcome the limitations of electrical impulses. Chemical compounds (hormones) released by stimulated cells diffuse all around them and are detected by other cells with the help of special molecules on their surfaces.
27. How do Plants coordinate their activity? Plants do not have a nervous system. They respond to stimuli with the help of chemicals called plant growth regulators or plant hormones, like auxin, gibberellin, cytokinin, abscisic acid, etc.
28. Auxin: It is synthesised at shoot tips and helps in the bending of the plant towards light. When light comes from one side of the plant, auxin diffuses towards the shady side of the shoot. This higher concentration of auxin stimulates the cells to grow longer on the side of the shoot which is away from light. Thus, the plant appears to bend towards the light.
29. Gibberellins: They help in the growth of the stem.
30. Cytokinins: This hormone promotes cell division. It occurs in higher concentrations in areas of rapid cell division, such as in fruits and seeds.
31. Ethylene: It is a gaseous hormone which helps in the ripening of fruits.
32. Abscisic acid: This hormone inhibits growth. Its effects include the wilting of leaves. It is also called the stress hormone, as it helps the plant cope with stress conditions.
33. Hormones in Animals: Hormones are non-nutrient chemicals which act as intercellular messengers, are produced in trace amounts, are poured directly into the bloodstream and act only on a specific target organ. They are secreted by endocrine glands (ductless glands).
34. Functions of Animal Hormones: (i) Thyroxin hormone: Iodine is necessary for the thyroid gland to make the thyroxin hormone; it is essential for the synthesis of thyroxin. Thyroxin regulates carbohydrate, protein and fat metabolism in the body so as to provide the best balance for growth. In case iodine is deficient in our diet, there is a possibility that we might suffer from goitre. One of the symptoms of this disease is a swollen neck. 
(ii) Adrenaline hormone: It is secreted by the adrenal gland in response to stress of any kind and during emergency situations such as fear, joy or emotional stress. Adrenaline increases the breathing rate and the blood supply to the heart and muscles, and it constricts arterioles elsewhere. Its target organs are the heart and arteries. It is also called the emergency hormone or stress hormone.
(iii) Growth hormone is secreted by the anterior pituitary gland. If there is a deficiency of this hormone in childhood, it leads to dwarfism; its excess causes gigantism.
(iv) Testosterone in males, secreted by the testes, and oestrogen in females, secreted by the ovaries, cause the changes in the bodies of males and females during puberty.
(v) Insulin is produced by the pancreas and helps in regulating blood sugar levels. Its deficiency causes diabetes due to an increase in the blood glucose level.
35. Feedback Mechanisms: The timing and amount of hormone released are regulated by feedback mechanisms. For example, if the blood glucose level rises, it is detected by the cells of the pancreas, which respond by producing more insulin to promote absorption of glucose and formation of glycogen in the liver and muscles. When the blood sugar level falls back to normal, insulin secretion is reduced. (A minimal simulation of this feedback loop is sketched after the list of important terms below.)
Class 10 Science Chapter 7 Notes Important Terms
Gustatory receptors: The receptors present in the tongue which help to detect taste.
Olfactory receptors: The receptors present in the nose which help to detect smell.
Neuron (Nerve cell): It is the structural and functional unit of the nervous system.
Synapse: The junction between two neurons which helps to transmit the electrical or chemical signal to the next neuron.
Reflex action: A reflex action is a spontaneous, automatic and mechanical response to a stimulus controlled by the spinal cord without the involvement of the brain.
Tropism/Tropic movements: Tropism is a growth movement whose direction is determined by the direction from which the stimulus strikes the plant.
- Positive = growth towards the stimulus
- Negative = growth away from the stimulus.
Phototropism: The response of a plant or its part to light. Roots are negatively phototropic while shoots are positively phototropic.
Geotropism: The response of a plant or its part to gravity. Roots are positively geotropic while shoots are negatively geotropic.
Hydrotropism: The response of a plant or its part to water. Roots always grow towards water and show positive hydrotropism.
Chemotropism: The response of a plant or its part to a chemical stimulus. Pollen tubes grow towards ovules due to chemicals secreted by them.
Thigmotropism: The response of a plant or its part to the stimulus of touch.
Hormones: Hormones are chemical messengers that are secreted directly into the blood, which carries them to the specific target organs and tissues of the body where they exert their functions.
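A minimal Python sketch of the insulin negative-feedback loop described in point 35 above is given here; the set-point, secretion and uptake constants are hypothetical values chosen only to show the qualitative behaviour (glucose rises after a meal, insulin secretion increases, glucose falls back toward normal), not physiological numbers.

```python
# Toy model of the blood-glucose negative-feedback loop (illustrative values only).

SET_POINT = 100.0      # hypothetical "normal" blood glucose level (arbitrary units)
SECRETION_GAIN = 0.1   # how strongly the pancreas responds when glucose is above normal
UPTAKE_RATE = 0.05     # how strongly insulin promotes glucose uptake and glycogen formation

def simulate(glucose, meals, steps=12):
    """High glucose -> more insulin -> glucose falls back toward the set point."""
    for t in range(steps):
        glucose += meals.get(t, 0.0)                 # a meal raises blood glucose
        error = glucose - SET_POINT
        insulin = SECRETION_GAIN * max(error, 0.0)   # insulin secreted only when glucose is high
        glucose -= UPTAKE_RATE * insulin * glucose   # insulin promotes uptake/storage of glucose
        print(f"t={t:2d}  glucose={glucose:6.1f}  insulin={insulin:5.2f}")

simulate(glucose=100.0, meals={2: 60.0})             # a single 'meal' at t=2
```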
https://www.learninsta.com/control-and-coordination-class-10-notes/
Graduate student David Korasick commuted between the Strader Lab, which specializes in genetics, and the Jez Lab, which has expertise in structural biology, to learn how plants control the effects of the master hormone auxin.
Wikipedia lists 65 adjectives that botanists use to describe the shapes of plant leaves. In English (rather than Latin) they mean the leaf is lance-shaped, spear-shaped, kidney-shaped, diamond-shaped, arrow-head-shaped, egg-shaped, circular, spoon-shaped, heart-shaped, tear-drop-shaped or sickle-shaped — among other possibilities. The ornate leaves of a humble sprig of cilantro are produced by the action of the plant hormone auxin.
How does the plant “know” how to make these shapes? The answer is by controlling the distribution of a plant hormone called auxin, which determines the rate at which plant cells divide and lengthen. But how can one molecule make so many different patterns? Because the hormone’s effects are mediated by the interplay between large families of proteins that either step on the gas or put on the brake when auxin is around. In recent years, as more and more of these proteins were discovered, the auxin signaling machinery began to seem baroque to the point of being unintelligible.
Now the Strader and Jez labs at Washington University in St. Louis have made a discovery about one of the proteins in the auxin signaling network that may prove key to understanding the entire network. In the March 24 issue of the Proceedings of the National Academy of Sciences, they explain that they were able to crystallize a key protein called a transcription factor and work out its structure. The interaction domain of the protein, they learned, folds into a flat paddle with a positively charged face and a negatively charged face. These faces allow the proteins to snap together like magnets, forming long chains, or oligomers.
“We have some evidence that proteins chain in plant cells as well as in solution,” said senior author Lucia Strader, PhD, assistant professor of biology in Arts & Sciences and an auxin expert. By varying the length of these chains, plants may fine-tune the response of individual cells to auxin to produce detailed patterns such as the toothed lobes of the cilantro leaf.
Sculpting leaves is just one of many roles auxin plays in plants. Among other things, the hormone helps make plants bend toward the light, roots grow down and shoots grow up, fruits develop and fruits fall off. “The most potent form of the hormone is indole-3-acetic acid, abbreviated IAA, and my lab members joke that IAA really stands for Involved in Almost Everything,” Strader said.
The backstory here is that whole families of proteins intervene between auxin and genes that respond to auxin by making proteins. In the model plant Arabidopsis thaliana, these include five transcription factors that activate genes when auxin is present (called ARFs) and 29 repressor proteins that block the transcription factors by binding to them (Aux/IAA proteins). A third family marks repressors for destruction.
“Different combinations of these proteins are present in each cell,” Strader said. “On top of that, some combinations interact more strongly than others, and some of the transcription factors also interact with one another.”
In an idle moment, David Korasick, a graduate fellow in the Strader and Jez labs and first author on the PNAS article, did a back-of-the-envelope calculation to put a number on the complexity of the system they were trying to understand.
From a strictly mathematical point of view, there are 3,828 possible combinations of the auxin-related Arabidopsis proteins. That is assuming interactions involve only one of each type of protein; if multiples are possible, the number, of course, explodes. To make any headway, Strader said, they needed a better understanding of how these proteins interact.
The rule in protein chemistry is the opposite of the one in design: instead of form following function, function follows form. So to figure out a protein’s form — the way it folds in space — they turned to the Jez lab, which specializes in protein crystallography, essentially a form of high-resolution microscopy that allows protein structures to be visualized at the atomic level.
Korasick had the job of crystallizing ARF7, a transcription factor that helps Arabidopsis bend toward the light. With the help of Joseph Jez, PhD, associate professor of biology, and postdoctoral research associates Corey Westfall and Soon Goo Lee, Korasick cut “floppy bits” off the protein that might have made it hard to crystallize, leaving just the part of the protein where it interacts with repressor molecules. After he had that construct, crystallization was remarkably fast. He set up his first drops in solution wells on the Fourth of July. The protein crystallized without a fuss, and he ran the crystals up to the Advanced Photon Source at the Argonne National Laboratory outside Chicago. By Aug. 1, he had the diffraction data he needed to solve the protein’s structure.
The previous model for the interaction between a repressor and a transcription factor (a model that had stood for 15 years, Strader said) was that the repressor lay flat on the transcription factor, two domains on the repressor matching up with the corresponding two domains on the transcription factor. The structural model Korasick developed showed that the two domains fold together to form a single domain, called a PB1 domain. A PB1 domain is a protein interaction module that can be found in animals and fungi as well as plants. The transcription factor ARF7 turned out to have a magnet-like PB1 interaction region with positively and negatively charged faces.
The repressor proteins, which are predicted to have PB1 domains identical to that of the ARF transcription factor, then stick to one or the other side of the transcription factor’s PB1 domain, preventing it from doing its job. Experiments showed that there had to be a repressor protein stuck to both faces of the transcription factor’s PB1 domain to repress the activity of auxin. This means the old model, which pairs a single repressor protein with a single transcription factor, is wrong, Strader said.
The double-sided interaction domain may allow multi-protein chains to form. In Korasick’s crystal, five of the ARF7 PB1 domains stuck to one another, forming a pentamer. “It was really beautiful to look at in the software, because you could actually see its spirals and turns,” said Korasick. The pentamer in the crystal does not by itself prove that such chains form in living plant cells, but both Strader and Korasick suspect that they do.
Strader points out that the complexity of the auxin signaling system has increased over evolutionary time as plants became fancier. A simple plant like the moss Physcomitrella patens has fewer signaling proteins than a complicated plant like soybean.
“Probably what that’s saying is that it’s really, really important for a plant to be able to modulate auxin signaling, to have the right amount in each cell, to balance positive and negative growth,” Korasick said.
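As a side note, the kind of back-of-the-envelope counting Korasick describes can be reproduced in a few lines of Python. The family sizes and the pairing assumption below are placeholders taken loosely from the text (the excerpt does not spell out exactly how the 3,828 figure was computed), so this sketch only illustrates how quickly the number of possible protein combinations grows.

```python
from itertools import combinations

# Placeholder protein families; sizes follow the article's description, but the
# exact counting assumptions behind the quoted 3,828 figure are not given here.
arfs = [f"ARF{i}" for i in range(1, 6)]        # activating transcription factors
aux_iaas = [f"IAA{i}" for i in range(1, 30)]   # Aux/IAA repressor proteins
proteins = arfs + aux_iaas

# Unordered pairs of distinct proteins (one interaction partner of each).
pairs = list(combinations(proteins, 2))
print(f"{len(proteins)} proteins -> {len(pairs)} possible pairwise combinations")

# If complexes can contain more than two proteins, the count explodes:
for k in (3, 4, 5):
    n = sum(1 for _ in combinations(proteins, k))
    print(f"possible {k}-protein combinations: {n}")
```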
http://news-archive.wustl.edu/news/Pages/26686.aspx
Plants can move when their cells grow as a response to either light or gravity, which they are able to detect even as seedlings. This type of movement is slow and permanent. Other kinds of plants are capable of quick, momentary bursts of movement, which are due to hydraulics. Phototropism occurs when the pigment phototropin absorbs light. This pigment is present in the tips of plant shoots, which are the sources of directional growth in the plant. Once phototropin absorbs light, the growth hormone auxin is released, causing the cells of the plant to divide. Auxin responds directly to the direction of light, thus allowing the plant to grow toward where light is strongest. Gravitropism, which also utilizes auxin, allows a plant to grow either toward or away from the direction of gravity. This process is regulated by statoliths, or small packages of starch that are present in the shoot tips and roots of the plant. When the plant is tilted, the statoliths settle on whichever side gravity dictates. Auxin is then released in that direction. Some predatory plants such as the Venus flytrap move as a response to a physical stimulus. Tiny hairs cover the Venus flytrap's surface and allow the plant's cells to sense pressure. Once enough pressure is reached, chloride ions are released that allow an electrical signal to travel throughout the plant. This causes potassium ions to enter and exit cells. Water then follows, as parts of the plant function as levers.
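The differential-growth logic described above (auxin accumulating on the shaded side so that those cells elongate faster and the shoot tip tilts toward the light) can be caricatured in a few lines of Python. This is a toy illustration with invented rates, not a model of real auxin kinetics.

```python
# Toy illustration of phototropic bending: the shaded flank of a shoot receives
# more auxin, elongates more, and pushes the tip toward the light.
# All numbers are invented for illustration.

def bend_toward_light(steps=5):
    lit, shaded = 1.0, 1.0                  # relative lengths of the two flanks
    auxin_lit, auxin_shaded = 0.3, 0.7      # auxin redistributed toward the shaded side
    for _ in range(steps):
        lit += 0.1 * auxin_lit              # elongation proportional to local auxin
        shaded += 0.1 * auxin_shaded
    print(f"lit flank {lit:.2f}, shaded flank {shaded:.2f}: "
          f"the longer shaded flank bends the tip toward the light")

bend_toward_light()
```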
https://www.reference.com/world-view/plants-move-48a31c8395a6f291
The plant hormone auxin regulates diverse aspects of plant growth and development. Recent studies indicate that auxin acts by promoting the degradation of the Aux/IAA transcriptional repressors through the action of the ubiquitin protein ligase SCF(TIR1). The nature of the signalling cascade that leads to this effect is not known. However, recent studies indicate that the auxin receptor and other signalling components involved in this response are soluble factors. Using an in vitro pull-down assay, we demonstrate that the interaction between transport inhibitor response 1 (TIR1) and Aux/IAA proteins does not require stable modification of either protein. Instead auxin promotes the Aux/IAA-SCF(TIR1) interaction by binding directly to SCF(TIR1). We further show that the loss of TIR1 and three related F-box proteins eliminates saturable auxin binding in plant extracts. Finally, TIR1 synthesized in insect cells binds Aux/IAA proteins in an auxin-dependent manner. Together, these results indicate that TIR1 is an auxin receptor that mediates Aux/IAA degradation and auxin-regulated transcription.
https://www.ncbi.nlm.nih.gov/pubmed/15917797?dopt=Abstract
Plant cells need to know their tops from their bottoms. This is necessary to control basic aspects of development—where to place a new organ and which direction to grow in. It is also crucial for responding to the environment. Plants do this in a directional manner, growing stems toward intense light or roots downward in response to gravity and vice versa. Plants use polarity cues to localize and relocalize members of the PIN-FORMED (PIN) family of proteins. PIN proteins export auxin and localize to one side of the cell (Wiśniewska et al., 2006). This is often coordinated across tissues, with PINs in many cells aligned in the same direction. This means that they can move auxin directionally and cause local developmental changes. A classic example of this is in the root: PIN2 is apically localized in root epidermal cells. After gravistimulation, apical PIN2 causes an auxin accumulation on the lower epidermis and makes roots grow downward (Wiśniewska et al., 2006). Previous studies have shown that phosphorylation and vesicle trafficking are important for PIN polarity, but the molecular components that control these processes are poorly characterized. In this issue of The Plant Cell, Zhang and colleagues (2020) show that the phospholipid flippase ALA3 (that shuffles lipids from one side of the membrane to the other) regulates PIN polarity. To show this, the authors ectopically expressed PIN1 under the control of a PIN2 promoter in pin2 mutant Arabidopsis (Arabidopsis thaliana) plants. These plants normally express PIN1 on the basal side of root epidermal cells. The authors identified an EMS mutant with apically localized PIN1 (see figure). They identified the causal gene as AMINOPHOSPHOLIPID ATPASE3 (ALA3) and showed that ala3 mutants exhibit an array of auxin-related developmental defects. These include altered root gravitropism, defects in root hairs, apical hook formation, venation patterning, and petal number, as well as enhanced auxin transcriptional responses during etiolation as measured by the auxin reporter DR5rev::GFP. As predicted for a mutant with defects in membrane composition, ala3 mutants are defective in vesicle trafficking. They are more sensitive to brefeldin A, which inhibits ADP ribosylation factor guanine nucleotide exchange factor (ARF GEF) regulators of vesicle budding and causes membrane-localized proteins to be internalized as intracellular aggregates. Using the dye FM4-64, which marks lipid bilayers, Zhang and coworkers (2020) showed that ala3 mutants show increased endocytosis and disrupted trafficking to the Golgi and trans-Golgi network. To further test the role of ALA3 in ARF GEF-mediated vesicle trafficking, the authors crossed the ala3 mutant to the ARF GEF mutants big3 and gnom. They showed that big3 is epistatic to ala3, while ala3 and gnom act synergistically. They also used bimolecular fluorescence complementation and coimmunoprecipitation assays in Nicotiana benthamiana to show that ALA3 interacts physically with GNOM and BIG3. These results suggest that ALA3 acts together with ARF GEFs to regulate PIN polarity via targeted vesicle trafficking. These results echo findings in Caenorhabditis elegans and Saccharomyces cerevisiae, where ARF GEFs interact with flippases to regulate membrane properties. But many questions remain about how flippases link membrane properties to vesicle trafficking and subcellular polarity.
One idea is that flippases generate an imbalance in phospholipids between the internal and external membrane surfaces, causing the membrane to bend inward and initiate vesicle budding (Lopez-Marques et al., 2014). Whatever future research holds, it will be fascinating to understand how different species use this poorly understood process to regulate fundamental aspects of development.
http://www.plantcell.org/content/32/5/1354
Type 2 diabetes is a polygenic disease characterized by hyperglycaemia due to impaired pancreatic beta-cell function and insulin resistance in peripheral target tissues such as skeletal muscle, adipose tissue and the liver. The disease develops as a conspiracy between the genetic background and the environment. Recent genome-wide association studies have identified more than 60 genetic variants associated with type 2 diabetes. Moreover, ageing, physical inactivity and obesity represent non-genetic risk factors for type 2 diabetes. The interaction between genes and environment may involve epigenetic factors, such as DNA methylation and histone modifications, to promote type 2 diabetes. Indeed, recent studies from our group and others propose that epigenetic factors play an important role in the growing incidence of type 2 diabetes. We were the first to demonstrate that DNA methylation plays a role in gene regulation in pancreatic islets from patients with type 2 diabetes. Additionally, we have performed genome-wide analyses of DNA methylation in pancreatic islets, skeletal muscle, adipose tissue and the liver from subjects with type 2 diabetes and non-diabetic controls. In these studies, we identified epigenetic alterations that are likely to contribute to the development of diabetes. We have also shown that age, diet, physical activity, birth weight and genetic variation influence the DNA methylation pattern in human pancreatic islets, skeletal muscle, adipose tissue and the liver. Nevertheless, our knowledge about the epigenetic mechanisms linking environmental factors and type 2 diabetes remains limited. The overall objective of our research is to identify the key epigenetic mechanisms influencing the pathogenesis of T2D. We are currently analysing DNA methylation and histone modifications in a number of human cohorts. Here, we examine if non-genetic and genetic factors as well as type 2 diabetes affect epigenetic variation in human tissues (Figure 1). We are further relating DNA methylation to gene expression, in vivo metabolism and type 2 diabetes. Furthermore, we are dissecting the role of epigenetic enzymes in development of type 2 diabetes.
https://www.ludc.lu.se/research-units/epigenetics-and-diabetes
Epigenetics, the study of heritable changes in gene function that happen without DNA sequence change, is a relevant topic in Biology nowadays. Epigenetic mechanisms are crucial for the proper development of plants and it is now possible to exploit epigenetic variation to obtain novel crop varieties in a more rational and efficient way. We are developing a translational research program from the model plant Arabidopsis thaliana to Brassica crop species (Figure 1). To unravel the epigenetic basis of key agronomic traits we are using molecular genetic analyses together with state-of-the-art epigenomics approaches. Flowering time is a developmental transition that has a direct impact on crop yield because it is crucial for the formation of fruit and seeds. Over the past decades, research in Arabidopsis has shown that flowering time is regulated by a number of epigenetic mechanisms in response to endogenous and environmental cues. However, detailed characterization of flowering pathways and epigenetic phenomena in crop species is scarce. To address this issue, we are investigating flowering time in a Brassica rapa oilseed type cultivar by using a series of B. rapa TILLING mutants in master floral regulators and epigenetic modifier factors (Figure 2). Our work includes comprehensive flowering time phenotyping, molecular characterization of flowering time gene expression and genomics analyses.
Figure 2. B. rapa plants mutated in a floral promoting gene delay flowering time (A) whereas a mutation in an epigenetic modifier gene confers an early flowering phenotype (B).
Plants are able to track and measure a number of environmental cues. They regulate their growth and metabolism according to these conditions to adapt perfectly to an ever-changing environment. We are studying how plants regulate key developmental transitions in response to light and ambient temperature (Figure 3A). In addition to flowering time, we are also interested in other environmentally regulated developmental processes like seedling emergence (Figure 3B). This process is tightly regulated in response to light and temperature and it is a determinant of seed vigour, an important agronomic trait.
Figure 3. (A) B. rapa development of plants grown at 21 °C vs 28 °C ambient temperature. (B) B. rapa seedlings germinate differently in darkness or constant light.
The study of the epigenome landscape and its relation with the underlying genome sequence in animal and plant cells has become a central question nowadays. The methylation of specific amino acid residues at histone tails is a conserved epigenetic mechanism involved in the regulation of fundamental processes like transcription or DNA replication. Nevertheless, epigenome studies in crop plants and their comparison with model plant systems are uncommon. In collaboration with the BIOLOGICAL INFORMATICS group we are investigating the lysine methylation epigenome in Arabidopsis and Brassica crops with an emphasis on the evolutionary relationships of epigenomic signatures. We are producing new state-of-the-art epigenome datasets and developing new computational methods to precisely infer different epigenetic states in plants (Figure 4). We will also study the evolutionary patterns of histone modifications that regulate gene expression and define novel epigenomic features. Being able to study the plant epigenome will help us to understand the complex gene regulatory processes that control plant development and are the basis for important crop traits.
Figure 4. H3K27me3 B. rapa epigenome.
IGV browser snapshots of ChIP-seq signal across representative genes.
The transition from vegetative growth to reproduction is critical in plant life and determines plant fitness. Flowering time regulation has been much studied in Arabidopsis; however, its regulation in response to pathogen infection has not, to our knowledge, been analysed. In collaboration with the PLANT-VIRUS INTERACTION AND CO-EVOLUTION group, we are analysing the role of master floral regulators in plant-virus interactions. Our results contribute to understanding the interaction between plant development and plant-pathogen interactions, a novel area of research that may soon become a hot topic.
Funding: "Apoyo a Centros de Excelencia Severo Ochoa" programme for CBGP (UPM-INIA) (MINECO SEV2016-0672), and the project "Epigenetic regulation of flowering time in Brassica oilseed crops" under the Programa Estatal de I+D+i Orientada a los Retos de la Sociedad (MINECO BIO2015-68031-R).
http://www.cbgp.upm.es/index.php/en/scientific-information/young-investigator-research-lines/epigenetic-regulation
Institutional Affiliation: Pennsylvania State U.
Grant number: Gr. 9643
Approve Date: April 16, 2018
Project Title: Grogan, Dr. Kathleen E., Pennsylvania State U., State College, PA - To aid research on 'Functional Epigenomics of Growth and Development in Human Hunter-gatherers and Agriculturalists'
Preliminary abstract: The epigenome is one mechanism through which sociocultural processes and environmental variation may influence human biology and even evolution. Because epigenetic marks can modify gene expression, they are an important contributor to human phenotypic variation. Epigenetic patterns are affected by inherited genetic variation and dynamically responsive to the ecological or social environment. By studying patterns of epigenomic variation among human populations, we can contextualize human variation and evolution within the framework of major differences in environmental factors and/or lifestyles. For example, subsistence strategy differences, e.g. between hunting and gathering versus agricultural societies, result in major habitat, activity level, and nutritional intake differences that could affect phenotypic variation. Compared to neighboring agriculturalist (AGR) populations, rainforest hunter-gatherers (RHGs) have significantly shorter mean adult stature. Although the environment plays a role, this height difference has a major heritable component. Furthermore, RHG and AGR populations have epigenetic differences near genes involved in growth. To study the interaction between the genome, epigenome, and environment, I will quantify how gene expression and methylation patterns of cells from Batwa RHG and Bakiga AGR from southwest Uganda (1) differ at baseline and (2) change in response to growth hormones. By describing baseline epigenomic variation and its response to growth hormone treatment, we may better understand whether gene expression differences play a role in stature differences between populations. This project represents a unique opportunity to investigate evolutionary and ecological influences on epigenetic regulation of growth and development as well as the flexibility of the mechanisms regulating these pathways.
https://wennergren.org/grantee/kathleen-grogan/
Somatic cell reprogramming is the process by which enforced expression of defined embryonic transcription factors (TFs) in somatic cells changes their fate to induced pluripotent stem cells (iPSCs). The latter cells, similar to embryonic stem cells (ESCs) derived by explanting early mammalian embryos, are characterized by two hallmark properties: they can self-renew infinitely in culture and they can differentiate to form all cell types of the adult body, holding great potential for regenerative medicine. In addition, iPSC technology offers a unique and tractable experimental system to study the molecular mechanisms underlying cell fate changes. In our lab we focus on the study of three-dimensional chromatin architecture and its dynamic rearrangements upon differentiation and reprogramming. We hypothesize that the interplay among transcription factors, epigenetic modulators and chromatin topology determines the gene expression program and cell identity. Unraveling the principles of this interplay will enable deeper understanding of physiological or pathological cell fate alterations, such as lineage specification and cancer, respectively.
Approach
We use high-throughput sequencing techniques such as 4C-Seq, Hi-C, HiChIP, ChIP-seq and scRNA-seq. We also employ biochemical, molecular, and cell biology assays, as well as novel mouse genetics tools. Our data on chromatin organization, combined with gene expression data, DNA methylation and chromatin occupancy studies, will provide an integrative view of epigenetic regulatory mechanisms, which govern pluripotency, differentiation, and reprogramming.
https://www.apostoloulab.com/research?lightbox=dataItem-il9gyqzd
Dr. Lin is currently a grantee of both NIH and Fulbright, with a long-term interest in reducing the prevalence of substance abuse around the world. Dr. Lin’s research focuses on delineating environmentally related genomic risks for substance abuse by mapping out molecular pathways that translate external risks into regulated gene activity in the dopaminergic system related to addiction. He is also interested in exploiting these risk pathways for medication development. Prior to joining McLean Hospital in 2006, Dr. Lin was an IRTA fellow and then a research fellow at the National Institute on Drug Abuse Intramural Research Program in Baltimore, and worked as an instructor at the former New England Primate Research Center of Harvard Medical School. He is now director of McLean’s Laboratory of Psychiatric Neurogenomics. Since 2007, Dr. Lin’s Laboratory of Psychiatric Neurogenomics has been exploring the influence of genes on brain function and how they contribute to neurological or psychiatric disorders. Ultimately, Dr. Lin aims to be able to provide early diagnosis of certain conditions as well as to develop medications that prevent or treat them. Dr. Lin’s group aims to clarify the regulatory cascades for genes implicated in the pathophysiology of monoamine-related brain disorders. The ultimate goal is to develop novel therapeutic approaches. The dysregulation of dopamine transmission contributes to several stressor-sensitive brain disorders. These include drug addiction, depression, schizophrenia, and Parkinson’s disease. The molecular pathways leading from environmental stressors to dysregulation in vulnerable individuals are candidates for medication targets. Dr. Lin’s research focuses on three goals: identifying the genetic risk factors in dopamine neurons, devising new genetic approaches to understanding the interaction between the environment and dopamine, and developing therapeutic strategies based on related genetic pathways. Among these risk factors is the human dopamine transporter gene (hDAT or SLC6A3). This gene, hDAT, is associated with a variety of major brain disorders, suggesting that various pathways might regulate related hDAT variants. Dr. Lin’s group is interested in how genomic status (DNA sequence polymorphisms, epigenetic modification, and molecular binding) affects the transcriptional activity of hDAT. Specifically, they are working on novel transcription mechanisms in hDAT. The goal is to clarify regulatory mechanisms of abnormal gene expression and link abnormal expression levels to the pathophysiology of the relevant diseases. The therapeutic potential of elucidating the regulatory cascades depends on identifying transcription factors that bind to polymorphic sites and generating gene knockout and knockin models in rodents. If linked to pathophysiology, these factors can serve as medication targets. Correcting transcriptional activity could restore normal levels of gene expression. These large molecule-based approaches may eventually contribute to the growing field of individualized medicine. The environment plays a major role in the etiology of major brain disorders, especially addiction. To understand the interplay between the environment and genetics, Dr. Lin is investigating polymorphism-specific environmental regulations including transcriptional and epigenetic mechanisms involved in monoamine (especially dopamine) function.
Aging and environmental factors, such as stressors and drug abuse, likely remodel chromatin in a manner specific to brain regions. Such epigenetic “imprinting” could contribute, in a polymorphism-dependent manner, to the process of addiction and to triggering relapse. The lab studies the causes of substance abuse at several levels—genetic, genomic, and environmental—and the interactions between them. Some people, for example, carry variants of genes that make them more susceptible to stress in their environments. This can raise their risk of substance abuse.
https://www.mcleanhospital.org/profile/zhicheng-carl-lin
Genetic information on genomic DNA is faithfully replicated and transmitted to the two daughter cells through mitosis. However, for differential expression of genes in various cell lineages during development, or for prompt responses to environmental stimuli, organisms further utilize epigenetic mechanisms that generate additional layers of heritable information on chromosomes. Inheritance of the epigenetic information is mediated by modifications of chromatin, including DNA cytosine methylation, post-translational modifications of the core histone proteins and production of small RNA molecules. The epigenetic modifications can mediate both short-term (mitotic) and long-term (meiotic) transmission of active or inactive states of chromatin without changing the primary DNA sequence. Importantly, the epigenetic marks are heritable but potentially reversible, thereby allowing dynamic regulation of gene activities in response to surrounding environmental changes. Plants offer ideal model systems for the study of general epigenetic mechanisms and the epigenetic inheritance of phenotypic variations. For instance, many epigenetic modifying factors found in other organisms are evolutionarily conserved in plants. Moreover, detailed profiles of genome-wide gene expression and epigenetic states of the genome (epigenome) in various environmental conditions are now available for model plants such as Arabidopsis and rice. By taking both genetic and genomic approaches, we will explore the molecular mechanisms of epigenetic regulation of gene activity and trans-generational inheritance of epigenetic memory in plants.
Research Goals
By using the model plant Arabidopsis and other plant systems, we will address the following key questions in epigenetics:
- How do cells distinguish between essential genes and parasitic transposons, and deposit the characteristic epigenetic modifications?
- What are the underlying molecular mechanisms of inheritance of epigenetic states of chromatin through cell divisions, or over multiple generations?
- How much of the phenotypic variation or complex traits is attributed not to the conventional “genotype” but to the plastic “epigenotype”? Do epigenotypes ultimately contribute to genome evolution?
Trans-generational epigenetic inheritance of phenotypic variations has been reported in many organisms including plants and animals. Since plants conserve a wide range of epigenetic modifications, our studies could also have broader implications for epigenetic regulation of genome integrity and phenotypic variations in other organisms.
https://groups.oist.jp/peu
Summary: T cells are exquisitely poised to respond rapidly to pathogens and have proved an instructive model for exploring the regulation of inducible genes. Individual genes respond to antigenic stimulation in different ways, and it has become clear that the interplay between transcription factors and the chromatin platform of individual genes governs these responses. Our understanding of the complexity of the chromatin platform and the epigenetic mechanisms that contribute to transcriptional control has expanded dramatically in recent years. These mechanisms include the presence/absence of histone modification marks, which form an epigenetic signature to mark active or inactive genes. These signatures are dynamically added or removed by epigenetic enzymes, comprising an array of histone-modifying enzymes, including the more recently recognized chromatin-associated signalling kinases. In addition, chromatin-remodelling complexes physically alter the chromatin structure to regulate chromatin accessibility to transcriptional regulatory factors. The advent of genome-wide technologies has enabled characterization of the chromatin landscape of T cells in terms of histone occupancy, histone modification patterns and transcription factor association with specific genomic regulatory regions, generating a picture of the T-cell epigenome. Here, we discuss the multi-layered regulation of inducible gene expression in the immune system, focusing on the interplay between transcription factors and the T-cell epigenome, including the role played by chromatin remodellers and epigenetic enzymes. We will also use IL2, a key inducible cytokine gene in T cells, as an example of how the different layers of epigenetic mechanisms regulate immune responsive genes during T-cell activation.
http://ecite.utas.edu.au/85566
Abnormal brain-derived neurotrophic factor (BDNF) signaling seems to have a central role in the course and development of various neurological and psychiatric disorders. In addition, positive effects of psychotropic drugs are known to activate BDNF-mediated signaling. Although the BDNF gene has been associated with several diseases, molecular mechanisms other than functional genetic variations can impact on the regulation of BDNF gene expression and lead to disturbed BDNF signaling and associated pathology. Thus, epigenetic modifications, representing key mechanisms by which environmental factors induce enduring changes in gene expression, are suspected to participate in the onset of various psychiatric disorders. More specifically, various environmental factors, particularly when occurring during development, have been claimed to produce long-lasting epigenetic changes at the BDNF gene, thereby affecting availability and function of the BDNF protein. Such stable imprints on the BDNF gene might explain, at least in part, the delayed efficacy of treatments as well as the high degree of relapses observed in psychiatric disorders. Moreover, the BDNF gene has a complex structure displaying differential exon regulation and usage, suggesting a subcellular- and brain region-specific distribution. As such, developing drugs that modify epigenetic regulation at specific BDNF exons represents a promising strategy for the treatment of psychiatric disorders. Here, we present an overview of the current literature on epigenetic modifications at the BDNF locus in psychiatric disorders and related animal models.
https://www.ncbi.nlm.nih.gov/pubmed/21894152?dopt=Abstract
Epigenetic Modification in Mammal’s Development
Epigenetic modification has emerged as a surrogate marker of exposure to the environment. Along with environmental exposure data and genetic variants, epigenetic marks are now an important component of epidemiology research that help to identify disease-relevant genomic areas, offer options for prevention and early detection measures, and improve risk stratification. Any meiotic or mitotic alteration that does not result in a change in DNA sequence but has a major impact on the organism’s development is characterised as an epigenetic modification. In vertebrates, enzymatic modifications of cytosine bases and histone proteins in the nucleosome core give heritable epigenetic information not encoded in the cell’s nucleotide sequence. During the S phase of the cell cycle, chromatin replication provides a window of opportunity for these enzymes and auxiliary factors to load onto newly produced DNA and robustly disseminate all the molecular information. If the correct epigenetic modification is not preserved, it could have disastrous outcomes for the cell, such as improper gene expression and apoptosis. Notably, the methylation of cytosine in mammalian cells is maintained reliably between cell divisions. DNA (cytosine-5) methyltransferases (DNMTs) catalyse the retention of DNA methylation throughout cell division.
DNA Methylation and Epigenetic Modification
DNA methylation occurs in mammalian genomes by covalent alteration of the fifth carbon (C5) in the cytosine base, with the majority of these modifications occurring at CpG dinucleotides. CpG dinucleotides are widely dispersed across the human genome, although they are concentrated in dense regions known as CpG islands (CGIs). The methylation pattern in every particular cell is the result of separate yet dynamic methylation and demethylation processes. Methylation patterns in differentiated somatic cells are generally permanent and inheritable in the mammalian genome. However, methylation pattern modification (demethylation/remethylation) occurs in two developmental stages: germ cells and preimplantation embryos. Unlike primordial germ cells, where genome-wide demethylation occurs, the genomes of mature sperm and eggs in mammals are extensively methylated as compared to somatic cells. During development and in normal (non-neoplastic/non-senescent) tissue types, CpG dinucleotides within CGI promoters are normally unmethylated. In one study, CGIs showed tissue-specific methylation of development-related genes, implying a pre-programmed DNA methylation process. DNA methylation can also be established by methylation spreading. Immediately after fertilisation the genome undergoes genome-wide demethylation; the majority of the genome is remethylated after the blastocyst stage, and remethylation continues at a lesser rate throughout development. The methylation status of CGIs was shown to be connected with DNA sequence, repeat rates, and predicted DNA structures, according to combined research using bioinformatic techniques and methylation data from chromosome 21. One of the fundamental aspects linked with complicated disorders such as cancer, type 2 diabetes, schizophrenia, and autoimmune disease is aberrant gene expression. These disorders are known to be heritable, even though their inheritance patterns are not Mendelian.
Several lines of evidence imply that epigenetic abnormalities, in combination with genetic modifications, are to blame for the illnesses’ dysregulation of important regulator genes. The epigenetic process explains some of the characteristics of complicated diseases, such as late onset, gender effects, parent-of-origin effects, and symptom variation.
Genomic Imprinting
Mammalian diploid animals have two copies of autosomal genes, one from each parent. Both parental alleles have an equal chance of being expressed in cells in most circumstances. However, depending on the gene’s parent-of-origin, a minority of autosomal genes are prone to genomic imprinting, in which expression is restricted to one of the two parental alleles. Failure to establish accurate genomic imprinting has been found to cause problems in embryonic and neonatal development, as well as neurological illnesses such as Prader-Willi syndrome, in placental mammals. Several protein-coding genes and at least one non-coding RNA (ncRNA) gene are often found in each imprinted gene cluster, which spans 100–3000 kb of DNA. The imprinting control region (ICR) is a single main cis-acting element that regulates the expression of imprinted genes in each cluster. ICRs are CpG-rich DNA regions that are solely methylated in one of the two parental gametes, carrying the parental information. During gametogenesis, this DNA methylation imprint is acquired. The parental imprints are determined before the sex is determined. Gametic imprints are placed on paternally imprinted genes during sperm production and on maternally imprinted genes during egg development as the embryo develops into a male or female. This methylation imprint is retained on the same parental chromosome through cell divisions after fertilisation. A set of epigenetic machinery is required for the establishment and preservation of imprints.
Dosage Compensation
In species with heterogametic sexes, one sex is known to carry a different complement of sex chromosomes from the other, so males and females have distinct transcription levels for these chromosomes. Dosage compensation systems involving regulation of gene expression and chromatin accessibility have arisen during evolution to address this imbalance. Many studies have suggested the existence of an underlying epigenetic process that increases the accessibility of X-chromosome chromatin in males, allowing for X-linked gene dose correction between the sexes. Dosage compensation is frequently regulated by epigenetic mechanisms that control chromatin accessibility on the X or Z chromosomes, in humans and birds respectively.
Chromatin Modification and Epigenetics
Long-term gene expression is influenced by epigenetic mechanisms, which is necessary for the precise execution of developmental programmes and the maintenance of cell types across cell divisions. In addition to the activation of genomic programmes that lead to the formation of specific cell types, a cell must also mute alternative gene expression programmes that are exclusive to other cell types to secure its fate. Neurogenesis, when neural cell fates are acquired in the developing nervous system, is the finest illustration of this lineage restriction.
In contrast to the stable and inheritable silencing of neuronal chromatin in terminally differentiated nonneuronal cells, the situation in ES cells and neuronal progenitors raises another epigenetic concern about gene expression, because these cells should be able to relieve the silent chromatin state upon differentiation to allow lineage-specific gene expression. Together, epigenetic mechanisms provide a key foundation for stem cell identity maintenance and long-term cellular memory, both of which are critical for normal development.
https://strickendots.com/epigenetic-modification-in-mammals-development/
This tutorial discusses a classification system that is often used to describe the measurement of concepts or variables that are used in social sciences and behavioral research. This classification system categorizes the variables as being measured on either a nominal, ordinal, interval, or ratio scale. After introducing the classification system and providing examples of variables which are typically measured on each type of scale, we note the implications of these measurement scales for the analysis of data. Specifically, we discuss the statistical tests which are most appropriate for data measured on each type of scale. Finally, we will briefly consider some of the limits and criticisms of this classification system. In the social and behavioral sciences, as in many other areas of science, we typically assign numbers to various attributes of people, objects, or concepts. This process is known as measurement. For example, we can measure the height of a person by assigning the person a number based on the number of inches tall that person is. Or, we can measure the size of a city by assigning the city a number which is equal to the number of residents in that city. Sometimes the assignment of numbers to concepts we are studying is rather crude, such as when we assign a number to reflect a person's gender (i.e., Male = 0 and Female = 1). This type of measurement is known as a Nominal measurement scale. A Nominal measurement scale is used for variables in which each participant or observation in the study must be placed into one mutually exclusive and exhaustive category. For example, categorizing study participants into "male" and "female" categories demonstrates that 'sex' is measured on a nominal scale. Every observation in the study falls into one, and only one, Nominal category. With a nominal measurement scale, there is no relative ordering of the categories -- the assignment of numeric scores to each category (Male, Female) is purely arbitrary. The next level of measurement, Ordinal measurement scales, do indicate something about the rank-ordering of study participants. For example, if you think of some type of competition or race (swimming, running), it is possible to rank order the finishers from first place to last place. If someone tells you they finished 2nd, you know that one person finished ahead of them, and all other participants finished behind them. Although ordinal variables provide information concerning the relative position of participants or observations in our research study, ordinal variables do not tell us anything about the absolute magnitude of the difference between 1st and 2nd or between 2nd and 3rd. That is, we know 1st was before 2nd, and 2nd was before 3rd, but we do not know how close 3rd was to 2nd or how close 2nd was to 1st. The 1st place finisher could have been a great deal ahead of the 2nd place finisher, who finished a great deal ahead of the 3rd place finisher; or, the 1st, 2nd, and 3rd place finishers may have all finished very close together. The image below illustrates the ordinal ranking of individuals in a competition. The tick mark to the far right illustrates the person who finished in first place, while the tick mark to the far left represents the person who finished sixth out of six.
The limits of ordinal data are most apparent when one looks at the distance between the third and the fourth place finishers. Although the absolute distance between third and fourth was not that large, the measurement of ordinal data does not indicate this detail. The next level of measurement, Interval scales, provide us with still more quantitative information. When a variable is measured on an interval scale, the distance between numbers or units on the scale is equal over all levels of the scale. An example of an Interval scale is the Fahrenheit scale of temperature. In the Fahrenheit temperature scale, the distance between 20 degrees and 40 degrees is the same as the distance between 75 degrees and 95 degrees. With Interval scales, there is no absolute zero point. For this reason, it is inappropriate to express Interval level measurements as ratios; it would not be appropriate to say that 60 degrees is twice as hot as 30 degrees. Our final type of measurement scales, Ratio scales, do have a fixed zero point. Not only are numbers or units on the scale equal over all levels of the scale, but there is also a meaningful zero point which allows for the interpretation of ratio comparisons. Time is an example of a ratio measurement scale. Not only can we say that the difference between three hours and five hours is the same as the difference between eight hours and ten hours (equal intervals), but we can also say that ten hours is twice as long as five hours (a ratio comparison). One of the primary purposes of classifying variables according to their level or scale of measurement is to facilitate the choice of a statistical test used to analyze the data. There are certain statistical analyses which are only meaningful for data which are measured at certain measurement scales. For example, it is generally inappropriate to compute the mean for Nominal variables. Suppose you had 20 subjects, 12 of whom were male, and 8 of whom were female. If you assigned males a value of '1' and females a value of '2', could you compute the mean sex of subjects in your sample? It is possible to compute a mean value, but how meaningful would that be? How would you interpret a mean sex of 1.4? When you are examining a Nominal variable such as sex, it is more appropriate to compute a statistic such as a percentage (60% of the sample was male). When a researcher wishes to examine the relationship or association between two variables, there are also guidelines concerning which statistical tests are appropriate. For example, let's say a University administrator was interested in the relationship between student gender (a Nominal variable) and major field of study (another Nominal variable). In this case, the most appropriate measure of association between gender and major would be a Chi-Square test. Let's say our University administrator was interested in the relationship between undergraduate major and starting salary of students' first job after graduation. In this case, salary is not a Nominal variable; it is a ratio level variable. The appropriate test of association between undergraduate major and salary would be a one-way Analysis of Variance (ANOVA), to see if the mean starting salary is related to undergraduate major. Finally, suppose we were interested in the relationship between undergraduate grade point average and starting salary.
In this case, both grade point average and starting salary are ratio level variables. Now, neither Chi-square nor ANOVA would be appropriate; instead, we would look at the relationship between these two variables using the Pearson correlation coefficient. As a final comment, we alert you to what is perhaps the most common criticism of the measurement scales discussed in this tutorial. Even though this comment might seem at odds with much of what has been covered in this tutorial, it is an important issue that we must deal with. In the social and behavioral sciences, much of what we study is measured on what would be classified as an ordinal level. We often ask if people "Strongly Disagree", "Slightly Disagree", or are "Neutral" to a series of statements. We then assign a value of '1' if they Strongly Disagree with a statement, up to a '5' if they Strongly Agree with a statement. To be sure, this type of measurement is ordinal, in the sense that "Strongly Agree" reflects more agreement than "Slightly Agree". This type of measurement is not an interval or a ratio level of measurement, because we can not state for certain that the interval between "Strongly Disagree" and "Slightly Disagree" is equivalent to the interval between "Slightly Disagree" and "Neutral". Nor can we say that there is an absolute zero point for level of agreement. However, if we were to rigidly follow the rules of "permissible" analyses for ordinal variables, many of the analyses we conduct in social sciences research would be deemed impermissible. On the other hand, some scientists have conducted computer simulations to try and find out what would happen if we violated certain "rules" of data analysis. They have found that for the most part, it is alright to treat ordinal data (such as variables which have been measured using Strongly Disagree to Strongly Agree response alternatives) as though it were interval level data, and conduct statistical tests that are appropriate for interval level data. The point of this concluding note is to inform you that while the classification of variables according to their measurement scales is useful to assist you in choosing an analytic procedure, it is not meant to be a substitute for using sound judgment when choosing a statistical analysis. In summary, when choosing a statistical analysis procedure, you should consider the level of measurement of your variables, but you also need to consider the assumptions of the statistical analytic procedure you are considering, and you also need to consider the substantive meaning and the interpretability of the statistics you are computing. There is no substitute for informed, sound judgment when choosing a statistical test for analyzing your data.
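The pairing of measurement scales with tests described above (Chi-square for two nominal variables, one-way ANOVA for a nominal explanatory variable with an interval/ratio response, Pearson correlation for two interval/ratio variables) can be encoded as a small rule-of-thumb lookup. The Python sketch below simply restates that guidance; it is illustrative and is not a substitute for checking a test's assumptions.

```python
def suggest_test(explanatory_scale: str, response_scale: str) -> str:
    """Rule-of-thumb test choice based on the measurement scales of two variables."""
    quantitative = {"interval", "ratio"}
    x, y = explanatory_scale.lower(), response_scale.lower()

    if x == "nominal" and y == "nominal":
        return "Chi-square test of association"
    if x == "nominal" and y in quantitative:
        return "One-way ANOVA (compare group means)"
    if x in quantitative and y in quantitative:
        return "Pearson correlation (or simple linear regression)"
    return "Ordinal data: consider rank-based methods, or treat as interval with caution"

print(suggest_test("nominal", "nominal"))   # gender vs. major field -> Chi-square
print(suggest_test("nominal", "ratio"))     # major vs. starting salary -> ANOVA
print(suggest_test("ratio", "ratio"))       # GPA vs. starting salary -> Pearson correlation
```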
https://geosim.cs.vt.edu/Sable/converted/Measurement/activity.html
26. The Myth Busters hosts concluded from their study that there is “little doubt, yawning seems to be contagious.” Based on your simulation analysis of their data, and considering the issue of statistical significance, do you agree with this conclusion? Explain your answer, as if to the hosts, without using statistical jargon. Be sure to include in your answer an explanation of why you conducted the simulation analysis and what the analysis revealed about the research question.

A rock concert producer has scheduled an outdoor concert. The producer estimates the attendance will depend on the weather according to the following table:

Weather      Attendance    Probability
wet, cold
wet, warm
dry, cold
dry, warm

(a) What is the expected attendance? (b) If tickets cost $30 each, the band will cost $150,000, plus $50,000 for administration, what is the expected profit?

Discuss the logic underlying the use of three-sigma limits on a Shewhart control chart. How will the chart respond if narrower limits are chosen? How will it respond if wider limits are chosen?

Which of the following statements is correct? A. Changing the units of measurement of the explanatory or response variable does not change the value of the correlation. B. A negative value for the correlation indicates that there is no relationship between the two variables. C. The correlation has the same units (e.g., feet or minutes) as the explanatory variable. D. Correlation...

1. Is this a randomized experiment or an observational study? Explain how you know. 2. State the two variables measured on each unit. Recall from Exercise 10.1.11 that the data file House Prices contains data on prices ($) and sizes (in square feet) for a random sample of houses that sold in the year 2006 in Arroyo Grande, California. a. State in words the appropriate null and alternative hypotheses to test whether there is an association between prices and sizes of houses. b. Describe how one might use everyday items (for...
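Returning to the rock concert problem above: the attendance and probability cells in the table are blank in this copy, so the sketch below uses made-up values simply to show the expected-value arithmetic the question is asking for. The numbers are hypothetical, not the ones from the original problem.

```python
# Hypothetical weather scenarios: (attendance, probability). Probabilities must sum to 1.
scenarios = {
    "wet, cold": (5000, 0.10),
    "wet, warm": (15000, 0.20),
    "dry, cold": (20000, 0.30),
    "dry, warm": (45000, 0.40),
}

ticket_price = 30                    # dollars per ticket
fixed_costs = 150_000 + 50_000       # band plus administration

# (a) Expected attendance: sum of attendance * probability over all scenarios.
expected_attendance = sum(att * p for att, p in scenarios.values())

# (b) Expected profit: expected revenue minus fixed costs
#     (the fixed costs do not depend on the weather).
expected_profit = ticket_price * expected_attendance - fixed_costs

print(f"Expected attendance: {expected_attendance:,.0f}")
print(f"Expected profit: ${expected_profit:,.0f}")
```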
https://www.quesba.com/questions/26-the-myth-busters-hosts-concluded-from-their-study-that-there-is-little-d-41636
Environmental variability in aquatic ecosystems makes the study of ectotherms complex and challenging. Physiologists have historically overcome this hurdle in the laboratory by using ‘average’ conditions, representative of the natural environment for any given animal. Temperature, in particular, has widespread impact on the physiology of animals, and it is becoming increasingly important to understand these effects as we face future climate challenges. The majority of research to date has focused on the expected global average increase in temperature; however, increases in climate variability are predicted to affect animals as much or more than climate warming. Physiological responses associated with the acclimation to a new stable temperature are distinct from those in thermally variable environments. Our goal is to highlight these physiological differences as they relate to both thermal acclimation and the ‘fallacy of the average’ or Jensen's inequality using theoretical models and novel empirical data. We encourage the use of more realistic thermal environments in experimental design to advance our understanding of these physiological responses such that we can better predict how aquatic animals will respond to future changes in our climate. Introduction Aquatic environments are highly variable. For example, temperature, dissolved oxygen and salinity vary daily, and often cycle based on external forces such as tides/water flow, air temperature or sunlight. These fluctuations in physical habitat have pervasive effects on the biology of ectotherms (see Glossary) living within these environments. Temperature, in particular, has widespread effects on biological function and drives ecological patterns of ectothermic species distributions (Seebacher and Franklin, 2012). Often, physiologists study these animals under ‘average’ conditions (i.e. average summer temperature versus average winter temperature) to understand the effects of temperature on their physiology for practical reasons and to simplify experimental designs. However, the response to a stable average condition may be quite different from the average response to variable conditions. For example, growth rate of an animal from a stable, average environment may be different from the average growth rate of an animal from a thermally fluctuating environment. There are two related but distinct concepts potentially driving these differences: thermal acclimation (see Glossary) to a new thermally variable environment and Jensen's inequality (see Glossary) – a mathematical property of non-linear averaging named after Johan Jensen, a Danish mathematician (Jensen, 1906). Denny (2017) recently provided an interesting Commentary on the mathematical derivation of Jensen's inequality, illustrating that in physiology, strict linear relationships between performance (e.g. growth, reproduction, metabolism) and the environment are uncommon and this will have consequences for biological systems. Thus, laboratory estimates of performance variables conducted on animals acclimated to a thermally stable environment may not accurately reflect their performance in the wild, where temperature varies. Our goal is to highlight the roles of Jensen's inequality and thermal acclimation in the physiological response of animals to thermally variable environments, particularly as it relates to aerobic metabolism as a performance indicator in fishes. 
We also provide empirical evidence in a wild fish to support the incorporation of thermal variability when investigating the relationship between performance and the environment. Surprisingly, we still understand relatively little regarding the effects of thermal variation on physiological performance despite recognition of its potential importance decades ago. In 1979, Cynthia Carey wrote, ‘So few studies have compared metabolic responses of ectotherms acclimated to constant and cyclic temperatures that no general patterns are apparent’ (Carey, 1979). Our understanding remains incomplete almost 40 years later. That said, there has been a recent resurgence in interest to characterize these differences, particularly in terms of global climate change and conservation efforts. New theoretical models incorporating the effects of thermal variability on several performance indicators have provided insight into the effects of this common environmental condition (e.g. Denny, 2017; Dowd et al., 2015; Martin and Huey, 2008; Ruel and Ayres, 1999; Vasseur et al., 2014), but empirical tests of these models are lacking. We highlight some of these models as well as examples of physiological research incorporating natural thermal variation that support the theoretical predictions of the models. We also emphasize the importance of understanding natural temperature diel cycles in wild aquatic species. Metabolism and temperature Metabolism is arguably one of the most important variables in animal ecophysiology as it sets constraints on the rate of biological functions such as growth, reproduction and locomotion (Brown et al., 2004; Hochachka and Somero, 2002). Thus, measurements of metabolism – specifically, aerobic metabolic rate – are commonly used to determine species-specific optimal temperatures and to predict whole-animal performance and fitness (see Glossary). The oxygen- and capacity-limited thermal tolerance (OCLTT) hypothesis (see Glossary), proposed by H.-O. Pörtner in 2010 (Pörtner, 2010), suggests that aerobic scope, the difference between minimum and maximum aerobic metabolic rate, is maximized within a defined thermal range (ToptAS) to optimize fitness-related performance (Pörtner, 2010; Pörtner and Farrell, 2008; Pörtner and Gutt, 2016; Pörtner et al., 2017). Outside this thermal range, aerobic scope declines and presumably decreases performance. Therefore, low aerobic scope may limit fitness-related performance such as growth, reproduction and activity (Farrell, 2009; Fry, 1947; Holt and Jørgensen, 2014; Pörtner, 2010; Wang and Overgaard, 2007). The OCLTT hypothesis is not without its limitations (Clark et al., 2013; Jutfelt et al., 2018) but has provided a link between environmental conditions, physiological performance and ecology, such that understanding an animal's physiology can allow us to predict its response to future changes in its environment (Cooke et al., 2012). Most of what we know regarding the effects of temperature on metabolism, aerobic scope and fitness in fishes comes from laboratory-based studies conducted on animals acclimated to stable thermal profiles (e.g. Brett, 1971; Claireaux et al., 2000; Clark et al., 2011; Crespel et al., 2017; Healy and Schulte, 2012; Jain and Farrell, 2003; Mazloumi et al., 2017; Norin et al., 2014; Poletto et al., 2017; Reidy et al., 2000). These studies provide important mechanistic insight – however, they do not readily allow extrapolation to natural conditions. 
Climate models predict that not only average temperature but also temperature variability and extreme weather events will increase (IPCC, 2013), and these are potentially as important in defining performance limits as average environmental temperature (Denny, 2017; Harley and Paine, 2009; Helmuth et al., 2005; Vasseur et al., 2014). Thus, to predict the effects of future climate change, we need to appreciate the effects of thermal variation on metabolism, thermal performance curves (TPCs) and TPC plasticity. By incorporating Jensen's inequality, we can begin to understand the effects of thermal variation on mean trait values (i.e. growth rate) and overall fitness.

Glossary

Ectotherm: An animal whose body temperature fluctuates with its environment.
Eurythermal: Able to tolerate a wide range of temperatures.
Feed conversion: Measure of food consumed that is converted into body mass versus waste.
Fitness: Ability of an animal to reproduce within its lifetime.
Jensen's inequality: Also known as the ‘fallacy of the average’, this is a mathematical property of non-linear functions derived by Johan Jensen in 1906. In biology, this is used to illustrate that the response of a system to average conditions is different from the system's average response to variable conditions.
OCLTT theory: Oxygen- and capacity-limited thermal tolerance. Delivery of oxygen is the limiting factor in thermal tolerance. Proposed by H.-O. Pörtner, 2010 (Pörtner, 2010).
Optimum temperature: Temperature at which performance is maximal.
Parr: Juvenile freshwater salmon with distinctive ‘parr’ marks.
Stenothermal: Able to tolerate a narrow range of temperatures.
Thermal acclimation: A non-hereditary, usually reversible phenotypic change of an individual in response to a change in temperature.

Jensen's inequality and physiological performance

TPCs are often generated to evaluate and estimate the ecological consequences of temperature. These curves measure instantaneous performance traits across fixed temperatures (Huey and Slatkin, 1976) and do not consider natural field conditions that are often heterogeneous in space and time (Sinclair et al., 2016). One way we can use TPCs and partially skirt around the temporal and spatial dynamism that is implicit in these curves is to incorporate Jensen's inequality. Here, using a TPC for a eurythermal animal (see Glossary; Fig. 1), generated with the assumption that animals are acclimated to a constant temperature for each performance measurement, we can calculate the performance (y) at any given temperature (x) using y=f(x). As Denny (2017) describes, if the relationship between x and y is a non-linear function, as it often is in biology, the average of y at x is not equal to the y of average x. In this example, the optimum temperature (see Glossary) that maximizes performance (Topt) is 18°C (Fig. 1). If temperature fluctuates about this average, such that equal time is spent at two different temperatures on the curve, we can calculate the average performance across the fluctuation using the slope of the straight line connecting the two points (Denny, 2017; Fig. 1). For example, using a range of ±2°C (16–20°C, average 18°C; line a), the average performance under fluctuation is roughly 8% lower than the performance at a stable 18°C. While this difference is relatively minor, it becomes magnified as the thermal range increases (±3°C, line b, 15% lower; ±4°C, line c, 30% lower).
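The arithmetic behind this worked example can be reproduced in a few lines of code. The sketch below uses a made-up Gaussian-shaped performance curve with Topt = 18°C rather than the curve fitted in Fig. 1, so the exact percentages will differ from those quoted above; it simply demonstrates that, near the optimum, the mean of f(T) under fluctuation falls below f(mean T).

```python
import numpy as np

# Hypothetical thermal performance curve: Gaussian-shaped with an optimum at 18 degrees C.
# This is an illustrative stand-in, not the curve used in the article.
T_OPT = 18.0
BREADTH = 4.0

def performance(temp_c):
    return np.exp(-((temp_c - T_OPT) ** 2) / (2 * BREADTH ** 2))

mean_temp = 18.0
for half_range in (2.0, 3.0, 4.0):
    # Equal time at the two extremes, as in the worked example (lines a, b and c).
    temps = np.array([mean_temp - half_range, mean_temp + half_range])
    perf_at_mean = performance(mean_temp)      # f(mean T): stable environment
    mean_perf = performance(temps).mean()      # mean f(T): fluctuating environment
    drop = 100 * (1 - mean_perf / perf_at_mean)
    print(f"+/-{half_range:.0f} degC: fluctuating performance is {drop:.1f}% below stable")
```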
This difference is further exacerbated if examined in a stenothermal (see Glossary) animal whose temperature range is comparatively narrow (i.e. a steeper TPC). In this instance, small temperature variations dramatically increase the effect of Jensen's inequality, resulting in larger decreases in physiological performance. Furthermore, given that most TPCs are asymmetric and skewed towards the critical minimum temperature (CTmin), changes in temperature beyond the optimum and approaching the critical maximum temperature (CTmax) will have comparatively greater performance effects than the same change in temperature below the optimum. Thus, thermal variability is crucial in predicting how an ectotherm will respond to changes in temperature. Given the shape of the TPC (Fig. 1), changes in temperature can have negligible, small or large effects on performance (Sinclair et al., 2016). Indeed, the effect of Jensen's inequality on animal performance is dependent on several factors: (1) the range of temperatures at which the species can survive (i.e. thermal breadth), (2) the nature of the relationship (e.g. exponential, logarithmic, linear), (3) the skewness of the relationship and (4) the range of the thermal variation with respect to the shape of the curve (Fig. 1). For example, in cases of thermal variation near the Topt, performance is predicted to be lower under fluctuating conditions (Fig. 1, lines a, b and c), with a greater inequality for stenothermal than eurythermal animals. By contrast, if, in this example, the thermal range being tested is at the colder end of the TPC (e.g. 6–12°C), performance is predicted to be higher under fluctuating compared with stable conditions (Fig. 1, line d). When thermal variation spans linear sections of the curve, there is no effect of Jensen's inequality. This predicted difference in performance with TPCs does not necessarily imply that animals from stable or variable environments have different trait or performance values when both are measured at Topt. Rather, the predicted decrease in average performance of animals in thermally variable environments results from the acute effects of exposure to temperatures above and below Topt that would alter performance over time. However, many animal species also exhibit varying degrees of plasticity in their TPCs when exposed to chronic thermal change in order to maximize or maintain performance under new variable environmental conditions (Angilletta, 2009; DeWitt and Scheiner, 2004; Guderley and St-Pierre, 2002; Hochachka and Somero, 2002; Wilson and Franklin, 1999). In such cases, ectothermic animals can reduce the thermal sensitivity of important physiological performance traits such as metabolic rate, enzyme activity, heart rate, etc. In turn, this decreased sensitivity should widen thermal breadth and buffer against temperature variability such that performance is maintained. Recent theoretical performance models incorporating thermal variation have shown that variation is as strong, or stronger, than average temperature alone at predicting future fish performance in forecasted climate scenarios (Vasseur et al., 2014). Vasseur et al. (2014) developed a theoretical model to account for both average temperature and temperature variability and tested it against 38 previously developed species-specific TPCs for globally distributed ectothermic invertebrates under projected climate model temperature extremes and variation. 
Using the average temperature alone, only 32% of the variation in thermal performance could be explained; however, when temperature variability was used, it explained 54% of the variation. Remarkably, 93% of the variation in performance can be explained if both average and variance are included in the model (Vasseur et al., 2014). The inequality between the performance at an average temperature and the average performance across a range of temperatures is not necessarily a biological manifestation; rather, it is a mathematical consequence of the non-linearity of TPCs and the natural variation of temperature in the wild. Thus, any predictive understanding of ectotherm physiological performance in future climate scenarios must account for thermal variability. Empirical data Empirical evidence is now emerging that supports these predicted differences in performance between animals in stable and thermally variable environments. Recent studies in fishes, reptiles and insects have shown that physiological responses of thermally cycled animals are distinct from those of animals acclimated to stable temperatures. We present data highlighting the differences between the effects of Jensen's inequality on mean trait values (e.g. growth rate) and TPC plasticity on instantaneous measures of physiological performance (e.g. metabolic rate). Growth rate To date, the majority of data suggest that thermal variability will decrease growth rate, as predicted by the theoretical models discussed above. In salmonid fishes, growth rate is significantly lower when fish are exposed to thermal fluctuations compared with stable temperatures (Flodmark et al., 2004; Imholt et al., 2011; Meeuwig et al., 2004). Furthermore, both Imholt et al. (2011) and Meeuwig et al. (2004) observed that growth rate in Atlantic salmon (Salmo salar) and cutthroat trout (Oncorhynchus clarki henshawi) decreased further as the magnitude of temperature fluctuations increased, even if the daily average temperature remained the same. A similar pattern was observed for a variety of North American temperate fish species, although not all responded in the same way (Eldridge et al., 2015). Growth rate also decreased in spike dace (Meda fulgida) exposed to thermal variations of 24–34, 28–34 and 30–34°C (Carveth et al., 2007). Similarly, the growth of juvenile walleye (Sander vitreus) decreased, whereas the growth of adult perch (Perca flavescens) increased when exposed to thermal variation (23 versus 23±4°C) (Coulter et al., 2016), highlighting the species-specific effects of thermal variation on growth. It is possible that life stage also played a role in these differences, as well as their individual thermal optima (22°C for walleye, 25°C for perch; Hokanson, 1977). Comparing these data with species-specific TPCs may help to reconcile these differences, bearing in mind the often static, instantaneous nature of the TPC. These changes in growth rate with thermal variation highlight the effects of Jensen's inequality such that the change in average growth rate is the result of changes in instantaneous growth across the range of temperatures experienced by the fish. The magnitude of these changes is dependent on the overall shape of the TPC and the range of temperatures being tested. Niehaus et al. (2012) attempted to validate the framework for predicting performance during thermal variation with empirical data on larval striped marsh frogs (Limnodynastes peronii). 
Developmental and growth rates were estimated from a model of thermal reaction norms based on stable acclimation temperatures, and then compared with empirical measurements under natural fluctuating temperature conditions. In the majority of cases, the observed and predicted rates were significantly different, and this difference increased with increasing thermal variation (18–28 versus 18–34°C; Niehaus et al., 2012). TPC plasticity may account for some of these differences as the animals acclimate to their new thermal cycle. These findings further underscore the inability to predict the effects of thermal variability on growth performance using stable temperature models alone. Metabolism Changes in growth rate are probably a result of changes in metabolic rate, food intake or feed conversion/growth efficiency (see Glossary). For example, food intake decreased in brown trout (Salmo trutta) held in a diel thermal cycle (Flodmark et al., 2004). However, there was no difference in food intake in Atlantic salmon (Imholt et al., 2011) or cutthroat trout (Meeuwig et al., 2004) in variable thermal environments, despite a decrease in growth. Similarly, no differences in food intake or gross feed conversion were observed in brown trout grown under varying thermal regimes compared with a stable temperature (Spigarelli et al., 1982). The effects of thermal variability on food intake and conversion are relatively understudied, but these few examples suggest that they may not play a significant role in the observed changes in growth rate. However, this is a complex area of research given that feed composition, digestive physiology, temperature and time of day all play a role in feed conversion. It is possible that decreases in growth rate in thermally variable environments may be due to changes in routine metabolic rate (RMR) as a result of acclimation (Beauregard et al., 2013). RMRs (fasted, unrestrained, but minimal activity) of Atlantic salmon parr (see Glossary) exposed to 20±2 or ±3°C were 25% and 32% higher, respectively, than those of parr maintained at a constant 20°C, when measured at 20°C (Beauregard et al., 2013). Interestingly, there were no changes in standard metabolic rate (SMR; RMR in the absence of any spontaneous activity) for Atlantic salmon at 15±2.5°C when compared with that at a stable 15°C. However, at 20±2.5°C there was a 33% increase in SMR compared with that at a stable 20°C, when measured at 20°C (Oligny-Hébert et al., 2015). As these are measures of instantaneous ṀO2 at a single defined temperature, the effects of thermal variability will be dependent on the capacity for acclimation and probably the result of changes in the shape of individual TPCs after exposure to fluctuating temperatures. A similar pattern has also been observed in toads, Bufo boreas sp. (Carey, 1979). An increase in RMR presumably would reduce energy for growth and help explain the decrease in growth rate with thermal variability. However, the growth rate of tadpoles of the striped marsh frog Limnodynastes peronii was decreased under thermally variable conditions, but there were no corresponding changes in RMR (Kern et al., 2015). Similarly, there was no change in RMR of this species with increased thermal variability; however, growth rate was decreased (Niehaus et al., 2011), suggesting that there may not have been any changes in the TPC, but the effects of Jensen's inequality remain apparent in the growth rate. 
The effects of thermal variability on the shape of individual species' TPCs probably alters instantaneous measures of RMR and other rate functions (i.e. heart rate, enzymatic rates, etc.) as they are variable across a variety of species: Panopeus herbstii and Uca pugilator (Dame and Vernberg, 1978), spiders (Geolycosa godeffroyi; Humphreys, 1975), mussels (Mytilus edulis; Widdows, 1976). Such variation further supports the need for more research on the metabolic effects of thermal variability. It will be important to understand these differences in terms of both Jensen's inequality (mean trait value over time, e.g. growth rate) and acclimation (e.g. TPC plasticity), where the latter may modify instantaneous measurements of performance such as metabolic rate. Atlantic salmon: a case study Given the effects of thermal variability on RMR in the above examples, we sought to understand how thermal variation would affect both RMR and maximal metabolic rate (MMR) and aerobic scope of wild juvenile Atlantic salmon parr from the Miramichi River in NB, Canada (see Box 1 for methods). We predicted that RMR would increase, thereby decreasing aerobic scope. We did observe a decrease in aerobic scope (Fig. 2C) in salmon acclimated to 16–21°C compared with stable 18.5°C; however, this was not due to an increase in RMR. Rather, both RMR and MMR decreased significantly (Fig. 2A,B). These metabolic changes are most likely due to alterations of the shape of the TPC (i.e. acclimation) in those fish acclimated to a thermally variable environment. As the shape of the TPC changes, instantaneous measures of ṀO2 at a single temperature would also change. To determine whether there was any effect of Jensen's inequality, one would have to measure ṀO2 at each temperature within the thermal cycle and calculate the average ṀO2 from those individual values. Experimental animals Juvenile wild Atlantic salmon (Salmo salar Linnaeus 1758) parr (26.2±1.2 g) were electrofished from the Rocky Brook tributary, which is part of the South West Miramichi river system, in New Brunswick, Canada (Fig. 1). Fish were held in 300 l tanks in a 1869 l recirculating freshwater system at 16°C with a natural photoperiod and were fed once daily ad libitum with dry pellets (Corey Nutrition Company, Fredericton, NB, Canada; 1.0 mm) in the Crabtree Aqualab at Mount Allison University for 4 weeks prior to experimentation. The Mount Allison Animal Care Committee approved all procedures following the Canadian Council on Animal Care guidelines. Experimental setup Fish were acclimated to two different temperature protocols throughout the experiment: 16–21°C (n=14) and 18.5°C (n=13). These temperatures were chosen to reflect a natural diel temperature cycle in the Miramichi River (16–21°C) (Caissie et al., 2012) or the average temperature of the diel cycle (18.5°C). The 16–21°C acclimation was set on a 12 h cycle: 16°C at 07:00 h and 21°C at 19:00 h and a ramp rate of ∼0.42°C h−1. Throughout the experiment, fish continued to be fed ad libitum once daily, but feeding was ceased 24 h prior to experimentation. Experiments were conducted on the same group of fish after acclimation to each temperature regime for at least 3 weeks. Fish were first exposed to 16–21°C for 3 weeks, then 18.5°C for 3 weeks. It is important to note that these same fish were used in a previous, separate, experiment in which they were swum in the respirometer. 
Therefore, in the current experiment, all fish had had prior experience in the swim tunnel, thereby limiting any training effect throughout the various acclimations. Experimental protocol All metabolic rate measurements and critical swimming speed tests were conducted in a swim tunnel respirometer (Loligo Systems) consisting of a 30 l measurement chamber submerged in a 120 l aerated water bath similar to that used by Tunnah et al. (2016). Testing took place between 08:00 h and 10:00 h for all trials at a common temperature (16°C) for all groups regardless of acclimation temperature. Fish acclimated to 16–21°C or 18.5°C in their holding tanks experienced a matched temperature of the respirometer during the overnight acclimation. For the fish in the 16–21°C group, the water temperature in the respirometer was heated to match that in the holding tank (21°C at 19:00 h) and set to cool overnight from 21°C to 16°C (07:00 h) to mimic the thermal cycle of the holding tank. For the fish held at 18.5°C, the water was kept at 18.5°C overnight and, at 07:00 h, the water temperature was dropped to 16°C and held at this temperature for at least 1 h prior to experimentation. Aerobic metabolic rate Individual fish were removed from their holding tank and measurements were taken of mass and length before experimentation. Fish were not anaesthetized before measurements to reduce recovery time, and this procedure was completed in under 1 min. Fish were immediately placed into the respirometer with constant flow (10 cm s−1) and left overnight (minimum of 8 h) in the dark to acclimate and return to a resting state. Routine mass-specific metabolic rate (ṀO2) was measured using intermittent closed-loop respirometry. Briefly, the decline in dissolved O2 was measured every 10 min for 300 s. Between each measurement, the internal chamber was flushed with fresh oxygenated water to maintain the dissolved O2 above 90% air saturation throughout the experiment. Routine ṀO2 was calculated from the average of the lowest 10 oxygen consumption rates in the morning prior to experimentation, taking into account the volume of the fish, volume of the chamber, temperature and barometric pressure. To account for any possible bacterial oxygen consumption in the respirometer, a blank trial was run with no fish for 8 h during each of the three trials. It was found that bacterial oxygen consumption was negligible and was therefore not used to correct the data. Maximum ṀO2 was measured immediately after the critical swimming speed test (Ucrit) (Jain and Farrell, 2003) when fish were fully exhausted. O2 depletion was continuously monitored in 30 s increments immediately after the Ucrit test for 5 min, or until the rate of O2 consumption began to slow down as the fish recovered. Maximum ṀO2 was calculated from the largest O2 consumption rate during this period. Statistical analysis All statistical analyses were performed in RStudio (v 2.0.243). Because fish were repeatedly sampled at each temperature, an error of independence arose, resulting in inflated degrees of freedom. To account for this error, the α-level was set to 0.01 and a t-test used to assess statistical differences between different temperature acclimations. 
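The Box 1 description of how routine ṀO2 was calculated (from the O2 decline, fish volume, chamber volume and so on) can be made concrete with a small sketch. The code below is a generic intermittent-flow respirometry calculation under stated assumptions (a linear O2 decline within a closed phase, effective volume equal to chamber volume minus fish volume, and a fish density of roughly 1 kg per litre); it is not the authors' actual analysis script, and the readings are invented.

```python
import numpy as np

def mo2_mg_per_kg_per_h(o2_mg_per_l, time_h, chamber_volume_l, fish_mass_kg,
                        fish_density_kg_per_l=1.0):
    """Mass-specific oxygen uptake from one closed-phase O2 decline.

    o2_mg_per_l : dissolved O2 readings (mg O2 per litre) during the closed phase
    time_h      : matching time points in hours
    The effective respirometer volume is the chamber volume minus the fish volume,
    with fish volume approximated from mass assuming a density of ~1 kg per litre.
    """
    # Slope of the O2 decline (mg O2 per litre per hour); negative while the fish respires.
    slope = np.polyfit(time_h, o2_mg_per_l, 1)[0]
    effective_volume_l = chamber_volume_l - fish_mass_kg / fish_density_kg_per_l
    return -slope * effective_volume_l / fish_mass_kg

# Hypothetical 300 s (0.083 h) closed phase for a 26 g parr in a 30 l chamber.
time_h = np.array([0.0, 0.028, 0.056, 0.083])
o2_mg_per_l = np.array([9.100, 9.095, 9.091, 9.086])
print(f"Routine MO2 ~ {mo2_mg_per_kg_per_h(o2_mg_per_l, time_h, 30.0, 0.026):.0f} mg O2 kg^-1 h^-1")
```

In practice, corrections for background (bacterial) respiration and barometric pressure would be applied as described in Box 1; they are omitted here for brevity.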
The ‘plastic floor and concrete ceilings’ concept suggests that basal (floor) performance measurements have considerably more plasticity for thermal acclimation than do maximal (ceiling) performance measurements (Sandblom et al., 2016), and this may partially account for the variability in basal responses within or between species exposed to thermal variation that we have highlighted. TPCs for basal performance may shift more readily than TPCs for maximal performance, ultimately altering the ‘scope’ for these activities. Furthermore, it has been suggested that the mean temperature of a fish's origin river may also play a role in determining SMR (Eliason et al., 2011; Farrell et al., 2008; Healy and Schulte, 2012). Thermal acclimation to specific environments across generations adds another layer of complexity when investigating wild populations (Sandblom et al., 2016) and will probably play a role in their response to thermal variability. The OCLTT hypothesis predicts that a decrease in aerobic scope will decrease performance, and this is true for many, but not all, species, at stable temperatures (Donelson et al., 2014; Grans et al., 2014; Healy and Schulte, 2012; Khan et al., 2014; Norin et al., 2014; Speers-Roesch and Norin, 2016). Considering Jensen's inequality, TPC plasticity and our preliminary data on wild salmon, there is a clear need to understand the relationship between aerobic scope and other performance indicators in thermally variable environments. In particular, measuring important life history traits such as growth, locomotion, foraging ability and reproduction will help to clarify the effects of thermal variability on overall fitness. Individual changes in fitness can potentially lead to community- or population-level effects that will significantly alter species abundance and/or distribution. Indicators of stress There have been several studies investigating whether and how natural temperature variation affects molecular markers of thermal stress (e.g. Todgham et al., 2006; Fangue et al., 2011; Narum and Campbell, 2015). However, until recently, the underlying cellular mechanisms governing differences in whole-animal metabolic rate have been relatively understudied. Again, using wild Atlantic salmon as a model, we determined that short-term warming thermal fluctuations increase expression of heat shock protein 70 (HSP70) protein (Corey et al., 2017; Tunnah et al., 2016) – an important adaptive response to thermal stress that helps maintain the structure and function of cellular proteins. The energetically costly production and breakdown of HSPs could further impact metabolism and the effects of thermal variation (Paaijmans et al., 2013). Given the expected decrease in aerobic metabolic rate in a thermally variable environment, it would be reasonable to assume that there will be stress-related shifts in metabolic pathways, potentially towards anaerobic metabolism, at least in the short term, when the temperature varies. At present, there is limited information on metabolic pathways or their metabolites during thermal variation; however, we have shown that short-term (<5 days) exposure to diel thermal cycles results in changes in metabolites (lactate, glucose, glycogen) and regulatory pathways controlling metabolism (AMPK, Raptor) (Callaghan et al., 2016; Corey et al., 2017; Tunnah et al., 2016). Notably, Callaghan et al. 
(2016) showed that short-term thermal cycling initially induced a catabolic response, as one would expect during periods of stress, but as thermal cycling continued, energetically expensive processes such as protein synthesis were reactivated and energy stores recovered. This suggests that fish may be able to remodel their metabolism in the face of thermal variability to maintain some capacity to cope with future physiological or environmental stresses. Future directions The effects of thermal variability on thermal preference and behaviour should also be evaluated. Most ectotherms use behavioural mechanisms (where possible) to regulate body temperature in order to maximize performance or avoid critical temperatures where performance declines (e.g. Martin and Huey, 2008, and references therein). However, because TPCs are typically asymmetric, increases in temperature above Topt will decrease performance more so than a decrease in temperature. Therefore, in a thermally variable environment, where animals may be frequently exposed to temperatures well above Topt, they may prefer a temperature below Topt (where performance is not fully maximized) to maximize performance over the long term (Martin and Huey, 2008). Temperature, of course, is not the only environmental factor that varies in nature. For example, oxygen, salinity and pH can change diurnally in many ecosystems (Baumann et al., 2015), yet we have very limited physiological data on the effects of diel variation of these factors (Cone, 1988; Dan et al., 2014; Yang et al., 2013). Individually, we can begin to determine how animals respond to variation of each of these factors, with the goal of developing a comprehensive framework to explain the effects of multiple covariates on animal performance. At present and to our knowledge, there is only one theoretical model that can predict the effects of simultaneous variation of multiple environmental factors on performance – however, the time scale or periodicity is not included (Koussoroplis et al., 2017). It is presently unclear how the duration of variation in environmental factors might affect biological functions, and this is clearly a rich area for future study. The limited availability of cellular- and molecular-level data is hampering the development of a convincing mechanistic hypothesis to explain the predicted and observed whole-animal changes in growth and other performance indicators with natural thermal variation. Going forward, it will be imperative to understand the effects of thermal variation at all levels of biological organization so we can reliably predict the impact of climate change on ectotherms. Given that growth is affected by thermal variation, we can reasonably assume that other life history processes such as reproduction or locomotion will be as well, translating into potentially major population-level effects. Predicting the physiological responses of ectotherms to climate change using TPCs will provide a broad-spectrum interpretation; however, we must be aware of their assumptions and limitations (see Sinclair et al., 2016, for a review). As suggested by Sinclair et al. (2016), incorporating real-world issues and (multiple) environmental conditions when evaluating an ectotherm's fitness in a given environment will inform and advance our predictive capacity regarding climate change. Experimental designs to understand the effects of thermal variability on aspects of performance will vary depending on the experimental question and species investigated. 
Studies to evaluate the effects of both TPC plasticity and Jensen's inequality, individually and in combination, will be required to better appreciate the effects of thermal variation on animal performance. One may predict that species that are less thermally plastic may be more susceptible to the effects of Jensen's inequality resulting from thermal variation and, thus, more susceptible to the effects of climate change.

Concluding remarks

Historically and understandably, physiologists often remove natural variability in lab studies, relying on ‘average’ conditions in order to more clearly understand biological processes. Using this paradigm, we have made great strides in our knowledge of the effects of temperature on physiological processes and have developed mechanistic and predictive models in an attempt to understand how animals will respond to a changing climate. We are now in a position where we can no longer ignore the natural variation in temperature (and other variables) and must seek to understand its effects on animals. We have provided examples here where growth and metabolism are different in animals exposed to thermal variation, and provide predictions based on Jensen's inequality as to the extent and directionality of these differences. We encourage the use of realistic environmental temperature profiles in experimental biology to increase our knowledge and understanding of these differences so we can create more realistic predictions about the effects of climate change on animals.

Acknowledgements

The authors would like to acknowledge and thank the anonymous reviewers for their in-depth reviews and comments that strengthened the manuscript.

Funding

This work was supported by Natural Sciences and Engineering Research Council of Canada Discovery Grants RGPIN-2018-03884 to T.J.M. and RGPIN-061770 to S.C.; a New Brunswick Innovation Foundation grant to S.C. and T.J.M.; and New Brunswick Environmental Trust Fund grant 150081 to S.C. and T.J.M.

Competing interests

The authors declare no competing or financial interests.
https://cob.silverchair.com/jeb/article/221/14/jeb164673/19627/The-importance-of-incorporating-natural-thermal?searchresult=1
A set of structured questionnaires was implemented in this research to collect data from front-line employees in banks. The questionnaire consisted of five sections. Section A emphasizes questions on work performance, Section B measures the dimensions of employees' intrinsic motivation factors, Section C focuses on employees' extrinsic motivation factors, Section D contains the psychological ownership questions, and Section E collects data on the respondents' demographic profile. The demographic profile section consisted of 7 questions on the personal information of the respondents.

Four dimensions were measured for the intrinsic motivation factors, namely Achievement, Personal Growth, Advancement and Responsibility, and each dimension consisted of six questions. These 4 dimensions were chosen as intrinsic motivating factors because they have a major impact on the work performance of bank employees. For the extrinsic motivation factors there were likewise 4 dimensions, namely Salary, Job Security, Working Condition, and Company Policy, each consisting of six questions. These 4 dimensions were chosen in this study because employees are concerned about the lack of these tangible factors in the current situation of the selected banks. Furthermore, 6 questions were constructed for each of the psychological ownership and work performance variables. In total, there were 60 questions, adapted from previous research journals as stated in Table 3.9.1.

To test the mediating effect of psychological ownership on the relationship between the two independent variables (intrinsic and extrinsic motivation factors) and work performance, a survey instrument with a five-point Likert-type continuous scale was developed. In this study, all variables were measured using closed-ended questions because respondents provide information that is easily converted to quantitative data. The researcher took into consideration that the respondents might be busy; using open-ended questions would therefore be inconvenient and time-consuming, which would reduce the response rate (Pangrikar, 2016). Thus, closed-ended questions are more convenient to use since the respondents are front-line employees who are busy handling customers. The scale of the instrument ranged from (1) strongly disagree with the statement to (5) strongly agree with the statement.

The questions for each dimension of the intrinsic and extrinsic motivation factors were adapted from research articles by several authors: Hong & Waheed (2011), Saleem, Azeem & Asif (2010), Zafar, Ishaq, Shoukat & Rizwan (2014), Nakhate (2016), Smerek & Peterson (2007), Ibrahim, Ohtsuka, Dagang & Abu Bakar (2014), Taamneh & -Gharaibeh (2014), Parvin & Kabir (2011), Senol (2011), Raziq & Maulabakhsh (2015) and Shahid, Nawab, & Wali (2013). The questions for the mediating variable, psychological ownership, were adapted from the research article by Avey, Avolio, Crossley & Luthans (2009). The questions for the work performance variable were adapted from the research articles by Chaundhary & Singh (2016), Hanaysha (2015), Koopmans, Bernaards, Hildebrandt, Buuren, Allard & Henrica (2012) and Lau, Cheung, Lam & Chu (2013).

This study has three types of variables: a dependent variable, independent variables, and a mediating variable. The dependent variable (DV) is the key factor under investigation, namely the work performance of front-line employees.
By analysing this variable it is possible to identify the variables that influence it; the interest is therefore in measuring the dependent variable as well as the other variables that influence it. The second type of variable is the independent variables (IV), which are the intrinsic and extrinsic motivating factors that influence the dependent variable; changes in the dependent variable are attributed to changes in the independent variables. The third type is the mediating variable (MV), a third variable that sits between an independent and a dependent variable in a causal sequence: the independent variable causes the mediator, and the mediator causes the dependent variable (MacKinnon, 2011). The mediating variable in this study is psychological ownership, which is assumed to transmit the influence of the independent variables on the dependent variable.
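As a concrete illustration of the mediation logic just described, the sketch below runs a classic three-regression (Baron and Kenny-style) check on simulated Likert-scale data. The data, variable names and effect sizes are all invented for illustration and are not drawn from the study described here; a modern analysis would typically also bootstrap the indirect effect.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Simulated data: motivation (IV) -> psychological ownership (mediator) -> performance (DV).
motivation = rng.normal(3.5, 0.6, n)                      # mean of Likert items (1-5 scale)
ownership = 0.6 * motivation + rng.normal(0, 0.5, n)      # mediator driven by the IV
performance = 0.5 * ownership + 0.1 * motivation + rng.normal(0, 0.5, n)

def fit(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit()

# Step 1: IV predicts DV (total effect, path c).
total = fit(performance, motivation)
# Step 2: IV predicts the mediator (path a).
path_a = fit(ownership, motivation)
# Step 3: mediator predicts DV controlling for the IV (path b, and direct effect c').
both = fit(performance, np.column_stack([motivation, ownership]))

print(f"total effect c   = {total.params[1]:.2f}")
print(f"path a           = {path_a.params[1]:.2f}")
print(f"path b           = {both.params[2]:.2f}")
print(f"direct effect c' = {both.params[1]:.2f}  (drop from c suggests mediation)")
```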
https://smallhousebooks.com/a-set-of-structured-questionnaire-implemented-in-this-research-for-collecting-data-from-the-front-line-employees-in-banks/
Registration in: International Journal of Food Sciences and Nutrition, London, v. 63, n. 3, pp. 362-367, May 2012. ISSN 0963-7486. DOI 10.3109/09637486.2011.629179.
Authors: Trevisan, Aurea Juliana Bombo; Areas, Jose Alfredo Gomes.

Abstract: With the increasing emphasis on health and well-being, nutrition aspects need to be incorporated as a dimension of product development. Thus, the production of a high-fibre content snack food from a mixture of corn and flaxseed flours was optimized by response surface methodology. The independent variables considered in this study were feed moisture, process temperature and flaxseed flour addition, as they were found to significantly impact the resultant product. These variables were studied according to a rotatable composite design matrix (-1.68, -1, 0, 1, 1.68). The response variable was the expansion ratio, since it has been highly correlated with acceptability. The optimum corn-flaxseed snack obtained presented a sevenfold increase in dietary fibre and an almost 100% increase in protein content compared with the pure corn snack, and yielded an acceptability score of 6.93. This acceptability score was similar to those observed for corn snack brands on the market, indicating the potential commercial use of this new product, which can help to increase the daily consumption of dietary fibre.
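For readers unfamiliar with response surface methodology, the sketch below shows the general idea on fabricated data: fit a second-order polynomial to responses measured at the coded design levels and locate the setting that maximizes the predicted response. The real study optimized three factors simultaneously; this one-factor version, with invented design points and responses, is only meant to illustrate the mechanics.

```python
import numpy as np

# Coded levels of one factor from a rotatable central composite design
# (e.g., flaxseed flour addition), with hypothetical expansion-ratio responses.
x = np.array([-1.68, -1.0, 0.0, 0.0, 0.0, 1.0, 1.68])
y = np.array([2.1, 2.9, 3.6, 3.5, 3.7, 3.1, 2.4])   # made-up expansion ratios

# Second-order (quadratic) response surface in one factor: y = b0 + b1*x + b2*x^2.
X = np.column_stack([np.ones_like(x), x, x**2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted quadratic (a maximum when b2 < 0).
x_opt = -b1 / (2 * b2)
y_opt = b0 + b1 * x_opt + b2 * x_opt**2
print(f"fitted optimum at coded level {x_opt:.2f}, predicted expansion ratio {y_opt:.2f}")
```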
https://repositorioslatinoamericanos.uchile.cl/handle/2250/1634463
Variable fixture gages provide a quantitative value for the part characteristic being checked. For example, if the nominal dimension for a machined part diameter is 1.150″, variable-type gages can give a numerical measurement that indicates exactly how close a measured piece is to nominal. The measured value can be compared to the specification limits, which helps in qualitative decision making about the machined characteristic. These gages provide a flexible and ergonomic means to inspect the part. The fixture provides the necessary locations so the gaging units, using electronic columns, electronic probes, or indicators, can be applied to specific features of the part.

Variable Frequency AC Chassis Drives accept 115 or 230 VAC single-phase input up to 1 hp. They have quick-connect terminal blocks and trim-pot adjustments for maximum speed, as well as diagnostic LEDs and various stopping modes.

Variable Frequency Drive Dispersers are used in high-speed dispersers with standard output torque using high-efficiency motors. The variable speed drive is built around a single belt power train and delivers maximum power to the high-speed mixing blade, with lower power consumption, precise speed adjustment, and a programmable mode for automatic operation based on time, speed, temperature and viscosity.
https://www.processregister.com/Companies/AName/Page30/aidV.htm
Topic: Methods of Research

When a researcher is ready to formulate a study, he or she chooses from several different methods. The best method depends on the research question and hypothesis. The different methods are:

1. Naturalistic Observation
Definition: participants are carefully observed in their natural setting without interference by the researchers. Researchers should be inconspicuous and do nothing to change the environment or behavior of the participants.
Examples: (a) an anthropologist unobtrusively observing wild gorillas (b) a researcher sitting in a fast food restaurant and observing the eating habits of men vs. women
This method is good if a researcher wants participants to be reacting normally, but it can be time consuming, the "sought-after" behavior may never occur, there is no control over the environment (e.g., the fast food restaurant runs out of fries), and it is difficult to know whether the researcher will be able to be completely unobtrusive.

2. Survey Method
Definition: questioning a large group of people about their attitudes, beliefs, etc. Conducting a survey requires a representative sample, or a sample that reflects all major characteristics of the population you want to represent. If you are attempting to survey "America's attitude towards exercising", then your sample cannot include only Caucasian, upper-class college students between the ages of 18 and 22 years. This does not represent America. Surveys must also use careful wording in the questions to prevent confusion or bias.
Examples: (a) survey of recently retired citizens on their major concerns about life without work (b) survey of first-time pregnant women on their beliefs about their efficacy as mothers
This method is very quick and efficient; however, it is sometimes difficult to gain in-depth knowledge from a survey, and there is no guarantee that the person taking the survey is being open and honest.

3. Case Study
Definition: obtaining detailed information about an individual to develop general principles about behavior. It is sometimes very helpful to study one person (or a very small group of people) in great depth to learn as much information as possible. This method is particularly useful in studying rare disorders or circumstances.
Examples: (a) studying the life history of a man who acquired schizophrenia at the age of 20 (b) following one child from conception to adulthood to examine this specific lifespan development
Case studies require a lot of time, effort, and attention to detail. Yet, they reveal more about a particular subject than any other research method. Generalizing the findings to other people or groups is usually difficult.

4. Correlational Design
Definition: measuring the relation between two variables. Sometimes correlation studies are seen as a separate research method while other times they are subsumed under another category. Correlations are stated as either positive or negative. Positive correlations mean that as the value of one variable goes up, the value of the other variable goes up (or, vice versa: as one goes down, the other goes down). Negative correlations mean that as the value of one variable goes up, the value of the other variable goes down. See the examples below for further clarification.
Examples: (a) there exists a positive correlation between intelligence and grade point average, such that the more intelligent a person is, the higher their grade point average (b) there exists a negative correlation between eating junk food and overall health, such that the more junk food a person ingests, the less healthy they are
CORRELATION DOES NOT MEAN CAUSATION. The most a researcher can state about 2 variables that correlate is that they relate to one another. There is no test of cause-effect. In the second example above, it might be tempting to assume that consumption of junk food causes a decline in health. However, it is conceivable that the less healthy one is and feels, the more likely it is they'll give up on trying to be healthy and eat junk food. We do not know the direction of influence (eating junk food leads to poor health, or poor health leads to eating junk food) and cannot know using a correlation alone. This is one limitation of this method.
Correlations can be deceiving. Finding a significant correlation between 2 variables does not guarantee that they are the only 2 variables involved. There may be an intervening variable that wasn't measured. Consider the first example above: perhaps the more intelligent a person is, the more likely they are to study for tests, which then translates into a higher grade point average. "Studying for tests" is a potential intervening variable that was not examined.

5. Experimental Method
Definition: a study in which the investigator manipulates (at least) one variable while measuring (at least) one other variable
This method is often used in psychological research and can potentially lead to answering cause-effect questions.
Examples: (a) Testing the effects of Ritalin medication on the attention spans of children with ADHD (b) Examining the reliability of eyewitness testimony in young children
Participants in an experiment are usually randomly assigned to different groups. The group that receives the independent variable is called the experimental group, and the group that is treated in the same manner as the experimental group but does not receive the independent variable is called the control group. Sometimes a preexisting characteristic already exists in the participants, such as sex, age, clinical diagnosis, etc. In this case, there is no random assignment and the type of research is referred to as differential research.
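The warning that correlation does not establish causation, and that an unmeasured intervening variable may be at work, can be demonstrated with simulated data. The scenario and numbers below are invented for illustration: intelligence influences GPA only through hours studied, yet intelligence and GPA still correlate.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 500

# Hypothetical chain: intelligence influences GPA only through hours studied.
intelligence = rng.normal(100, 15, n)
hours_studied = 0.08 * intelligence + rng.normal(0, 2, n)   # possible intervening variable
gpa = 0.25 * hours_studied + rng.normal(0, 0.4, n)          # GPA depends only on studying

r_iq_gpa, _ = pearsonr(intelligence, gpa)
r_study_gpa, _ = pearsonr(hours_studied, gpa)

# Intelligence and GPA correlate positively even though, in this simulation,
# intelligence never acts on GPA directly -- the link runs entirely through studying.
print(f"r(intelligence, GPA)  = {r_iq_gpa:.2f}")
print(f"r(hours studied, GPA) = {r_study_gpa:.2f}")
```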
http://www.asmrm2013.com/PsychologyResearch/types-of-research-methods-in-psychology
The Burden of Binge and Heavy Drinking on the Brain: Effects on Adolescent and Young Adult Neural Structure and Function

Introduction: Adolescence and young adulthood are periods of continued biological and psychosocial maturation. Thus, there may be deleterious effects of consuming large quantities of alcohol on neural development and associated cognition during this time. The purpose of this mini review is to highlight neuroimaging research that has specifically examined the effects of binge and heavy drinking on adolescent and young adult brain structure and function. Methods: We review cross-sectional and longitudinal studies of young binge and heavy drinkers that have examined brain structure (e.g., gray and white matter volume, cortical thickness, white matter microstructure) and investigated brain response using functional magnetic resonance imaging (fMRI). Results: Binge and heavy-drinking adolescents and young adults have systematically thinner and lower volume in prefrontal cortex and cerebellar regions, and attenuated white matter development. They also show elevated brain activity in fronto-parietal regions during working memory, verbal learning, and inhibitory control tasks. In response to alcohol cues, relative to controls or light-drinking individuals, binge and heavy drinkers show increased neural response mainly in mesocorticolimbic regions, including the striatum, anterior cingulate cortex (ACC), hippocampus, and amygdala. Mixed findings are present in risky decision-making tasks, which could be due to large variation in task design and analysis. Conclusions: These findings suggest altered neural structure and activity in binge and heavy-drinking youth may be related to the neurotoxic effects of consuming alcohol in large quantities during a highly plastic neurodevelopmental period, which could result in neural reorganization, and increased risk for developing an alcohol use disorder (AUD).

Acute impact of caffeinated alcoholic beverages on cognition: A systematic review

INTRODUCTION: Energy drinks are popular beverages that are supposed to counteract sleepiness, increase energy, maintain alertness and reduce symptoms of hangover. Cognitive enhancement seems to be related to many compounds such as caffeine, taurine and vitamins. Currently, users mostly combine the psychostimulant effects of energy drinks to counteract the sedative effects of alcohol. However, recent literature suggests that this combination leads users to feel less intoxicated while still being impaired. The goal of the present article is to review cognitive impact and subjective awareness in cases of caffeinated alcoholic beverage (CAB) intoxication. METHOD: The PubMed (January 1960 to March 2016) database was searched using the following terms: cognitive impairments, alcohol, energy drinks; cognition, alcohol, caffeine. RESULTS: 99 papers were found, but only 12 randomized controlled studies which explored cognitive disorders and subjective awareness associated with acute CAB or AED (alcohol associated with energy drinks) intoxication were included. DISCUSSION: The present literature review confirmed that energy drinks might counteract some cognitive deficits and adverse effects of alcohol, i.e. dry mouth, fatigue, headache, weakness, and perception of intoxication due to alcohol alone. This effect depends on the limb of the blood alcohol curve but disappears when the complexity of the task increases, for example when driving.
Moreover, studies clearly showed that CAB/AEDs increase impulsivity, which leads to overconsumption of alcohol and enhanced motivation to drink compared with alcohol alone, potentiating the risk of developing addictive behaviors. This is a huge problem in adolescents with high impulsivity and immature decision-making processes. CONCLUSION: Although energy drinks counteract some cognitive deficits due to alcohol alone, their association promotes the risk of developing alcohol addiction. As a consequence, it is necessary to better understand the neurobiological mechanisms underlying these interactions in order to better prevent the development of alcohol dependence.

What do we know about the effects of exposure to 'Low alcohol' and equivalent product labelling on the amounts of alcohol, food and tobacco people select and consume? A systematic review

BACKGROUND: Explicit labelling of lower strength alcohol products could reduce alcohol consumption by attracting more people to buy and drink such products instead of higher strength ones. Alternatively, it may lead to more consumption due to a 'self-licensing' mechanism. Equivalent labelling of food or tobacco (for example "Low fat" or "Low tar") could influence consumption of those products by similar mechanisms. This systematic review examined the effects of 'Low alcohol' and equivalent labelling of alcohol, food and tobacco products on selection, consumption, and perceptions of products among adults. METHODS: A systematic review was conducted based on Cochrane methods. Electronic and snowball searches identified 26 eligible studies. Evidence from 12 randomised controlled trials (all on food) was assessed for risk of bias, synthesised using random effects meta-analysis, and interpreted in conjunction with evidence from 14 non-randomised studies (one on alcohol, seven on food and six on tobacco). Outcomes assessed were: quantities of the product (i) selected or (ii) consumed (primary outcomes - behaviours), (iii) intentions to select or consume the product, (iv) beliefs associated with its consumption, (v) product appeal, and (vi) understanding of the label (secondary outcomes - cognitions). RESULTS: Evidence for impacts on the primary outcomes (i.e. amounts selected or consumed) was overall of very low quality, showing mixed effects, likely to vary by specific label descriptors, products and population characteristics. Overall very low quality evidence suggested that exposure to 'Low alcohol' and equivalent labelling on alcohol, food and tobacco products can shift consumer perceptions of products, with the potential to 'self-licence' excess consumption. CONCLUSIONS: Considerable uncertainty remains about the effects of labels denoting low alcohol, and equivalent labels, on alcohol, food and tobacco selection and consumption. Independent, high-quality studies are urgently needed to inform policies on labelling regulations.

Alcohol consumption and cognitive impairment among Korean older adults: does gender matter?

BACKGROUND: This study investigated gender differences in the relationship between alcohol consumption and cognitive impairment among older adults in South Korea. METHODS: Using data from the Korean Longitudinal Study of Ageing, 2,471 females and 1,657 males were analyzed separately. Cognitive impairment was measured based on the Korean version of the Mini-Mental State Exam score. Logistic regression was conducted to examine the relationship between alcohol consumption and cognitive impairment among Korean older adults.
RESULTS: Multivariate analysis showed that, compared to moderate drinkers, past drinkers were more likely to be cognitively impaired among women, while heavy drinkers were more likely to be cognitively impaired among men. CONCLUSIONS: Findings suggest that the relationship between alcohol consumption and cognition varies by gender. Clinicians and service providers should consider gender differences when developing strategies for the prevention and treatment of alcohol-related cognitive decline among older adults.
http://wineinformationcouncil.eu/index.php?option=com_k2&view=itemlist&task=tag&tag=Cognition&Itemid=594
Explain the two major types of bias. Identify a peer-reviewed epidemiology article that discusses potential issues with bias as a limitation and discuss what could have been done to minimize the bias (exclude articles that combine multiple studies, such as meta-analysis and systematic review articles). What are the implications of making inferences based on data with bias? Include a link to the article in your reference.

Bias

Before accepting a study's findings as valid, it is always necessary to consider different sources of error that could provide an alternative explanation for those findings. Bias is one such systematic error. It is commonly identified in epidemiologic studies and results in incorrect estimates of the association between exposures and outcomes. The two types of bias most commonly known to have a significant effect on results are selection bias and information bias. This paper explains selection and information bias, their implications, and the measures that can be taken to minimize them.

Selection bias is commonly reported when the study sample does not represent the target population. This means that the selection process lacks proper randomization, resulting in inaccurate findings and conclusions. Delgado-Rodriguez and Llorca (2004) also argue that, in many cases, competing risks arise when there is a mutually exclusive relationship between the outcomes in the identified populations. An inaccurate sampling frame also introduces such bias. For instance, in a cross-sectional study, the selected sample may not represent the general population, leading to selection bias.

The other major type is information bias, which commonly occurs during data collection. Delgado-Rodriguez and Llorca (2004) describe misclassification, which arises when exposures or outcomes are measured or classified imperfectly, and ecological fallacy, which occurs when inferences about individuals are drawn from group-level analyses. Such errors lead to inconsistent information.

A possible remedy for both selection and information bias is careful study design combined with proper interpretation of the collected data in light of the available evidence. Such measures help ensure that conclusions are drawn only from data that can support them.

In conclusion, the two major types of bias, selection bias and information bias, are caused by improper selection of the target population and by flawed data collection methods, respectively. Their main effect is that they can lead to inaccurate conclusions in research studies. The main remedies are careful interpretation of findings and more rigorous data collection strategies.
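To make the selection-bias mechanism described above concrete, here is a minimal, purely illustrative Python simulation. It is not taken from any cited article; the population size, inclusion probabilities, and variable names are all invented for the example. It builds a population in which exposure has no true effect on the outcome, then compares the estimate from a simple random sample with the estimate from a biased sampling frame that over-represents exposed people who developed the outcome.

```python
import random

random.seed(42)

# Hypothetical population in which exposure has NO true effect on the outcome,
# so an unbiased study design should estimate a risk difference near zero.
population = []
for _ in range(100_000):
    exposed = random.random() < 0.4
    outcome = random.random() < 0.10   # 10% baseline risk regardless of exposure
    # Flawed sampling frame: exposed people who developed the outcome are much
    # more likely to end up in the frame (e.g. recruited through a clinic).
    in_frame = random.random() < (0.9 if (exposed and outcome) else 0.5)
    population.append((exposed, outcome, in_frame))

def risk_difference(sample):
    """Risk among the exposed minus risk among the unexposed."""
    exposed_outcomes = [o for e, o, _ in sample if e]
    unexposed_outcomes = [o for e, o, _ in sample if not e]
    return (sum(exposed_outcomes) / len(exposed_outcomes)
            - sum(unexposed_outcomes) / len(unexposed_outcomes))

# Unbiased design: a simple random sample of the whole population.
random_sample = random.sample(population, 10_000)

# Biased design: only people captured by the flawed sampling frame.
biased_sample = [p for p in population if p[2]][:10_000]

print(f"Risk difference, random sampling: {risk_difference(random_sample):+.3f}")
print(f"Risk difference, biased frame:    {risk_difference(biased_sample):+.3f}")
```

Run as-is, the random sample yields a risk difference near zero, while the biased frame suggests a spurious positive association; this is the kind of distortion that proper randomization of the selection process is meant to prevent.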
https://nursingwritingservice.com/epidemiology-article/
The Indiana Arts Commission has adopted community engagement as a principle to support our values and funding imperatives and address structural inequalities by providing access to programs, services, and resources. So what exactly is Community Engagement? Community Engagement Defined The IAC defines community engagement as the activity of consistently cultivating two-way community relationships – beyond conventional programmatic partnerships. Community partnerships are rooted in programs, activities, and marketing, whereas community engagement is rooted in people and requires long-term commitment. Why Community Engagement? - To build a better community, not just a better organization - To understand what the community cares about - To build better programs and a more loyal audience - To share control with community groups and members - To produce things that are meaningful to the community - To challenge organizations and the community - To propel change in organizations and the community - To build a more sustainable, long-term organization What can be done as an Individual Practitioner? - Identify resources and allies within your own organization and/or your community. - Seek support from colleagues who are in the process of creating change within their institutions. - Be committed to a lifelong process of learning and change. - Be available to your peers as a resource. - Conduct data analysis on your own portfolio to identify where dollars are going and opportunities for change. - Use inclusive and welcoming language in your external communications. - Seek research and data about equity to present to leadership. - Learn the history of local ALAANA communities and become familiar with leaders. What can be done in your Institution? - Provide opportunities for board and staff to learn about or attend trainings on implicit biases and historical perceptions of disability. - Assure that an equity lens informs all decision-making, programs, policies, and procedures. - Establish an equity advisory committee or working group of colleagues that will inform programming direction and guide institutional change. - Use inclusive and welcoming language in your external communications. - Advocate research and data collection that accurately represents the demographics served by and serving in arts organizations and foundations. - Intentionally consider, select, and support board and staff who value equity. - Intentionally consider, select, and support diverse candidates for board and staff. - Collaborate with other organizations working in IDEA to provide resources and share best practices to create equity. Resources - Communicating and Interacting with People with Disabilities - Community Engagement Ladder - Defining Community Engagement for Organizations - Defining Community Engagement for Individual Artists - Questionnaire to aid in understanding your community Contact Information This work is constant and always evolving. If you have any thoughts, comments, resources, or suggestions you’d like to share with us, please email them to [email protected].
https://www.in.gov/arts/programs-and-services/resources/community-engagement/
The methodology is possibly the most agonising and laborious part of a dissertation. The goal of a methodology section is to explain the broad philosophical approach that underpins the research methods you chose for your study. The methodology chapter must address whether qualitative or quantitative data collection techniques should be used, or whether both should be combined. If, like most students, you're trying to figure out how to write an outstanding and detailed dissertation methodology chapter, then reading this post will put you miles ahead. Here, we will walk you through some strategies and techniques that will help you craft a strong dissertation methodology.

The methodology section typically follows your literature review. It is important to recap the central questions of the dissertation in order to regain focus and maintain clarity. Then, define and explain the problems you want to solve. It is critical to include your dissertation's research approach in the methodology, mainly because the approach you choose influences the data collection and analysis techniques you use. Provide a brief overview of the quantitative, qualitative, and mixed-method research techniques used in your dissertation. It is vital to be informative and descriptive while crafting the methodology; the more detail you provide, the more complete and convincing the chapter will be. The methodology section of any dissertation must document each stage, from the sampling process to the data analysis. If you want to write an outstanding dissertation, you should also be aware of the paper's strengths and weaknesses, and the methodology section must inform the reader about these limitations. Never let a lack of resources hold you back; discuss all types of limitations, including those caused by a deliberate choice or human error. Finally, consider ethics and meet the ethical standards that are expected of you.
https://www.nomura.ca/board/board_topic/8774275/5719131.htm
Only users themselves can intimately appreciate their own needs, and user experience is the only field that considers the user’s perspective at every stage of a project. I often reflect on how privileged I am to be in the field of user experience, because we always have the trump card: the user. Let me explain. As UX professionals, we generally have an abundant breadth of experience across different industries and businesses. Our clients, on the other hand, have great depth of knowledge in their own domain. However, only users themselves can intimately appreciate their own needs, and user experience is the only field that considers the user’s perspective at every stage of a project. Why is this such an awesome novelty? By combining multiple perspectives, social science researchers hope to overcome the limitations and intrinsic biases of any one perspective and thereby obtain confirmation of their findings. Similarly, within the context of a UX project, we can triangulate the perspectives of the UX professional, the client, and the user, as shown in Figure 1, to filter out the limitations and intrinsic biases of any single perspective and thereby obtain objectively true project outcomes. The purpose of the definition phase is to determine the vision and scope for the project. We do this by answering the questions: what are we trying to achieve, and how far can we go in trying to achieve it? definition phase—The purpose of the definition phase is to determine the vision and scope for the project. We do this by answering the questions: what are we trying to achieve, and how far can we go in trying to achieve it? This usually involves articulating the project’s mission statement, then deriving a set of project requirements that we anticipate will collectively achieve the mission. design phase—The purpose of the design phase is to imagine what a solution that meets the requirements and thereby achieves the vision might look like. In practice, we typically accomplish this work by creating sketches, wireframes, or prototypes that depict the expected functionality and interactions of the imagined solution. delivery phase—Finally, the purpose of the delivery phase is to produce the form and function of the working product. The product must implement the needed functionality and interactions to actually meet the requirements and thereby achieve the mission. For digital projects, the deliverables could be anything from a series of mockups to fully developed code and databases. Figure 2 provides an overview of the three phases that are typically part of a UX project: definition, design, and delivery. In the early days of the UX industry, it was often necessary to explain to our clients that, for every dollar you invest in solving challenges during the design phase, which is relatively flat and fluid, you would likely save ten dollars—the cost of rectifying the resulting issues during the project’s delivery phase, which is more layered and rigid. Arguably, we could apply that same type of reasoning to project requirements—thus, for every dollar you invest in filtering out any subjective limitations and biases from requirements, which are still in the realm of ideas, you would likely save ten dollars—the cost of solving the challenges that would have resulted—during the design phase and, in turn, one hundred dollars—the cost of rectifying the resulting issues during the delivery phase. 
It’s like trying to influence the growth of a tree when it’s a small sapling, like that shown in Figure 3, versus a two-foot tall, flexible young tree versus a fully grown, twenty-foot tall tree with a massive hardwood trunk! Applying the triangulation principle can be equally useful at each project phase: helping to filter requirements limitations and biases, solving design problems, and rectifying delivery issues. In this article, I’ll focus on the application of this principle during the definition phase of a project, in the hope that making improvements at this stage would naturally make the design and delivery phases go more smoothly. Clients … might bring in a business analyst or consultant to help them further refine the requirements from two perspectives. However, according to the triangulation principle, this method of requirements validation still lacks a confirmation…. The initial spark for a UX project can come from a number of sources—for example, a client stakeholder who has had a big idea or market research that suggests there is a customer need that is not being fulfilled. Clients often simply extrapolate project requirements from their initial idea, thereby incorporating only one perspective—the client’s—in the requirements. Clients who have come to value a second, divergent perspective might bring in a business analyst or consultant to help them further refine the requirements from two perspectives. However, according to the triangulation principle, this method of requirements validation still lacks a confirmation, which you can achieve only when you solicit and reconcile three or more perspectives. The risk of not confirming project requirements by considering a third perspective is that some subjective limitations or intrinsic biases of either the client or the consultant could entangle the requirements, which would then adversely affect the design and possibly delivery phases in increasingly significant ways, as I described earlier. understandable—A requirement is understandable if it is clear, concise, and unambiguous—that is, it states what the solution must do in a simple, straightforward way, with just enough detail to leave no room for misinterpretation. verifiable—A requirement is verifiable only if you can ultimately inspect, test, or demonstrate the final solution to confirm that it meets the requirement. achievable—You can consider a requirement attainable only if, after investigation, you believe that your organization can build it within the budget, timeline, and technical limitations of the project. Once you’ve gathered and compared two or more perspectives on what a project’s requirements should be, each identified requirement should fall into one of three categories: conflicting, disjointed, or convergent. conflicting requirements—These requirements are seemingly mutually exclusive. Therefore, it is not possible to meet all of them simultaneously. A UX professional might look at two requirements side by side and reflect, If we design for this one, we certainly won’t be able to design for that one. A client may say “white,” while users say “black,” when talking about the very same functionality. Here is an example of conflicting requirements: a client might want to receive payment from users before confirming their bookings, while users might want confirmation of their bookings before they are willing to pay anything for them. 
disjointed requirements—These requirements might seem to be unrelated on the surface, but if we were to try to address them independently, there would be an uncomfortable competition between them. A UX professional would likely look at two such requirements side by side and reflect, If we design for this one and for that one, how can we reconcile them? A client may say “grey over here,” while users say “grey over there,” when talking about the very same functionality. Here is an example of disjointed requirements: a client might want users to type their available time slots, while users might want to see available time slots from which they can select the time slot they want. convergent requirements—These requirements ask for the same thing from the same functionality. A UX professional would instantly understand how to design a solution that addresses both perspectives simultaneously. Here both the client and users concur and say “grey over here.” Here is an example of convergent requirements: a client might want to track the progress of an order, while users want to have status updates on their order. According to the triangulation principle, the key to distilling objectively true requirements is to bring in a third perspective to systematically work through any points that are either in conflict or disjointed to find ways of bringing them into convergence. According to the triangulation principle, the key to distilling objectively true requirements is to bring in a third perspective to systematically work through any points that are either in conflict or disjointed to find ways of bringing them into convergence. This process provides a means by which to filter out any subjective limitations and biases. That’s it for the theory. Now, let’s take a look at a real-life case study in detail. The client, however, insisted that they didn’t want to continue having to wait for payments from their customers, so wanted to take the opportunity of building this new site to change their payment terms. They believed that, if they asked for payment within 48-hours, they could hope to actually get paid within 7 days, which is what they truly wanted. During a follow-up workshop to review the research data with the client, Stamford contemplated the three-way tension around the issue of payment terms: The UX professional knew that 48 hours was impractical; and 30 days, quite common across industries. The customers thought 7 days was asking too much, but 14 days would be reasonable. The client actually wanted to be paid in 7 days. In the end, they came up with an awesome idea for reconciling these divergent points of view: Make 14 days the official payment terms, but give a discount to customers who pay within 7 days. Miraculously, they found a solution that satisfied all three perspectives equally. Once the client had agreed that they wanted to run with this proposal, the context within which Stamford would need to design the payment portal became clear and tangible. Now that you’ve seen a real-life case study, perhaps you’re wondering where to begin applying the triangulation principle to your own projects. First and foremost, when applying the triangulation principle to requirements, you must create an opportunity to validate the requirements. A client may already have drafted requirements before bringing a UX professional on board. A client may not yet have defined requirements when a UX professional joins the project. 
In the case where a client has already drafted the requirements, the UX professional might tell the project owner that their first exercise when starting the engagement would be to validate the requirements. This would consist of reviewing the draft requirements and analyzing them for understandability, verifiability, and achievability, then creating a list of suspected requirements impurities. Next, a meeting or workshop with the client would take place to work through the list of issues—hopefully, reconciling any conflicts and disjoints by bringing the UX professional’s perspective to bear. But also being prepared to agree to test the solutions with users—thus adding the third perspective—to resolve any particular points that are not easily reconcilable. The UX professional would then engage users through workshops, focus groups, or interviews to determine whether the users considered the refined requirements to be useful, valuable, and complete. If there were any conflicts or disjoints between users’ needs and the refined requirements, the UX professional would need to reconcile them through another session with the client, then possibly engage with users again. Thus the requirements would go through as many rounds of refinement and validation as necessary for the size and complexity of the project, ideally until the process had filtered out all suspected impurities. In the case where the client had not yet drafted the requirements, the UX professional would ask to be involved in requirements gathering and analysis to ensure that the project considers the users’ perspective early on—perhaps using the analogy of the development of a tree to explain the request. The UX professional would then proceed with conducting the usual user research activities—from focus groups to contextual inquiries, with either the client stakeholders and/or users—to identify the first perspective on what the requirements for the project should be. The UX professional would then follow the process I outlined earlier: analyze for impurities, hold a workshop to filter out as many impurities as possible, then engage the third perspective of the users to reconcile any remaining conflicts or disjoints. In either case, the final set of project requirements should ideally be understandable, verifiable, and achievable, and three different sets of people should have agreed upon them. Thus, all parties could be confident that the UX professional would have the requirements necessary to design an objectively true solution. This process naturally filters out any subjective limitations and biases, but what if there’s a stakeholder who insists? There are two distinct sources for requirements impurities: … subjective limitations in a person’s perspective … [and] a person’s intrinsic biases—or, in other words, ulterior motives such as fear, greed, and the like. Reflecting again on triangulation as applied to social science research—“by combining multiple perspectives, social science researchers hope to overcome the limitations or intrinsic biases of any one perspective, and thereby obtain a confirmation of findings”—it seems that there are two distinct sources for requirements impurities. First, there are subjective limitations in a person’s perspective. We can easily understand this in the case of either highly specialized or inexperienced people, but this really applies to anyone. Second, there are a person’s intrinsic biases—or, in other words, ulterior motives such as fear, greed, and the like. 
Appreciating this distinction helps greatly when approaching the reconciliation of divergent perspectives. In the case where a requirements impurity was introduced by a simple subjective limitation in a person's perspective, the advice is to be patient and humble, trusting that the process will do the work for you. Even if a client disagrees with you, how can the client argue that they know better what users need than the users themselves, or users argue that they know better what the client should offer than the client themselves? It's just a matter of time before everything comes out in the wash, so to speak. In the case where a requirements impurity was introduced because of someone's ulterior motive—which, by definition, a person does not want to reveal—sometimes people will actually dig in their heels. We usually discover these types of impurities when people do not respond well to the rational argument that they couldn't possibly know everything the other party knows. So, even if people hold their ground in the face of differing opinions and simple logic, don't give up! You can still reconcile the perspectives—and avoid being charged with an impossible design task. But doing this requires something greater than you, the client, and the users. It requires purpose. A UX professional's challenge at this point comes down to soft skills. He needs to listen attentively and try to understand where people are truly coming from—go beneath the surface and find out what they really want—then choose his own words carefully to gently and consistently refocus an insistent person on the project's vision. Eventually, this approach will guide people to let go of their ulterior motives, because the pursuit of a greater purpose has the ability to lift individuals beyond their own personal, limited perspective into a place of true collaboration. Thus, through a combination of applying the triangulation principle and using soft skills, we can ensure that we emerge from the project definition phase with clear, objectively true project requirements from which to begin the exciting journey of delivering great user experiences.

- Create an opportunity to validate the requirements.
- Gather requirements from one perspective—unless they've already been drafted.
- Analyze the requirements for understandability, verifiability, and achievability.
- Create a list of suspected subjective limitations and biases.
- Hold a workshop to reconcile conflicts and disjoints by adding your own perspective.
- Test the refined requirements by applying a third perspective—that of users.
- Hold a workshop to reconcile conflicts and disjoints through that third perspective.
- Be patient and humble, listen, and gently and consistently refocus people on the vision.
- Repeat this process as necessary, until the requirements are truly objective.

Bailey-Beckett, Sharon, and Gayle Turner. "Triangulation: How and Why Triangulated Research Can Help Grow Market Share and Profitability." Beckett Advisors Inc., May 2001. Jick, Todd D. "Mixing Qualitative and Quantitative Methods: Triangulation in Action." Administrative Science Quarterly, Johnson Graduate School of Management, Cornell University, December 1979. Jakob, Alexander.
"On the Triangulation of Quantitative and Qualitative Data in Typological Social Research." Forum: Qualitative Social Research, February 2001. Olsen, Wendy. "Triangulation in Social Research: Qualitative and Quantitative Methods Can Really Be Mixed." From Development in Sociology. Manchester: Causeway Press, 2004. Egeland, Brad. "Gathering Good Requirements." SmartSheet, April 2012.
https://www.uxmatters.com/mt/archives/2012/05/triangulation-navigating-the-stormy-seas-of-project-requirements.php
This is a summary of the 10th working paper of the On Think Tanks Working Paper Series: How can think tanks support the production and use of gender data?

Gender data: what we measure, what we overlook

Better data on women and girls' status can guide policies, leverage financial resources, and inform global priorities. In most policy circles, the notion of gender data is directly associated with the availability of sex-disaggregated data. While sex-disaggregated data has value and provides insight into the differentiated challenges that women and men face, not all data, nor data by itself, can accurately portray the complexity of gender inequality in different contexts. Several questions need to be addressed to ensure that gender data offers the right perspectives to fight gender inequality:
- Is the data used to build indicators and measurement tools fit for purpose?
- Are gender biases and damaging preconceptions about women and girls' roles and needs shaping data collection and analysis?
- What other aspects of women's lives are not being accounted for through existing gender data?
While academic researchers and feminist scholars have taken up these questions, there are fewer discussions on these issues in policy circles and among data creators and users. This opens the debate on bridging this gap to ensure that gender data is fit for purpose and is not reproducing power imbalances and gender biases. Think tanks, as knowledge generators, brokers and policy influencers, may be particularly well positioned to bridge this gap. This paper reviews debates on gender data from a feminist perspective to identify potential limitations of the data used for policymaking and identify ways to strengthen it. Using the concept of the data value chain, the paper brings attention to limitations that emerge when gender data is gathered, interpreted, and used. It also analyses the potential role of think tanks in each of these phases.

Gender data value chain

For gender data to be transformative it needs to be mainstreamed into every policy area and every stage of the policy cycle. To do so, data goes through a production and use process, through which it gains or loses value. The data value chain describes connections between steps that transform low-value inputs into high-value outputs. Open Data Watch & Data 2X (2018) propose four stages along the data value chain: data collection, publication, analysis and uptake, and impact. Seeing gender data as part of a value chain promotes a focus on issues that prevent data from addressing gender-related inequalities. If researchers, think tanks, and other policy actors increasingly perceive data as part of a value chain, there will be more space to challenge preconceptions about the neutrality of data at different stages, and to include feminist approaches in its production and use.

How can think tanks contribute to the generation and use of better gender data?

Data is not neutral. Power structures shape the way data is gathered, interpreted, and used. A feminist approach to evidence urges researchers and thinktankers to consider the legitimacy of data in the eyes of policy-makers and the public, and the assumptions and representations behind gender data. The concept of the data value chain is useful because it can be used to map critical stages where data used in policy and decision-making can become more gender sensitive.
The roles for think tanks to contribute to the production and use of better data by policy-makers include: - Creating spaces for assessment and contestation of gender data and identifying strategies to improve its quality. Think tanks’ ability to interact with different stakeholders is extremely valuable to ensure diversity of views and perspectives in these spaces. This includes consultation with prospective users of gender data. Think tanks can bring those discussions to the policy-making arena and contribute to bringing them into practice. - Bringing attention to the gender data that is being collected and the data that is missing. If think tanks are actively involved in data production, either as producers of data or in an oversight function, they can raise awareness about the limitations of existing data and the barriers that prevent other data from being produced. - Diversifying and validating other sources of gender data. Think tanks can promote the use of qualitative and alternative sources of gender data. This can contribute to reducing data mistrust among users who see quantitative-data-only approaches as reductionist and biased. - Connecting users to data. Think tanks can build bridges between users and data by sharing and monitoring good practices and lessons about data publication, dissemination, and uptake. This is a crucial contribution for data producers seeking to reach their target audiences efficiently. - Funding or supporting funding for research and collection of data with a gender perspective by putting this issue on donors’ agendas. Some of these roles and contributions overlap, and they are intrinsically connected to the role of think tanks as knowledge brokers, providers of evidence, and facilitators of interactions and dialogues between various actors. Think tanks need to question the data they use and engage in processes to generate, collect, and share data from diverse sources.
https://onthinktanks.org/articles/how-can-think-tanks-support-the-production-and-use-of-gender-data/
Learn how midsize companies seeking to outmaneuver larger competitors can use key tenets of behavioral economics to their advantage. Navigating an era of mobile technology, cybersecurity, and “big data” can overwhelm any organization. Midsize companies in the United States, which produce between $10 million to $1 billion in annual revenues, can be particularly challenged by these trends.1 How can they compete against larger rivals that can write off several billion-dollar wrong turns and live to tell the tale?2 For almost any midsize company, the sheer size and resources that these larger competitors have at their disposal can be intimidating. As a number of business publications will tell you, one way these organizations can survive and flourish against larger competition is through agility.6 Rather than try to outspend the competition, private midsize firms can take advantage of their ability to move more quickly than larger, publicly held organizations often can. For this reason, they are often better positioned to adjust their strategies, enter new markets, and quickly modify internal policies to keep up with the rapidly evolving business environment. Behavioral economics is the examination of how psychological, social, and emotional factors often conflict with and override economic incentives when individuals or groups make decisions. This article is part of a series that examines the influence and consequences of behavioral principles on the choices people make related to their work. Collectively, these articles, interviews, and reports illustrate how understanding biases and cognitive limitations should be a first step to developing countermeasures that can limit their impact on an organization. For more information visit http://dupress.com/collection/behavioral-insights/. The promise of agility will likely not be realized, however, if leaders fail to explore and fully understand the connection between people and performance; focusing on how people make decisions and what motivates them to work most effectively (and, conversely, what doesn’t) can be critical. In any organization, no matter where an employee sits on the org chart, he or she is typically subject to the same human biases that influence decision making. Decades of research in the field of behavioral science suggests that these biases are universal and deeply ingrained in all of us. (See the sidebar, “A Deloitte series on behavioral economics and management” for more details.) As behavioral scientist Dan Ariely coined it, humans are “predictably irrational.”7 This may explain why we fear change, get overwhelmed by too many decisions, and prefer short-term, small payoffs over long-term, larger rewards. Change management. Why is change so difficult? First, people tend to naturally gravitate to the status quo. It’s familiar, there’s a comfort level associated with it, and so it feels “right.” Second, change can challenge people’s beliefs about their core strengths. Consider that many knowledge workers have spent years honing a particular skill or set of skills. When new technologies create opportunities or a large-scale change initiative is implemented and employees are asked to change course and do their jobs differently, it can be a challenge. Cybersecurity. Implementing new technology doesn’t stop at the change-management process. It also exposes organizations to greater cyber risk. 
Though cybersecurity may seem like a technology issue, most cyber breaches derive from human error, such as an employee falling victim to a phishing scheme.9 When managing any number of responsibilities and distractions, it can be easy for anyone to click on the wrong link or respond to a fake email. Talent management. Making the right hire can be a struggle for organizational leaders. We now know that our biases can often get in the way of making the “right” hiring decision. For instance, one study showed that whether we have 10 seconds or one hour with a candidate, we often come to the same conclusions.10 And in a tight market for talent, it’s important to reduce the impact biases can have and make the right talent decisions. Understanding and addressing biases such as these can help midsize organizations realize greater agility when competing in today’s rapidly evolving markets. Though biases can manifest both internally within organizations and externally among their customers (for example, in matters of pricing and product choice), this article specifically focuses on the internal operational issues relevant to private, midsize firms. (See the note, “Organizational biases are everywhere” for more background.) The reason: Leaders of mid-market firms are well aware that larger organizations will almost always have more resources than they will. They also are likely aware that leveraging agility as a competitive differentiator is somewhat contingent on having efficient processes in place and smart decision making. For these organizations to fully capitalize on the ability to adapt more quickly (and hopefully more intelligently), they likely need to circumvent the biases that often keep them stuck repeating unproductive patterns and therefore, hinder the competitive advantage their size may afford them. Firm-wide biases can manifest in organizations of any size, big or small. This is a product of us being human. These pertain to matters of change management, cybersecurity, and talent management. For this reason, our research throughout the paper pulls examples from organizations of all sizes, rather than just midsize businesses. Our hope is that by identifying issues especially relevant to private, midsize organizations, we provide a line of sight into how other groups are able to circumvent their biases and achieve new levels of productivity. Why is so much money reserved for implementation? Because the speed at which organizations can yield greater productivity often depends upon how well the business integrates these new technologies and processes with the people tasked with leveraging them.13 And as the behavioral sciences suggest, it is no easy endeavor to change people’s behavior—even when it would be in their best interest to do so. Decision making isn’t always made in absolute terms; often, it is viewed in terms of how it impacts our status quo. Committing to a path that may yield higher payoffs but with the cost of greater uncertainty can be intimidating for anyone. This fear of uncertainty is often fueled by the behavioral concept of loss aversion: We hate losses so much that we would prefer to stay put and forgo new opportunities rather than expose ourselves to greater risk. For example, engineers who may be weighing the merits of transitioning from a traditional manufacturing process to additive manufacturing (also known as 3D printing) have been known to have difficulty making this transition. 
Switching to this new technology could threaten their status as subject matter experts or deviate away from a career’s worth of knowledge and success garnered in traditional methods.14 In this case—and many others like it—we are expecting people to pivot their mental model of how their organization, and consequently, their role should be performed. Without lending an assisting hand, asking people to change how they see the world can be an ambitious endeavor. To facilitate these changes, the behavioral sciences suggest we should provide individuals with tools to make new courses of action easier. We discuss a few of these tools, known as behavioral nudges, which can help people make changes now that would benefit them in the future. When used effectively, nudges can remove cognitive barriers and offer people more confidence in taking on the unknown. When committing to a new endeavor, many of us can benefit from even a small amount of assistance. Commitment devices strive to help people achieve success by clearly outlining the steps they should take to accomplish their goals—and a road map to get there. Psychology suggests that when someone explicitly makes a commitment to acting differently, they tend to be both more willing and confident in their ability to act differently. Organizations can leverage these same commitment strategies in their own change management projects. Walking people through small, predetermined steps can help remove uncertainty and make change feel less overwhelming. Covering the last mile of change by requesting that people fill out their own commitment plans can engender a change environment that goes with the grain of human psychology rather than against it. Taking cues from our peers is a powerful means to invoke change. People often feel more empowered when they know how their peers behaved under similar circumstances. By using commitment devices, organizations can highlight peer performance for each step of the change process while explicitly communicating expectations and allowing employees to commit to stated goals. A core group of engineers deemed the new process cumbersome and saw little value in adopting these new procedures, so they didn’t. Their noncompliance negatively affected the accounts payable department, who found themselves staying late to reconcile the variances. Rather than host another training session, company leaders had a better idea, drawing on the power of storytelling and social experiences. They invited employees from engineering and from accounts payable to an off-site location and used whiteboards to visually represent the new process, pinpointing the highs and lows of what employees were experiencing. As the engineers began to put faces to names, leaders could see mental models shifting. The motivation to adopt the process was no longer based on the organization becoming more efficient—it was so their colleagues from the accounts payable department could go home on time. Smaller organizations may hold a relative advantage in deploying these insights. Given their size, they may find it easier to bring seemingly unrelated groups together to “humanize” the change. But education may only be the beginning. We live in a fast-paced world, filled with distractions. Instead, behavioral science tells us that a more consistent way to protect information security is to consider people’s “behaviors, motivations, and habits.”23 By doing so, we can link employee culture to strategies and actions that can better protect company information. 
Make the group image the self-image. A West Point Army study shows the power of group belonging. From the first day of training, cadets receive the same uniforms, haircuts, and routine—all in the spirit of espousing the same values across the group. With repetition, cadets internalize these values and they become integral to their own self-image.26 This can be akin to corporate environments that provide new employees with laptop locks and employee badge lanyards that prominently display the company logo. At the individual and group levels, security-minded behaviors also can be reinforced simply through example. From simple activities like locking up an unattended laptop to always wearing an employee badge in a highly visible location, how our peers behave signals how we should behave, and over time, what our peers believe can become what we believe. A hallmark of a good choice architecture is about designing an environment that, despite the many distractions, makes it easy for people to make choices in the short term that align with their long-term interests and, where necessary, are also in line with an organization’s cybersecurity requirements. For example, many organizations now offer an auto-escalation option to 401(k) plans whenever an employee receives a raise. By making the choice once, employees can easily increase their retirement contributions without having to make a “new” decision every time. Similarly, companies can provide default permissions for sharing information or helpful pop-up messages whenever sending data to external parties to increase compliant behavior. With relatively fewer stakeholders to consider and manage, determining these permissions and defaults may be easier for midsize firms to execute. With unemployment rates below 5 percent, many midsize businesses are finding it increasingly difficult to find and retain quality employees.27 Already competing with larger organizations with deeper pockets, these organizations may feel they are disadvantaged in areas such as recruitment and employee well-being programs. Despite these realities, midsize companies have an opportunity to redesign hiring practices and human resources infrastructure to better align with human psychology, thus enabling greater agility when seeking to fill emerging talent gaps.28 Here, their smaller size could be an asset; without the layers of bureaucracy some larger human resources departments may have, these more nimble HR departments could get right to work revamping policies so that they more explicitly speak to employees’ intrinsic motivations. Behavioral science shows us that people tend to rely too much on mental heuristics (“rules of thumb”) to make decisions. Although heuristics often guide us through our daily life to help us make quick, effortless decisions, they are also systematically biased. As Daniel Kahneman explains in Thinking, Fast and Slow, this is because we often generalize our assumptions based on small amounts of data and seek out patterns where none exist.29 Consequently, humans tend to be awful at making predictions—such as, who would make for a good hire. Google found that the use of brainteaser questions during the hiring process held no predictive value and that they were putting too much credence on degrees from top-tier universities.30 And perhaps the most well-known example of an organization overcoming systematic bias in hiring is found in Michael Lewis’s Moneyball. 
Rather than rely on the intuitions of baseball scouts to identify top players, the Oakland A's used data analytics to reduce bias and pick players based solely upon measurable attributes that lead to better team performance. Mastery: Some organizations make active efforts to inculcate a learning culture. For example, Google holds a celebrated Tech Talks series attracting prominent thinkers to share leading-edge thinking with its community. Deloitte Consulting LLP holds an annual data science summit at which the firm's data scientists can bond with, and learn from, each other. Beyond the economic efficiency of self-training rather than paying for external trainers, enabling employees to gain recognition as teachers who are masters of their domains is a powerful motivator. Purpose: Netflix's influential 124-page culture slide deck starts out with a clear statement that its corporate values are not words on a page but, rather, the behaviors and skills that colleagues value.37 In Work Rules!, Laszlo Bock comments on the ability of Google's concise mission statement—"to organize the world's information and make it universally accessible and useful"38—to help give individuals' work meaning.39 And the need to give work intrinsic meaning is hardly restricted to professional jobs. Identify employee motivations. Implementing a new technology? Think of how that could alter the employee's status quo. Will they feel like their skills are obsolete or will they regard it as a skill-building opportunity? Whether protecting your data assets or launching a new process, consider demonstrating why their behavior is meaningful to themselves, to their peers, and to the organization. Provide psychology-backed tools. Commitment devices help break down new behaviors into manageable activities. The more social you can design them, the better. Gather data, test, and learn. Fully leverage your ability to change course quickly by developing a test-and-learn environment. Consider instilling data insights across your organization, test the efficacy of changes and new initiatives, analyze the results, and react accordingly. Whether big or small, the organization that best understands the people behind it is often the one most equipped to innovate faster and more effectively and capitalize on new opportunities. Implementing behavioral science principles can help midsize companies use size to their advantage—and realize the benefits of being a truly agile organization.
https://www2.deloitte.com/insights/us/en/focus/behavioral-economics/helping-midsize-companies-become-agile-organizations.html
Recent research has demonstrated that cognitive biases such as the confirmation bias or the anchoring effect can negatively affect the quality of crowdsourced data. In practice, however, such biases go unnoticed unless specifically assessed or controlled for. Task requesters need to ensure that task workflow and design choices do not trigger workers' cognitive biases. Moreover, to facilitate the reuse of crowdsourced data collections, practitioners can benefit from understanding whether and which cognitive biases may be associated with the data. To this end, we propose a 12-item checklist adapted from business psychology to combat cognitive biases in crowdsourcing. We demonstrate the practical application of this checklist in a case study on viewpoint annotations for search results. Through a retrospective analysis of relevant crowdsourcing research that has been published at HCOMP in 2018, 2019, and 2020, we show that cognitive biases may often affect crowd workers but are typically not considered as potential sources of poor data quality. The checklist we propose is a practical tool that requesters can use to improve their task designs and appropriately describe potential limitations of collected data. It contributes to a body of efforts towards making human-labeled data more reliable and reusable.

- Original language: English
- Title of host publication: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
- Editors: Ece Kamar, Kurt Luther
- Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
- Pages: 48-59 (12 pages)
- Volume: 9
- ISBN (Print): 978-1-57735-872-5
- Publication status: Published - 2021
- Event: The Ninth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2021), virtual conference, 14 Nov 2021 to 18 Nov 2021

Bibliographical note: Green Open Access added to TU Delft Institutional Repository, 'You share, we take care!' - Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section, the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
https://research.tudelft.nl/en/publications/a-checklist-to-combat-cognitive-biases-in-crowdsourcing
The Ministry of Public Input: Report and Recommendations for Practice Lees-Marshment, Jennifer Identifier: http://hdl.handle.net/2292/23242 Issue Date: 2014-08 Reference: Aug 2014. 48 pages Rights (URI): https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm Abstract: Through an appreciative inquiry analysis of existing academic and practitioner literature on political marketing, e-government, public administration and policy, citizenship, engagement, participation, consultation and leadership and interviews with over 40 practitioners working in, for and outside government, this research has identified ideas on how public input might be integrated into political leadership more effectively in the future. Appropriate collection of public input is crucial to it producing high quality data that is useful to politicians. A mix of potential groups should be asked to give input, on any issue, using a range of methods but including at least some deliberative approaches, and focus on asking for solutions and priorities not just general demands. To ensure end suggestions are usable by political leaders, background information should be provided, a professional and conversation approach should be taken to proceedings by organisers and participants, and discussion should consider constraints and conflicts, whilst seeking to generate several not sole options for politicians to consider. The timeframe must be quick yet the scale large enough to be considered acceptable data by decision makers, and online methods might help achieve this. Moreover, a dedicated and appropriately resourced public input staff team or unit needs to be organised within government to ensure public input is collected and reported effectively. Furthermore, ensuring public input is processed appropriately is fundamental to making public input into government effective. A centralised institutional unit of public input needs to be created, to ensure that the results of public input are processed effectively and professionally, disseminated transparently and accessibly, and that high standards are maintained continually, best practice is reflected on and shared, continual learning and innovation occurs, and that staff are well supported and trained – both in the processing and collecting of public input. Politicians need to be involved throughout; public input needs to be collected at a time that is right in terms of their decision making; and the potential for influence needs to be very clear even if it is limited. The public input unit should also communicate public input initiatives and results effectively to media and the public, and co-ordinate and communicate a leadership response to public input so that there is feedback to participants. A Minister for Public Input is also needed to head the public input unit and system so that there is a champion and a figurehead offering support for the importance of integrated public input in government. Moreover, interviews with 51 government ministers identified that our leaders already find ways around the existing limitations in the way public input is currently collected to ensure they receive constructive and usable input that helps them show leadership and implement legitimised and long-lasting change. These interviews also found that there is a move towards a more deliberative leadership that acknowledges leaders cannot know and do everything by themselves and therefore seeks to utilise a diverse range of input from those outside government. 
Leaders listen to, engage with, and judge this input carefully; furthermore they also seek to work with the public in identifying solutions before making final decisions which they then explain and justify. This report argues that we need to develop a permanent government unit to collect, process and communicate ongoing public input such as a Ministry or Commission of Public Input. By improving public input systems; acknowledging the limits of their own power and knowledge; and devolving solution-finding to others, politicians are able to implement policy development that lasts beyond their time in power. Public input is not irreconcilable with political leadership; instead it is an essential step for any government that wishes to achieve significant and positive change.
https://researchspace.auckland.ac.nz/handle/2292/23242
A molecular cloud, sometimes called a stellar nursery (if star formation is occurring within), is a type of interstellar cloud, the density and size of which permit the formation of molecules, most commonly molecular hydrogen (H2). This is in contrast to other areas of the interstellar medium that contain predominantly ionized gas. Molecular hydrogen is difficult to detect by infrared and radio observations, so the molecule most often used to determine the presence of H2 is carbon monoxide (CO). The ratio between CO luminosity and H2 mass is thought to be constant, although there are reasons to doubt this assumption in observations of some other galaxies. Within molecular clouds are regions with higher density, where much dust and many gas cores reside, called clumps. These clumps are the beginning of star formation if gravitational forces are sufficient to cause the dust and gas to collapse. Within the Milky Way, molecular gas clouds account for less than one percent of the volume of the interstellar medium (ISM), yet it is also the densest part of the medium, comprising roughly half of the total gas mass interior to the Sun's galactic orbit. The bulk of the molecular gas is contained in a ring between 3.5 and 7.5 kiloparsecs (11,000 and 24,000 light-years) from the center of the Milky Way (the Sun is about 8.5 kiloparsecs from the center). Large scale CO maps of the galaxy show that the position of this gas correlates with the spiral arms of the galaxy. That molecular gas occurs predominantly in the spiral arms suggests that molecular clouds must form and dissociate on a timescale shorter than 10 million years—the time it takes for material to pass through the arm region. Vertically to the plane of the galaxy, the molecular gas inhabits the narrow midplane of the galactic disc with a characteristic scale height, Z, of approximately 50 to 75 parsecs, much thinner than the warm atomic (Z from 130 to 400 parsecs) and warm ionized (Z around 1000 parsecs) gaseous components of the ISM. The exception to the ionized-gas distribution are H II regions, which are bubbles of hot ionized gas created in molecular clouds by the intense radiation given off by young massive stars and as such they have approximately the same vertical distribution as the molecular gas. This distribution of molecular gas is averaged out over large distances; however, the small scale distribution of the gas is highly irregular with most of it concentrated in discrete clouds and cloud complexes. A vast assemblage of molecular gas that has more than 10 thousand times the mass of the Sun is called a giant molecular cloud (GMC). GMCs are around 15 to 600 light-years in diameter (5 to 200 parsecs) and typical masses of 10 thousand to 10 million solar masses. Whereas the average density in the solar vicinity is one particle per cubic centimetre, the average density of a GMC is a hundred to a thousand times as great. Although the Sun is much more dense than a GMC, the volume of a GMC is so great that it contains much more mass than the Sun. The substructure of a GMC is a complex pattern of filaments, sheets, bubbles, and irregular clumps. Filaments are truly ubiquitous in the molecular cloud. Dense molecular filaments will fragment into gravitationally bound cores, most of which will evolve into stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed fragmentation manner of the filaments. 
In supercritical filaments, observations have revealed quasi-periodic chains of dense cores with spacing of 0.15 parsec, comparable to the filament inner width. The densest parts of the filaments and clumps are called "molecular cores", while the densest molecular cores are called "dense molecular cores" and have densities in excess of 10^4 to 10^6 particles per cubic centimeter. Observationally, typical molecular cores are traced with CO and dense molecular cores are traced with ammonia. The concentration of dust within molecular cores is normally sufficient to block light from background stars so that they appear in silhouette as dark nebulae. GMCs are so large that "local" ones can cover a significant fraction of a constellation; thus they are often referred to by the name of that constellation, e.g. the Orion Molecular Cloud (OMC) or the Taurus Molecular Cloud (TMC). These local GMCs are arrayed in a ring in the neighborhood of the Sun coinciding with the Gould Belt. The most massive collection of molecular clouds in the galaxy forms an asymmetrical ring about the galactic center at a radius of 120 parsecs; the largest component of this ring is the Sagittarius B2 complex. The Sagittarius region is chemically rich and is often used as an exemplar by astronomers searching for new molecules in interstellar space. Isolated gravitationally-bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globules. The densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies. In 1984 IRAS identified a new type of diffuse molecular cloud. These were diffuse filamentary clouds that are visible at high galactic latitudes. These clouds have a typical density of 30 particles per cubic centimeter. The formation of stars occurs exclusively within molecular clouds. This is a natural consequence of their low temperatures and high densities, because the gravitational force acting to collapse the cloud must exceed the internal pressures that are acting "outward" to prevent a collapse. There is observational evidence that the large, star-forming clouds are confined to a large degree by their own gravity (like stars, planets, and galaxies) rather than by external pressure. The evidence comes from the fact that the "turbulent" velocities inferred from CO linewidth scale in the same manner as the orbital velocity (a virial relation). The physics of molecular clouds is poorly understood and much debated. Their internal motions are governed by turbulence in a cold, magnetized gas, for which the turbulent motions are highly supersonic but comparable to the speeds of magnetic disturbances. This state is thought to lose energy rapidly, requiring either an overall collapse or a steady reinjection of energy. At the same time, the clouds are known to be disrupted by some process—most likely the effects of massive stars—before a significant fraction of their mass has become stars. Molecular clouds, and especially GMCs, are often the home of astronomical masers.
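The virial relation invoked above can be turned into a rough mass estimate: if a cloud is close to virial balance, its mass follows from the observed linewidth and radius as M_vir ≈ 5 σ² R / G. A minimal sketch, with illustrative numbers rather than measurements from any particular cloud:

```python
# Sketch of the virial mass estimate: for a cloud roughly in virial
# balance, M_vir ~ 5 * sigma^2 * R / G. Inputs below are illustrative.

G = 6.674e-8            # cm^3 g^-1 s^-2
PC_IN_CM = 3.086e18
MSUN_IN_G = 1.989e33

def virial_mass_msun(sigma_kms, radius_pc):
    """Virial mass (solar masses) from 1D velocity dispersion and radius."""
    sigma_cms = sigma_kms * 1.0e5
    radius_cm = radius_pc * PC_IN_CM
    return 5.0 * sigma_cms**2 * radius_cm / G / MSUN_IN_G

# A cloud with a 2 km/s dispersion and a 20 pc radius:
print(f"{virial_mass_msun(2.0, 20.0):.2e} Msun")   # of order 1e5 Msun
```

Comparing such a virial mass with the CO-derived mass of the same cloud is one common way of judging whether the cloud is gravitationally bound.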
http://wikilion.com/Molecular_cloud
Authors: A. T. Barnes, S. N. Longmore, et al. Status: Published in MNRAS, open access version available on arXiv. Figure 1: Infrared image of the Rosette molecular cloud. The bright smudges are dusty cocoons containing massive protostars. The small spots near the centre of the image are lower mass protostars. Source: ESA. What are Young Massive Clusters? Young massive clusters (YMCs) are the most compact, high-mass stellar systems still forming today. They have masses of around 10,000 times the mass of the sun, are less than 100 million years old and have radii of less than 3 light-years. For reference, our Milky Way galaxy has a radius of 53,000 light-years! The clouds from which they form are rare, because of their large initial gas reservoirs and the rapid dispersal of that gas by stellar feedback. We can observe them, however, and because they are nearby (at low redshift) we can resolve them well with our telescopes. This allows us to observe individual star formation and test the theories of star and cluster formation. We know that the starting point of YMC formation is simply a large cloud of gas and dust, known as a molecular cloud, shown in Figure 1. The authors of today's paper try to answer the question of how a molecular cloud forms a typical YMC, with particular emphasis on the observed densities of these objects. 1) "Conveyor belt" – where the density of the initial cloud is lower than the density of the resulting YMC. Stars form in dense gas cores spread throughout the cloud while the cloud as a whole continues to collapse, so the stellar density increases as the cluster assembles. 2) "In situ" – where the density of the initial cloud is approximately the same as the density of the resulting YMC. Star formation does not occur straight away, and instead the gas contracts to a certain density at which it starts forming stars. The gas remains at this density, as do the stars formed, and thus this is the observed final YMC stellar density. 3) "Popping" – where the density of the initial cloud is greater than the density of the final YMC. The molecular cloud of gas collapses down before forming any stars, as in the "in-situ" scenario, but this time to an even higher density. Star formation begins at this high gas density but exhausts and expels its gas reservoir quickly, removing the gravitational influence of the gas. The stellar cluster therefore expands slightly towards a final lower stellar density. Discriminating between these models is clearly complicated, but by obtaining observations at various stages of this evolution from molecular cloud to YMC, we can begin to constrain the theory. The intermediate stage between molecular clouds and stellar clusters is termed a 'proto-cluster'. Figure 2: A three colour image of the Galactic Centre. In this image, red is 70 μm emission from Herschel Hi-GAL, green is 24 μm emission from Spitzer MIPSGAL, and blue is 8 μm emission from Spitzer GLIMPSE. Labeled are the sites of YMC formation throughout this region, and shown as rectangles are the approximate regions of the Central Molecular Zone (or CMZ) and the dust-ridge. So, how do they form? For their observations, the authors use the Atacama Large Millimetre Array (ALMA) to study two cloud systems. These clouds are massive, with gas masses of 100,000 times the mass of the sun, and they are also compact, with radii of around 3 light-years. They appear to be globally gravitationally bound and are thought to harbour only the very earliest stage of star formation, making them perfect candidates for YMC progenitor clouds. The ALMA observations of these two molecular clouds in the Galactic Centre have high angular resolution (1''/0.05 pc).
By using identified molecular line transitions, one can reliably trace the structure of the highest-density gas. The authors then investigate the mass and density distribution, along with the clouds' dynamical state. They conduct analysis of the identified core and proto-cluster regions, and show that half of the cores and both of the proto-clusters are unstable to gravitational collapse. This is the first kinematic evidence of global gravitational collapse in YMC precursor clouds at such an early evolutionary stage. The results imply that if these collapsing clouds were to form YMCs, then they would do so via the "conveyor-belt" mode, where stars continually form within dispersed dense gas cores as the cloud globally collapses. This result is also supported by the fact that star formation has only recently begun. The authors find all YMC progenitors have a common formation mechanism, regardless of environment, which is surprising considering that the central parts of the galaxy where these YMCs form have extreme environmental conditions. Young massive clusters are exceptional systems, still fiercely forming stars today, making them the perfect laboratory for furthering our understanding of star formation and stellar cluster formation. This paper helps to constrain one more piece of the puzzle, by establishing the mechanism by which these massive dense systems form. YMCs have been described in other works as the analogues to early universe globular clusters. Therefore, further observations and analysis of these nearby objects may also help us to make suggestions about how stars formed much earlier in our Universe's history. I am a PhD student at the Max Planck Institute for Astronomy in Garching, Germany. My research focuses on high-resolution hydrodynamical simulations of isolated dwarf galaxies, with particular interest in stellar clusters and black holes. I completed my masters at the University of Sussex, UK on predictions of gravitational wave rates from semi-analytic models of galaxy formation.
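The three scenarios above hinge on how the mean density of the progenitor cloud compares with that of the finished cluster. Here is a rough sketch of that comparison, using round numbers echoing those quoted in the text (a 100,000 solar mass cloud and a 10,000 solar mass YMC, both about 3 light-years in radius); these figures are illustrative, not the paper's measurements.

```python
# Sketch: comparing the mean mass density of a progenitor cloud with that
# of a finished YMC, the quantity the three formation scenarios hinge on.
# The numbers echo the rough figures quoted in the text and are illustrative.

LY_IN_PC = 1.0 / 3.26

def mean_density_msun_per_pc3(mass_msun, radius_ly):
    """Mean density in Msun per cubic parsec for a uniform sphere."""
    radius_pc = radius_ly * LY_IN_PC
    volume_pc3 = (4.0 / 3.0) * 3.1416 * radius_pc**3
    return mass_msun / volume_pc3

cloud = mean_density_msun_per_pc3(1.0e5, 3.0)   # progenitor gas cloud
ymc = mean_density_msun_per_pc3(1.0e4, 3.0)     # typical young massive cluster

print(f"cloud: {cloud:.0f} Msun/pc^3, YMC: {ymc:.0f} Msun/pc^3, ratio {cloud/ymc:.1f}")
```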
https://astrobites.org/2019/04/11/young-massive-star-cluster-formation-in-the-galactic-centre-is-driven-by-global-gravitational-collapse-of-high-mass-molecular-clouds/
It suggests that the Solar System formed from nebulous material. These clouds are gravitationally unstable, and matter coalesces within them into smaller, denser clumps, which then rotate, collapse, and form stars. The protoplanetary disk is an accretion disk that feeds the central star. The formation of giant planets is a more complicated process. It is thought to occur beyond the frost line, where planetary embryos are mainly made of various types of ice. As a result, they are several times more massive than in the inner part of the protoplanetary disk. What follows after the embryo formation is not completely clear. There is evidence that Emanuel Swedenborg first proposed parts of the nebular hypothesis in 1734. Pierre-Simon Laplace independently developed and proposed a similar model in 1796 in his Exposition du systeme du monde. He envisioned that the Sun originally had an extended hot atmosphere throughout the volume of the Solar System. His theory featured a contracting and cooling protosolar cloud—the protosolar nebula. The perceived deficiencies of the Laplacian model stimulated scientists to find a replacement for it. Observations now show the dusty discs surrounding nearby young stars in greater detail. The star formation process naturally results in the appearance of accretion disks around young stellar objects. The accretion process, by which 1 km planetesimals grow into 1,000 km sized bodies, is now well understood. This process develops inside any disk where the number density of planetesimals is sufficiently high, and proceeds in a runaway manner. Growth later slows and continues as oligarchic accretion. The physics of accretion disks encounters some problems. The most important one is how the material which is accreted by the protostar loses its angular momentum. The formation of planetesimals is the biggest unsolved problem in the nebular disk model. How 1 cm sized particles coalesce into 1 km planetesimals is a mystery. This mechanism appears to be the key to the question of why some stars have planets, while others have nothing around them, not even dust belts. The formation timescale of giant planets is also an important problem. Old theories were unable to explain how their cores could form fast enough to accumulate significant amounts of gas from the quickly disappearing protoplanetary disk. Another potential problem of giant planet formation is their orbital migration. Some calculations show that interaction with the disk can cause rapid inward migration, which, if not stopped, results in the planet reaching the central regions still as a sub-Jovian object. The initial collapse of a solar-mass protostellar nebula takes around 100,000 years. At the next stage the envelope completely disappears, having been gathered up by the disk. The orbits of many known extrasolar planets and planetary systems differ significantly from the planets in the Solar System, and Type I or Type II migration could smoothly decrease the semimajor axis of a planet's orbit, resulting in a warm or hot Jupiter.
http://fa-p.eu/make-it-happen-armando-solarte-pdf/
The models for star formation begin, amazingly, with the explosion of preexisting stars (e.g. Wikipedia states, "When these forces fall out of balance, such as due to a supernova shock wave, the cloud begins to collapse") or, as with the classic study by Larson, with an assumed starting point where the hypothetical condensation was already well underway. Analytical calculations and computer simulations do not show that star formation is possible based upon the known laws of physics.
Star Rotation: The "angular momentum problem", as researcher Richard B. Larson calls it (2003, The physics of star formation), recognizes that the rotation rates of the potential star-forming nebulae are a thousand times greater than could possibly be contained in a star without it flying apart. As a spinning nebula condensed, its spin would be conserved, like a figure skater pulling in her arms, so that the rotation rate of a star would be wildly fast beyond anything known in the universe.
Condensing Nebula: Condensing a gas cloud like the Eagle Nebula would increase pressure and temperature, which would then expand the cloud, because the weak force of gravity is easily overpowered by the cloud's pressure, as well as its angular momentum. Further, the cloud would have to be more massive than an average star yet orders of magnitude smaller than any known nebulae.
Magnetic Strength: The journal Science published what amounts to a parallel of the angular momentum problem: "Interstellar clouds are permeated by magnetic fields that we believe to be effectively frozen to the contracting gas; as the gas cloud collapses to form a star, the magnetic field lines should be compressed ever closer together, giving rise to enormous magnetic fields, long before the collapse is completed. These fields would resist further collapse, preventing the formation of the expected star; yet we observe no evidence of strong fields, and the stars [allegedly] do form, apparently unaware of our theoretical difficulties."
Dark Matter to the Rescue (Again): If gravity working on matter were sufficient to explain star formation, scientists would not be pinning their hope on dark matter. As explained in 2007 by a Reuters science correspondent in "Dark matter key to formation of first stars": as the universe initially contained only hydrogen and helium, dark matter was critical in providing the gravitational force to pull these elements together to form stars; now that there are other objects in the galaxy [including pre-existing exploding stars], dark matter is not needed to form stars. Then again in 2014 a Nature paper on the formation of the first stars and galaxies was described by Caltech's Richard Ellis: "Now we can get to grips with how stars and galaxies form and relate it to dark matter. You can make stars and galaxies that look like the real thing. But it is the dark matter that is calling the shots." As a theory rescue device, dark matter is rather flexible!
Population III: The Big Bang predicts that the "first generation" of stars, referred to as Population III stars, would have been composed only of hydrogen and helium (without metals, i.e., heavy elements) and that they should still be plentiful. Yet even though many millions of stars have been studied and cataloged, not even one Population III star has been found. "Astronomers have never seen a pure Population III star, despite years of combing our Milky Way galaxy." –Science Jan. 4, 2002, p. 66 (see this reference and many more).
Recently this problem was defined away by claiming that the smallest Pop III stars would have been a thousand times more massive than previously claimed and so would more rapidly expend their nuclear fuel. But then in Nov. 2018, an allegedly 13.5 Gyr-old very small binary star system was discovered, rewinding the wildly morphing stellar evolution hypothesis.
No Dust to Form Molecular Hydrogen: Even if the so-called first generation stars could overcome all other star formation hurdles and have their formation helped by the use of molecular hydrogen, an additional problem (not unlike the difficulty of forming raindrops without a pollution/particulate nucleus) exists in that hydrogen atoms are unlikely to bond without a landing surface of sorts.
Blue Star Assembly Line: Short-lived (1M to 10M-yr) blue "straggler" stars are unexpectedly found in allegedly much-older clusters.
Millions of Years of Missing Stage 3 Supernovas: An explosion appeared in the night sky in 1054 A.D.; its supernova remnant (SNR) forms the Crab Nebula. Big Bang theory predicts a significant rate of star explosions (one every 25 to 100 years). Yet not only are there millions of years of missing SNRs of Stage 3 diameter; further, the number of Stage 1 and Stage 2 SNRs corresponds well to the expected number if the universe is less than 10,000 years old. (For more on this, check out this RSR program.)
Conclusion: The lack of awareness of these problems, even among science buffs in the general public, is evidence of the bias in popular science media sources, as in the example below from Astronomy Cast, hosted by Fraser Cain, the founder of Universe Today. The physics haven't changed since a Cambridge professor summed up the stellar evolution problem: "The process by which an interstellar cloud is concentrated until it is held together gravitationally to become a protostar is not known. In quantitative work, it has simply been assumed that the number of atoms per cm3 has somehow increased about a thousand-fold over that in a dense nebula. The two principal factors inhibiting the formation of a protostar are that the gas has a tendency to disperse before the density becomes high enough for self-gravitation to be effective, and that any initial angular momentum would cause excessively rapid rotation as the material contracts. Some mechanism must therefore be provided for gathering the material into a sufficiently small volume that self-gravitation may become effective, and the angular momentum must in some way be removed." Eva Novotny, Introduction to Stellar Atmospheres and Interiors, Oxford University Press. And here's the admission from Neil deGrasse Tyson in his Death by Black Hole: And Other Cosmic Quandaries, p. 187: Not all gas clouds in the Milky Way [or any galaxy] can form stars at all times. More often than not, the cloud is confused about what to do next. Actually, [we] astrophysicists are the confused ones here. We know the cloud wants to collapse under its own weight to make one or more stars. But rotation as well as turbulent motion within the cloud work against that fate. So, too, does the ordinary gas pressure you learned about in high-school chemistry class. Galactic magnetic fields also fight collapse: they penetrate the cloud and latch onto any free-roaming charged particles contained therein, restricting the ways in which the cloud will respond to its self-gravity.
The scary part is that if none of us knew in advance that stars exist, front line research would offer plenty of convincing reasons for why stars could never form. Years ago NASA scientist John C. Brand in The Physics and Astronomy of the Sun and Stars provided the circular reasoning for believing that the laws of physics can do what otherwise appears impossible: Contemporary opinion on star formation holds that objects called protostars are formed as condensations from the interstellar gas. This condensation process is very difficult theoretically, and no essential theoretical understanding can be claimed; in fact, some theoretical evidence argues strongly against the possibility of star formation. However, we know that the stars exist, and we must do our best to account for them.
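The "angular momentum problem" cited above rests on a simple conservation argument: if a rotating cloud collapsed without shedding angular momentum, its spin rate would grow as the inverse square of its radius. A minimal sketch of that arithmetic follows; the input numbers are illustrative, and the sketch says nothing about the mechanisms (magnetic braking, disk formation, outflows) that mainstream models invoke to carry angular momentum away.

```python
# Sketch of the spin-up implied by angular momentum conservation:
# for a uniform sphere, L ~ M * R^2 * Omega, so the rotation period
# shrinks as (R_final / R_initial)^2. Input values are illustrative only.

def spun_up_period(initial_period_yr, initial_radius, final_radius):
    """Rotation period after collapsing from initial_radius to final_radius
    (any consistent units), assuming angular momentum is conserved."""
    return initial_period_yr * (final_radius / initial_radius) ** 2

# A cloud core 0.1 pc in radius collapsing to a Sun-sized star,
# starting with one rotation every million years:
PC_PER_RSUN = 2.26e-8          # one solar radius expressed in parsecs
period_yr = spun_up_period(1.0e6, 0.1, PC_PER_RSUN)
print(f"final period ~ {period_yr:.1e} yr (~{period_yr * 3.15e7:.0f} s)")
```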
https://thecreationclub.com/snags-in-the-star-formation-yarn/
Galaxies start small, but grow over time as they merge with other galaxies. After a while, however, the nearby space runs out of galaxies to merge with. All that's left is one large galaxy called a fossil group, which sits inside an even larger halo of dark matter. Astronomers are puzzled by how these fossil groups are able to form so rapidly – some shouldn't be able to do it in the lifetime of the Universe. New observations from Chandra and ESA's XMM-Newton observatories have provided new clues about how these clusters collapse and form. Taking advantage of the high sensitivity of ESA's XMM-Newton and the sharp vision of NASA's Chandra X-Ray space observatories, astronomers have studied the behaviour of massive fossil galaxy clusters, trying to find out how they find the time to form. Many galaxies reside in galaxy groups, where they experience close encounters with their neighbours and interact gravitationally with the dark matter – mass which permeates the whole of intergalactic space but is not directly visible because it doesn't emit radiation. If this process runs to completion, and no new galaxies fall into the group, then the result is an object dubbed a 'fossil group', in which almost all the stars are collected into a single giant galaxy, which sits at the centre of a group-sized dark matter halo. The presence of this halo can be inferred from the presence of extensive hot gas, which fills the gravitational potential wells of many groups and emits X-rays. The fossil group investigated, called 'RX J1416.4+2315', is dominated by a single elliptical galaxy located one and a half thousand million light years away from us, and it is 500 thousand million times more luminous than the Sun. According to calculations, a fossil cluster as massive as RX J1416.4+2315 would not have had the time to form during the whole age of the universe. The key process in the formation of such fossil groups is the process known as 'dynamical friction', whereby a large galaxy loses its orbital energy to the surrounding dark matter. This process is less effective when galaxies are moving more quickly, which they do in massive 'clusters' of galaxies. The optical brightness of the central dominant galaxy in this fossil is similar to that of the brightest galaxies in large clusters (called 'BCGs'). According to the astronomers, this implies that such galaxies could have originated in fossil groups around which the cluster builds up later. This offers an alternative mechanism for the formation of BCGs compared to the existing scenarios in which BCGs form within clusters during or after the cluster collapse. "The study of massive fossil groups such as RX J1416.4+2315 is important to test our understanding of the formation of structure in the universe," adds Khosroshahi. "Cosmological simulations are underway which attempt to reproduce the properties we observe, in order to understand how these extreme systems develop," he concludes.
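The dynamical friction argument can be made semi-quantitative with the textbook decay time for a satellite galaxy on a circular orbit inside an isothermal dark matter halo, t_df ≈ 1.17 r_i² v_c / (G M ln Λ). The sketch below uses that standard formula with illustrative inputs; it is not the calculation from this study, but it shows why merging the members of a massive group can demand a time comparable to the age of the Universe.

```python
# Sketch of the textbook dynamical-friction decay time for a satellite of
# mass M on a circular orbit of radius r_i in an isothermal halo with
# circular speed v_c:  t_df ~ 1.17 * r_i^2 * v_c / (G * M * ln(Lambda)).
# All inputs below are illustrative, not values from the article.

G = 4.301e-3            # gravitational constant in pc * (km/s)^2 / Msun

def t_df_gyr(r_i_kpc, v_c_kms, m_sat_msun, coulomb_log=5.0):
    """Orbital decay time in Gyr for the assumed inputs."""
    r_i_pc = r_i_kpc * 1.0e3
    t_pc_per_kms = 1.17 * r_i_pc**2 * v_c_kms / (G * m_sat_msun * coulomb_log)
    # 1 pc / (1 km/s) is about 9.78e5 years
    return t_pc_per_kms * 9.78e5 / 1.0e9

# A 1e11 Msun galaxy starting 250 kpc out in a group with v_c = 400 km/s:
print(f"{t_df_gyr(250.0, 400.0, 1.0e11):.1f} Gyr")
```

With these example inputs the decay time comes out at roughly ten billion years, which is the puzzle the article describes.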
https://www.universetoday.com/8143/how-do-fossil-galaxy-clusters-form-so-quickly/
- According to the conventional view, star formation is due to a Jeans collapse of a massive interstellar cloud. The difficulties associated with a cloud of less than 100-10 000 M⊙ have led to the invention of complicated mechanisms which are supposed to fragment the collapsing cloud. So far no convincing arguments have been presented for star formation in this way. Among other difficulties, it does not lead to the formation of a planetary system around the newborn star without a number of ad hoc assumptions. The real argument for spending so much work on the Jeans collapse seems to have been that no other mechanism has been seriously suggested.
- As an alternative to these views we shall here try a new approach based on the following three principles.
- Cosmic plasma physics should not be based on an obsolete formalism but on what is known about the properties of a plasma from laboratory experiments and in situ space measurements.
- Interstellar clouds are not necessarily contracted and kept together by gravitation. Magnetic fields of the type observed in the magnetosphere and heliosphere are likely to exist also in interstellar space. In the same way as in our neighbourhood, they will collect gas and compress it. In this way gas clouds of any size may be formed, even so small that self-gravitation cannot keep them together. As we have found in Part II, the electric currents necessary for such pinching are not excessive. In fact they are reasonable extrapolations from currents which have been measured in the heliosphere.
- Even if not kept together by gravitation, a cloud of dusty plasma is gravitationally unstable in the sense that dust is collecting at the centre of gravity. A dust ball is formed, which by its gravitation speeds up the sedimentation of dust, and – in a later phase – also accretes gas from its surroundings.
They conclude:
- The usual conclusion that magnetic fields necessarily counteract the collapse of an interstellar cloud is model dependent. In other, and at least equally reasonable, magnetic field models, the magnetic field compresses the cloud. It is possible that dark clouds are formed and kept together by electromagnetic effects.
- In a dusty cloud gravitation collects the dust at the centre of gravity of the cloud. A dust ball is formed which, when it has grown large enough, collects gas from its surroundings. This process leads to the formation of a star.
- In a cloud with irregular structure a number of such dust balls may be formed which later join by a process which is similar to the 'planetesimal' formation of planets and satellites around the Sun. Such a 'stellesimal' accretion may result in a body having the same mass and angular momentum as the primeval Sun.
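The Jeans collapse referred to in the first point has a standard back-of-the-envelope expression for the minimum unstable mass. Below is a minimal sketch using one common form of the formula and illustrative values for cold molecular gas; the numbers are not taken from the text above.

```python
import math

# Sketch of the Jeans mass in one standard form:
#   M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)
# Temperature and density below are illustrative values for cold molecular gas.

K_B = 1.381e-16     # Boltzmann constant, erg/K
G = 6.674e-8        # cm^3 g^-1 s^-2
M_H = 1.673e-24     # hydrogen atom mass, g
MSUN = 1.989e33     # g

def jeans_mass_msun(temp_K, n_cm3, mu=2.33):
    """Jeans mass in solar masses for gas of temperature temp_K and
    particle number density n_cm3, with mean molecular weight mu."""
    rho = n_cm3 * mu * M_H                               # mass density, g/cm^3
    thermal_term = (5.0 * K_B * temp_K / (G * mu * M_H)) ** 1.5
    density_term = math.sqrt(3.0 / (4.0 * math.pi * rho))
    return thermal_term * density_term / MSUN

# Cold (10 K) molecular gas at 100 particles per cm^3:
print(f"{jeans_mass_msun(10.0, 100.0):.0f} Msun")   # a few tens of solar masses
```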
https://www.plasma-universe.com/star-formation/
The solar system is made up of the Sun, the 8 planets and 5 dwarf planets and their 176 known moons, asteroids, comets, dust and gas. The planets, asteroids, and comets travel around the Sun. The Sun is the centre of our solar system. Most of the bodies in the solar system travel around the Sun along nearly circular (elliptical) paths or orbits, and all the planets travel about the Sun in the anticlockwise direction. The Solar system is located in the Milky Way Galaxy. The formation of the Solar System began 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the centre, forming the Sun, while the rest flattened into a protoplanetary disk out of which the planets, moons, asteroids, and other small Solar System bodies formed. This widely accepted model, known as the nebular hypothesis, was first developed in the 18th century by Emanuel Swedenborg, Immanuel Kant, and Pierre-Simon Laplace. Its subsequent development has involved a variety of scientific disciplines including astronomy, physics, geology, and planetary science. Since the dawn of the space age in the 1950s and the discovery of extrasolar planets in the 1990s, the model has been both challenged and refined to account for new observations. Scientists believe that the solar system was formed when a cloud of gas and dust in space was disturbed, maybe by the explosion of a nearby star (called a supernova). This explosion made waves in space which squeezed the cloud of gas and dust. Squeezing made the cloud start to collapse, as gravity pulled the gas and dust together, forming a solar nebula. Just like a dancer that spins faster as she pulls in her arms, the cloud began to spin as it collapsed. Eventually, the cloud grew hotter and denser in the center, with a disk of gas and dust surrounding it that was hot in the center but cool at the edges. As the disk got thinner and thinner, particles began to stick together and form clumps. Some clumps got bigger, as particles and small clumps stuck to them, eventually forming planets or moons. Near the center of the cloud, where planets like Earth formed, only rocky material could stand the great heat. Icy matter settled in the outer regions of the disk along with rocky material, where the giant planets like Jupiter formed. As the cloud continued to fall in, the center eventually got so hot that it became a star, the Sun, and blew most of the gas and dust of the new solar system away with a strong stellar wind. By studying meteorites, which are thought to be left over from this early phase of the solar system, scientists have found that the solar system is about 4600 million years old! The various planets are thought to have formed from the solar nebula, the disc-shaped cloud of gas and dust left over from the Sun's formation. The currently accepted method by which the planets formed is accretion, in which the planets began as dust grains in orbit around the central protostar. Through direct contact, these grains formed into clumps up to 200 metres in diameter, which in turn collided to form larger bodies (planetesimals) of ~10 kilometres (km) in size. These gradually increased through further collisions, growing at the rate of centimetres per year over the course of the next few million years.
The inner Solar System, the region of the Solar System inside 4 AU, was too warm for volatile molecules like water and methane to condense, so the planetesimals that formed there could only form from compounds with high melting points, such as metals (like iron, nickel, and aluminium) and rocky silicates. These rocky bodies would become the terrestrial planets (Mercury, Venus, Earth, and Mars). These compounds are quite rare in the Universe, comprising only 0.6% of the mass of the nebula, so the terrestrial planets could not grow very large. The terrestrial embryos grew to about 0.05 Earth masses (M⊕) and ceased accumulating matter about 100,000 years after the formation of the Sun; subsequent collisions and mergers between these planet-sized bodies allowed terrestrial planets to grow to their present sizes. When the terrestrial planets were forming, they remained immersed in a disk of gas and dust. The gas was partially supported by pressure and so did not orbit the Sun as rapidly as the planets. The resulting drag caused a transfer of angular momentum, and as a result the planets gradually migrated to new orbits. Models show that density and temperature variations in the disk governed this rate of migration, but the net trend was for the inner planets to migrate inward as the disk dissipated, leaving the planets in their current orbits. The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, the point between the orbits of Mars and Jupiter where the material is cool enough for volatile icy compounds to remain solid. The ices that formed the Jovian planets were more abundant than the metals and silicates that formed the terrestrial planets, allowing the giant planets to grow massive enough to capture hydrogen and helium, the lightest and most abundant elements. Planetesimals beyond the frost line accumulated up to 4 M⊕ within about 3 million years. Today, the four giant planets comprise just under 99% of all the mass orbiting the Sun. Theorists believe it is no accident that Jupiter lies just beyond the frost line. Because the frost line accumulated large amounts of water via evaporation from infalling icy material, it created a region of lower pressure that increased the speed of orbiting dust particles and halted their motion toward the Sun. In effect, the frost line acted as a barrier that caused material to accumulate rapidly at ~5 AU from the Sun. This excess material coalesced into a large embryo (or core) on the order of 10 M⊕, which began to accumulate an envelope via accretion of gas from the surrounding disc at an ever increasing rate. Once the envelope mass became about equal to the solid core mass, growth proceeded very rapidly, reaching about 150 Earth masses ~10^5 years thereafter and finally topping out at 318 M⊕. Saturn may owe its substantially lower mass simply to having formed a few million years after Jupiter, when there was less gas available to consume. T Tauri stars like the young Sun have far stronger stellar winds than more stable, older stars. Uranus and Neptune are thought to have formed after Jupiter and Saturn did, when the strong solar wind had blown away much of the disc material. As a result, the planets accumulated little hydrogen and helium—not more than 1 M⊕ each. Uranus and Neptune are sometimes referred to as failed cores. The main problem with formation theories for these planets is the timescale of their formation. At the current locations it would have taken a hundred million years for their cores to accrete.
This means that Uranus and Neptune probably formed closer to the Sun—near or even between Jupiter and Saturn—and later migrated outward (see Planetary migration below). Motion in the planetesimal era was not all inward toward the Sun; the Stardust sample return from Comet Wild 2 has suggested that materials from the early formation of the Solar System migrated from the warmer inner Solar System to the region of the Kuiper belt. After between three and ten million years, the young Sun's solar wind would have cleared away all the gas and dust in the protoplanetary disc, blowing it into interstellar space, thus ending the growth of the planets.
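The frost line mentioned above can be located with a back-of-the-envelope estimate: for a body in radiative equilibrium with sunlight around a Sun-like star, T ≈ 280 K / sqrt(a/AU), and water ice condenses below roughly 170 K. The sketch below only illustrates that scaling; a real protoplanetary disk is also heated by accretion and shielded by dust, so published frost-line estimates vary.

```python
# Back-of-the-envelope sketch of where the frost line sits. For a body in
# equilibrium with sunlight around a Sun-like star, T ~ 280 K / sqrt(a / AU).
# Setting T to ~170 K (where water ice condenses) gives the frost-line radius.
# Illustration only; detailed disk models give somewhat different values.

def equilibrium_temp_K(a_au):
    """Rough blackbody equilibrium temperature at distance a_au from the Sun."""
    return 280.0 / a_au ** 0.5

def frost_line_au(t_condense_K=170.0):
    """Distance at which the equilibrium temperature drops to t_condense_K."""
    return (280.0 / t_condense_K) ** 2

print(f"T at 1 AU: {equilibrium_temp_K(1.0):.0f} K")
print(f"frost line: {frost_line_au():.1f} AU")   # ~2.7 AU, between Mars and Jupiter
```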
https://www.millioninformations.com/2015/05/solar-system.html
We live in a universe filled with galaxies. Galaxies are vast gravitationally bound aggregations of hydrogen gas clouds, stars that are produced when part of a cloud collapses under its own enormous weight, atoms that have been ionized by stellar radiation and dust formed from the remnants of previous stars that have either exploded or thrown off their outer layers during old age. Of these, the largest directly observable constituents are the hydrogen gas billows. Older terms survive within the astronomical lexicon. Any extended object in the sky (other than the Sun, Moon, planets and comets) has at one time or another been called a nebula. The root meaning, however, is cloud and it’s now most often used to reference places that contain gas and dust such as the view provided by the image accompanying this article. The term dust is also broadly applied astronomically- it’s not your household variety but grains of material that are only fractions of a micron in diameter. Other more exotic material is also suspected within galaxies- often referenced as dark matter due to our inability, thus far, to observe it directly. The great gas clouds that fill our galaxy, the Milky Way, are organized into a persistent spiral pattern similar to the arms that are wound about the center of other galaxies observed throughout the cosmos. Piercing these clouds are great tendrils of light absorbing dust that impart fantastic, at times familiar, shapes to the clouds when viewed from a relatively close distance such as the outline of the North American continent seen on the left side of this picture. Our galaxy has the relative proportions of two CD’s stacked on top of each other. The disc is so broad that it takes light about 100,000 years to travel from edge to edge and about two thousand years to traverse top to bottom except near the center. The central area has a large, slightly flattened, oblong bulge about 7,000 light years thick, at its greatest, that also displays a curious bar shaped pattern– something only recently discovered. Four arms made of gas, dust and stars slowly wind outward more or less continuously from the central area. These are punctuated by one (and maybe more) fragmented arms, about mid-way across the disc. Our Sun, with its system of planets and smaller bodies in tow, currently resides inside a fragment. Ours is known as the Local or Orion Arm. Most of the bright stars that form our familiar constellations exist within the same arm fragment with us- at least all those within about 1,500 light years, more or less. One prominent feature observed in spiral galaxies are the dark lanes of dust which often outline the edges of their spiral pattern. We are close to one and you can see it by looking towards the northern summer constellation named Cygnus. It’s called the Cygnus Rift or the Northern Coalsack and it’s a cloud of light absorbing dust that lines our Local Arm. It can be spotted with the naked eye from a dark site because it blocks the glow seen from the vast and more distant Cygnus Star Cloud that runs the length of this constellation. The Cygnus Star Cloud is composed of the combined light from countless stars stacked up behind each other along our line of sight and along the length of the Local Arm. Much closer to us hangs the North American and Pelican Nebulae, pictured here. The Pelican Nebula is depicted on the right side of the image. They are situated near the star Deneb, the brightest star in Cygnus and are about 1,800 light years from the Sun. 
Though they have the appearance of being separate, both are part of the same nebula: light-absorbing dust tendrils hang in front, intervene, and seem to divide the gas cloud, giving the illusion that there are two objects. The entire nebula, as seen here, is over 100 light years wide. The ultraviolet light from a single star illuminates this nebula. The energy thrown off by this star is bright enough to ionize the material within the cloud. Ionization occurs when electrons are temporarily ejected from atoms; when they recombine, a photon of light is released. Special filters can be placed in front of cameras that only pass the glow emitted by specific ionized atoms. This picture used that technique and assigned a unique color to each element. Hydrogen atoms are tinted green, sulfur is colored red and the hue for oxygen is blue. Therefore, the image not only displays the nebula's physical appearance but also provides information about its chemical makeup. Astronomer Don Goldman produced this intense and beautiful image on July 8, 2006 from his suburban Sacramento, California back yard. It required a 3.5 hour exposure through a seven-inch telescope with an 11 megapixel astronomical camera.
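The narrowband colour mapping described above (sulfur to red, hydrogen to green, oxygen to blue) is, at its simplest, a per-channel stretch followed by stacking the three frames into an RGB cube. A minimal numpy sketch follows; the arrays, stretch, and sizes are hypothetical stand-ins, and real processing would also involve calibration and alignment of the frames.

```python
import numpy as np

# Minimal sketch of the narrowband colour mapping described above:
# sulfur (SII) -> red, hydrogen (H-alpha) -> green, oxygen (OIII) -> blue.
# The arrays stand in for calibrated monochrome exposures; the simple
# asinh stretch is just one of many possible choices.

def stretch(img):
    """Normalise a frame to 0..1 with a simple asinh stretch."""
    img = img - img.min()
    img = img / (img.max() + 1e-12)
    return np.arcsinh(10.0 * img) / np.arcsinh(10.0)

def narrowband_to_rgb(sii, h_alpha, oiii):
    """Stack three narrowband frames into an RGB cube of shape (H, W, 3)."""
    return np.dstack([stretch(sii), stretch(h_alpha), stretch(oiii)])

# Hypothetical tiny frames, just to show the shape of the result:
sii, ha, oiii = (np.random.rand(2, 2) for _ in range(3))
print(narrowband_to_rgb(sii, ha, oiii).shape)   # (2, 2, 3)
```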
https://www.universetoday.com/351/
The Triangulum Galaxy – M33 The Triangulum Galaxy, also known as M33, NGC 598 or the Pinwheel Galaxy, is about 3 million light-years away from Earth. While its mass is not well understood — one estimate puts it between 10 billion and 40 billion times the sun's mass — what is known is that it's the third largest member of the Local Group, the collection of galaxies near the Milky Way. Triangulum also has a small satellite galaxy of its own, called the Pisces Dwarf Galaxy. Overall, M33 has a diameter of about 60,000 light years and is estimated to contain 40 billion stars. For comparison, the Milky Way contains 400 billion stars and M31 about 1 trillion (1,000 billion). M33 was probably discovered by Italian astronomer Giovanni Battista Hodierna before 1654. He listed it in his work regarding cometary orbits and admirable objects of the sky. Charles Messier independently re-discovered the galaxy on the night of August 25, 1764. Among the galaxy's most distinctive features are ionized hydrogen clouds, also called H-II regions, which are massive regions of starbirth. Sprawling along loose spiral arms that wind toward the core, M33's giant H-II regions are some of the largest known stellar nurseries, sites of the formation of short-lived but very massive stars. Intense ultraviolet radiation from the luminous, massive stars ionizes the surrounding hydrogen gas and ultimately produces the characteristic red glow. Observations of the Triangulum galaxy reveal that it is approaching the Milky Way Galaxy at about 62,000 mph (100,000 kph). Some astronomers believe that Triangulum is "gravitationally trapped" by the massive Andromeda Galaxy that is also hurtling toward our galaxy, the European Southern Observatory stated. Among Triangulum's most distinctive features is NGC 604, a region of starbirth so big that the Space Telescope Science Institute once described it as "monstrous." Their 2003 estimate says that the gas cloud has more than 200 blue stars and that it is more than 1,300 light years across — or about 100 times bigger than the Orion Nebula. If NGC 604 were at the same distance from Earth as the Orion Nebula, it would be the brightest object in the night sky. The young stars are extremely hot, at 72,000 degrees F (40,000 C), and the biggest ones are 120 times the mass of the sun. Radiation pumping out from the young stars floods into the gas in the region, making it fluoresce or glow. "Within our Local Group, only the Tarantula Nebula in the Large Magellanic Cloud exceeds NGC 604 in the number of young stars even though the Tarantula Nebula is slightly smaller in size," STScI stated. Studying M33 in infrared light, NASA's Wide-field Infrared Survey Explorer (WISE) revealed hotspots of activity within the galaxy in 2011 while also showing that the center of the galaxy doesn't have much going on within it. "Areas in the spiral arms that are hidden behind dust in visible light shine through brightly in infrared light, showing where clouds of cool gas are concentrated," NASA wrote at the time. "There isn't a lot of star formation occurring near the center of M33. It would be difficult to deduce this lack of activity in the core by only looking at a visible-light image, where the core appears to be the brightest feature." NASA added that the galaxy looks "surprisingly bigger" than an optical image would make it appear, because cold dust is visible further out in space than what astronomers initially expected.
https://parsseh.ir/49438/the-triangulum-galaxy-m33.html
Array Captures Fireworks From Stellar Collision (CN) – The violent and explosive nature of star births has been captured in new images of a cloud of gas hundreds of times more massive than the sun, the aftermath of two heavenly bodies striking each other. Captured by the Atacama Large Millimeter/submillimeter Array, or ALMA, the images show an active star formation factory within the constellation of Orion – roughly 1,350 light years away from Earth – in which two young stars interacted, causing streams of dust and gas to jet into space at over 90 miles per second. “What we see in this once calm stellar nursery is a cosmic version of a Fourth of July fireworks display, with giant streamers rocketing off in all directions,” John Bally of the University of Colorado said. Bally is the lead author of a paper published Friday in the Astrophysical Journal, which details how 500 years ago the stars either grazed each other or collided, triggering a powerful reaction that also launched other protostars into space at break-neck speeds. Today, the remains of this explosion are visible from Earth. Groups of stars are born when a massive cloud of gas begins to collapse under its own gravity. In the densest regions, protostars form and begin to float along randomly. Over time, some stars begin to fall toward a common center of gravity, which is usually dominated by a massive protostar. When protostars are drawn too close to each other before drifting away into the galaxy, violent interactions can occur. The team says such explosions are believed to be relatively brief, the remnants of which – like those seen by ALMA – last only centuries. “Though fleeting, protostellar explosions may be relatively common,” Bally said. The new data will help astronomers understand how such events impact star formation across the Milky Way galaxy. “People most often associate stellar explosions with ancient stars, like a nova eruption on the surface of a decaying star or the even more spectacular supernova death of an extremely massive star,” Bally said.
https://www.courthousenews.com/array-captures-fireworks-stellar-collision/
Since humans began to permanently settle locations for extended periods of time there has been the challenge to safely dispose of, or treat, human effluent. For the communities of Nunavut and Arctic Canada in particular, the treatment of wastewater has been especially challenging. The harsh climate, remote nature and socio-economic factors are a few of the aspects which make the treatment of wastewater problematic in Canadian Arctic communities. In the past several decades a number of conventional and alternative wastewater treatment systems (e.g. lagoons and tundra wetlands) have been proposed and implemented in Nunavut and other remote Arctic communities. Knowledge of the performance of these systems is limited, as little research has been conducted and regulatory monitoring has been poorly documented or not observed at all. Also, in the past, the rational design process for treatment systems in Arctic communities has not acknowledged cultural and socio-economic aspects, which are important for the long-term management and performance of the treatment facilities in Arctic communities. From 2008 to 2010 I characterized and studied the performance of several tundra wastewater treatment wetlands in the Kivalliq Region of Nunavut, as well as two in the Inuvialuit Region of the Northwest Territories. Performance testing occurred weekly throughout the summer of 2008. Characterization included surveys of plant communities in the tundra wetlands, specifically analyzing the relationship between Carex aquatilis and various nutrient contaminants in wastewater. Through their characterization I was able to provide greater insight into primary treatment zones within the wetland, and identify the main potential mechanisms for the treatment of wastewater in the Arctic. I also studied the performance of a horizontal subsurface flow (HSSF) constructed wetland in Baker Lake, Nunavut; the first system of its kind in the Canadian Arctic. The weekly performance study showed average weekly percent reductions in all parameters, with small deviations immediately after snow-melt and at the beginning of freeze-up. For the parameters monitored I observed reductions of 47-94% cBOD5, 57-96% COD, 39-98% TSS, >99% TC, >99% E. coli, 84-99% NH3-N and 80-99% TP for the six tundra treatment wetlands. The wetland characterization study, through the use of spatial interpolations of each of the wetlands and their water quality, showed that concentrations of the wastewater parameters decreased the most in the first 100 m of the wetland in all three treatment wetlands used in this portion of the analysis (Chesterfield Inlet, Paulatuk and Ulukhaktok). Areas of greatest concentration were shown to follow preferential flow paths, with concentrations decreasing in a latitudinal and longitudinal direction away from the wastewater source. The Paulatuk and Ulukhaktok treatment wetlands were observed to effectively polish pre-treated wastewater from the facultative lake and engineered lagoon, with removals of key wastewater constituents of cBOD5, TSS and NH3-N to near background concentrations. And despite the absence of pre-treatment in Chesterfield Inlet, that wetland was also observed to effectively treat wastewater to near background concentrations. Further characterization of the composition of the sedge C. aquatilis showed that a high percent cover of the species corresponded with areas of high concentration of NH3-N in the wastewater. A principal components analysis verified the spatial results, showing correlation between C.
aquatilis cover and NH3-N concentrations. The analysis also showed a strong positive relationship between sites closer to the source of wastewater and C. aquatilis. No correlation was found between the other parameters analyzed and C. aquatilis. The first year of study of the HSSF constructed wetland showed promising mean removals in cBOD5, COD, TSS, E. coli, Total Coliforms, and TP throughout the summer of 2009; removals of 25%, 31%, 52%, 99.3%, 99.3%, and 5% were observed respectively. However, in the second year of study, 2010, the system did not perform as expected, and concentrations in the effluent actually increased. I concluded that a high organic loading during the first year of study saturated the system with organics. Finally, a review of the planning process and regulatory measures for wastewater in Arctic communities and the impending municipal wastewater effluent standards resulted in the following recommendations: i) wastewater effluent standards should reflect the diverse Arctic climate and socio-economic environment of the northern communities, ii) effluent standards should be region- or even community-specific in the Arctic, and iii) Inuit understanding of planning and consultation should be incorporated into the future planning and management of wastewater. This research has several major implications for wastewater treatment and planning for Nunavut and other Arctic regions. The performance and characterization of tundra treatment wetlands fill significant gaps in our understanding of their performance, potential mechanisms of treatment, and treatment period in the Kivalliq Region. Although the HSSF constructed wetland failed, further research into engineered/augmented treatment wetlands should be considered, as they provide low-cost, low-maintenance solutions for remote communities. Finally, the data collected in this study will provide significant insight into the development of new municipal wastewater effluent standards for northern communities, which will be reflected in the Fisheries Act.
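The removal figures quoted throughout this abstract are percent reductions between influent and effluent concentrations. A minimal sketch of that calculation follows; the parameter names echo those above, but the concentrations are invented placeholders, not data from the thesis.

```python
# Sketch of the percent-removal figure used throughout the abstract:
#   removal = (influent - effluent) / influent * 100
# Concentrations below are invented placeholders, not thesis data.

def percent_removal(influent, effluent):
    """Percent reduction from influent to effluent concentration."""
    return (influent - effluent) / influent * 100.0

samples = {
    "cBOD5 (mg/L)": (250.0, 30.0),
    "TSS (mg/L)": (180.0, 12.0),
    "NH3-N (mg/L)": (40.0, 2.5),
}

for name, (infl, effl) in samples.items():
    print(f"{name}: {percent_removal(infl, effl):.0f}% removal")
```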
https://uwspace.uwaterloo.ca/handle/10012/6682
- Arctic Warming – What Warming? The claim that the summer of 2007 was apocalyptic for Arctic sea ice has recently gone around the globe, because the coverage and thickness of the sea ice in the Arctic has been declining steadily over the past few decades. For many scientists this situation appears to be related to global warming (Broennimann, 2008). In 2003 a USA research center already formulated it this way: "Recent warming of Arctic may affect worldwide climate". Not everyone agreed, and some quarrelled: what Arctic warming? Although there is hardly a convincing reason to neglect the recent warming in the Arctic and the extent of ice melt during the summer season, it is not necessarily clear yet whether the current discussion is based on a sound and comprehensive assessment. Climate research should not only deal with Arctic warming based on observations made during the last few decades, but at least be extremely interested in other climatic events that occurred in modern times, especially if somehow in connection with the situation in the Arctic. Why?
Book Table of Contents
CHAPTER 1 REVIEWING THE PAST TO UNDERSTAND THE FUTURE – AN INTRODUCTION
CHAPTER 2
CHAPTER 3 SPITSBERGEN TEMPERATURE ROCKETING
CHAPTER 4 REGIONAL IMPACT OF THE SPITSBERGEN WARMING
CHAPTER 5
CHAPTER 6 HOW IS THE AGITATION IN THE ARCTIC EXPLAINED?
CHAPTER 7 WHERE DID THE EARLY ARCTIC WARMING ORIGINATE?
CHAPTER 8 DID NAVAL WAR CAUSE THE ARCTIC WARMING?
https://arndbernaerts.com/33/
Arctic sea ice helps remove CO2 from the atmosphere A new study shows that calcium carbonate in the ice absorbs CO2 from the atmosphere. A new thesis from the Greenland Institute of Natural Resources shows sea ice to be an important transporter of greenhouse gases from the atmosphere to the depths of the ocean. The Arctic has warmed so much over the past few decades that the amount of sea ice has been reduced by some 30 per cent in the summer, and the winter ice has become much thinner. For this reason it's to be expected that if the Arctic sea ice shrinks, the atmosphere's content of CO2 will also increase. "If our results are representative of similar areas, the sea ice plays a greater role than expected and knowledge of this should be taken into account in future global CO2 budgets," says study author Dorte Haubjerg Søgaard, PhD Fellow at the Nordic Center for Earth Evolution, University of Southern Denmark and the Greenland Institute of Natural Resources, Nuuk. Sea ice thought impenetrable Søgaard began her study in 2010, when the sea ice in a number of fjords around Nuuk and Young Sound in north-east Greenland was examined -- and she wanted to determine the role of the sea ice in the regulation of the sea's absorption of CO2 from the atmosphere. It was not until recently that scientists realised that sea ice had any influence at all on the world's CO2 balance. "We've known for a long time that Earth's oceans are able to absorb enormous volumes of CO2, but we also thought this only applied to areas of ocean not covered by ice, because sea ice was considered to be impenetrable. This is not correct, however, since new research shows that the sea ice in the Arctic regions takes large quantities of CO2 out of the atmosphere and into the sea," says Søgaard. An important piece of the CO2 absorption puzzle It was also shown that two processes take place within the sea ice which directly influence the exchange of greenhouse gases between the sea and the atmosphere. These are chemical precipitation of calcium carbonate (CaCO3) and the activity of microorganisms in the sea ice. Calcium carbonate is formed in the sea ice during the winter and, as it forms, the greenhouse gas CO2 is separated from it and dissolved into a cold, heavy brine, which is pressed out of the ice and sinks into deeper parts of the sea. Unlike carbon dioxide and other gases, calcium carbonate is not able to move freely and so remains in the sea ice. During the warmer summer, when the sea ice melts, the calcium carbonate reacts with CO2 from the atmosphere and is dissolved. "So in this way, CO2 is removed from the atmosphere," says Søgaard. The research project revealed that the chemical formation of calcium carbonate crystals was far more significant for the ocean's ability to absorb CO2 than the biological processes driven primarily by ice algae and bacteria living in the sea ice. Another important discovery is that flower-like ice formations (frost flowers) form on the surface of newly formed sea ice. Søgaard has shown that these frost flowers contain extremely high concentrations of calcium carbonate, which may be of considerable importance to the potential absorption of CO2 in the Arctic regions.
https://sciencenordic.com/co2-denmark-global-warming/arctic-sea-ice-helps-remove-co2-from-the-atmosphere/1414237
CORVALLIS, Oregon -- A dramatic increase about 12,000 years ago in levels of atmospheric methane, a potent greenhouse gas, was most likely caused by higher emissions from tropical wetlands or from plant production, rather than a release from seafloor methane deposits, a new study concludes. This research, to be published Friday in the journal Science, contradicts some suggestions that the sudden release of massive amounts of methane frozen in seafloor deposits may have been responsible – or at least added to - some past periods of rapid global warming, including one at the end of the last ice age. The findings were made with analysis of carbon isotopes from methane frozen in Greenland ice core samples, by researchers from Oregon State University, the University of Victoria, University of Colorado, and the Scripps Institution of Oceanography at the University of California-San Diego. For climate researchers, an understanding of methane behavior is of some significance because it is the second most important "greenhouse gas" after carbon dioxide. Its atmospheric concentration has increased about 250 percent in the last 250 years, and it continues to rise about 1 percent a year. "Methane is a gas that makes a significant contribution to global warming but has gone largely unnoticed by the public and some policy makers," said Hinrich Schaefer, a postdoctoral research associate in the OSU Department of Geosciences. "Its concentration has more than doubled since the Industrial Revolution, from things like natural gas exploration, landfills, and agriculture. We need to know whether rapid increases of methane in the past have triggered global warming or just been a reaction to it." To better answer this question, researchers studied two stable isotopes of carbon found in methane, that can provide a better idea of where the methane came from during a period thousands of years ago when Earth was emerging from its most recent ice age, and entering the interglacial period that it is still in. At that time, methane concentration went up 50 percent in less than 200 years. Several things naturally produce methane, including biomass burning, geologic sources, wetlands, animals, and aerobic production by plants, a mechanism that was unknown until just recently. And huge amounts of methane – with more carbon stored in them than all the known oil and gas fields on Earth – are found in methane hydrates on the seafloor. In this setting, the cold temperatures and pressure keep the methane stable and prevent it from entering the atmosphere. But some researchers have theorized that something might release the trapped seafloor methane – submarine landslides, a drop in pressure caused by dropping sea levels, or warming of ocean waters. If that happened, it might cause a huge increase in atmospheric levels of methane and global warming. Some have hypothesized that this may be one of the factors that help cause cyclical ice ages – as ice levels rise and sea levels drop, methane might be released from the seafloor hydrates, causing global warming and an end to the ice age. Then the process would start over again. "There have been estimates that releasing even 1 percent of the methane hydrates in the seafloor could double the atmospheric concentration of methane," said Ed Brook, an associate professor of geosciences at OSU and co-author on the study. "So we looked to the past to see if that may have happened during previous periods of rapid global warming." 
Based on their isotopic analysis of the methane from the Greenland ice cores, the researchers concluded that it did not come from seafloor hydrate deposits or "gas bursts" of methane associated with them. The most likely candidates, they said, were higher emissions from tropical wetlands or larger amounts of plants, or some other combination of sources. If the rise in methane had come from seafloor hydrate deposits, the study found, the atmospheric levels of methane would have had a different isotopic "signature" than they actually did.

There are still important questions to answer about methane in Earth's atmosphere and the role it may play in global warming, the scientists said. For one thing, the current understanding of methane sources and sinks does not completely explain the isotopic signature of methane now found in the atmosphere. This indicates that estimates of methane emissions, including the human-made contribution, may have to be revised.

There are also concerns, they said, about methane trapped in permafrost across wide areas of the Earth's Arctic regions. There are significant amounts of methane found in this permafrost that could be released if it melted, and also organic material associated with melting permafrost that could cause further increases in methane. This might cause "a fairly significant rise in the total level of atmospheric methane of around 20 percent," Schaefer said.

By largely ruling out major bursts of methane from seafloor deposits during a period of global warming, however, this study suggests there may not be any "reinforcing" greenhouse mechanism from that cause. The increase in tropical wetlands or other factors that caused a large, rapid methane increase at the end of the last ice age may be relevant to future changes in methane, but changes in land use and vegetation cover due to human activities complicate the analysis of this issue, the scientists said.

Researchers at OSU are international leaders in the study of past climate changes, some of which have been surprisingly rapid - on the order of years or decades - and the mechanisms that cause them. These studies were funded by the National Science Foundation, the American Chemical Society, and other grants and fellowships.
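The "signature" argument is, at its core, an isotope mass-balance calculation: each candidate source carries a characteristic carbon-isotope ratio, so adding enough methane from a given source shifts the atmospheric value in a predictable direction. The sketch below illustrates that bookkeeping with a simple two-component mixing model; the delta-13C end-member values are hypothetical placeholders chosen for illustration, not the numbers used in the Science study.

```python
# Illustrative two-component mixing for the carbon-isotope value (delta-13C)
# of atmospheric methane. All delta values below are hypothetical placeholders,
# not figures from the study.

def mixed_delta13c(fraction_new, delta_background, delta_new_source):
    """delta-13C (per mil) of the atmospheric pool after adding a new source.

    fraction_new     -- fraction of the total burden contributed by the new source
    delta_background -- delta-13C of the pre-existing atmospheric methane
    delta_new_source -- delta-13C of the added methane
    """
    return (1 - fraction_new) * delta_background + fraction_new * delta_new_source

# A ~50 percent rise in concentration means the "new" methane is one third of the total.
f_new = 0.5 / 1.5

background = -47.0        # assumed pre-event atmospheric value
wetland_like = -60.0      # assumed microbial/wetland-type end member
thermogenic_like = -40.0  # assumed thermogenic-type end member

print("wetland-type source:    ", round(mixed_delta13c(f_new, background, wetland_like), 1))
print("thermogenic-type source:", round(mixed_delta13c(f_new, background, thermogenic_like), 1))
# Different source mixes push the measured value in different directions,
# which is why an ice-core isotope record can rule some candidates out.
```

A real budget would also have to account for isotopic fractionation by methane sinks, which this toy model ignores.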
https://www.underwatertimes.com/news.php?article_id=51030912867
Study links ecosystem changes in temperate lakes to climate warming

Unparalleled warming over the last few decades has triggered widespread ecosystem changes in many temperate North American and Western European lakes, say researchers at Queen's University and the Ontario Ministry of the Environment.

The team reports that striking changes are now occurring in many temperate lakes, similar to those previously observed in the rapidly warming Arctic, although typically many decades later. The Arctic has long been considered a "bellwether" of what will eventually happen with warmer conditions farther south.

"Our findings suggest that ecologically important changes are already under way in temperate lakes," says Queen's Biology research scientist Dr. Kathleen Ruhland, from the university's Paleoecological Environmental Assessment and Research Lab (PEARL) and lead author of the study. The research was recently published in the international journal Global Change Biology. Also on the team are Biology professor John Smol, Canada Research Chair in Environmental Change, and Andrew Paterson, a research scientist at the Ontario Ministry of the Environment and an adjunct professor at Queen's.

One of the biggest challenges with environmental studies is the lack of long-term monitoring data, Dr. Ruhland notes. "We have almost no data on how lakes have responded to climate change over the last few decades, and certainly no data on longer term time scales," she says. "However, lake sediments archive an important record of past ecosystem changes by the fossils preserved in mud profiles."

The scientists studied changes over the last few decades in the species composition of small, microscopic algae preserved in sediments from more than 200 lake systems in the northern hemisphere. These algae dominate the plankton that float at or near the surface of lakes, and serve as food for other larger organisms. Striking ecosystem changes were recorded from a large suite of lakes from Arctic, alpine and temperate ecozones in North America and western Europe.

Aquatic ecosystem changes across the circumpolar Arctic were found to occur in the late 19th and early 20th centuries. These were similar to the shifts in algal communities, indicating decreased ice cover and related changes, seen over the last few decades in the temperate lakes. "As expected, these changes occurred earlier – by about 100 years – in highly sensitive Arctic lakes, compared with temperate regions," says Dr. Smol, recipient of the 2004 Herzberg Gold Medal as Canada's top scientist.

In a detailed study from Whitefish Bay, Lake of the Woods, located in northwestern Ontario, strong relationships were found between changes in the lake algae and long-term changes in air temperature and ice-out records. The authors believe that, although the study was focused on algae preserved in lake sediments, changes to other parts of the aquatic ecosystem are also likely (for example algal blooms and deep-water oxygen levels).

"The widespread occurrence of these trends is particularly troubling as they suggest that climatically-induced ecological thresholds have already been crossed, even with temperature increases that are below projected future warming scenarios for these regions," adds Dr. Paterson. The authors warn that if the rate and magnitude of temperature increases continue, it is likely that new ecological thresholds will be surpassed, many of which may be unexpected.

"We are entering uncharted territory, the effects of which can cascade throughout the entire ecosystem," concludes Dr. Smol.
https://phys.org/news/2008-12-links-ecosystem-temperate-lakes-climate.html
In the summer of 2019, the ice sheet covering Greenland experienced significant melting. A study revealed that the loss was largely caused by weather patterns that are happening more frequently. These patterns are not being considered in climate models, which suggests that scientists are underestimating the melt rates of the Greenland Ice Sheet!

Columbia University research professor Marco Tedesco says, "We're destroying ice in decades that was built over thousands of years." He says that the effects of this destruction will have huge impacts on the rest of the world.

Using satellite data, climate models and global weather patterns, the researchers investigated the melting of the ice sheet in 2019. They found that almost 96% of the ice sheet had experienced melting at some point in the year, which is a lot more than the 64% seen between 1981 and 2010.

High pressure conditions happened for 63 of the 92 summer days in 2019. The continuous high pressure zone over the ice sheet last summer affected the levels of snowfall, cloudiness and the reflection and absorption of sunlight. Clouds were not able to form over southern Greenland in high pressure conditions, which caused unfiltered sunlight to melt the ice sheet surface. Fewer clouds also meant less snow, which reveals darkened ice that absorbs heat instead of reflecting it back into space. Tedesco points out that these conditions have become more and more frequent over the past few decades.

Worryingly, a team of researchers says that the climate models used by the Intergovernmental Panel on Climate Change (IPCC) have not taken into account these unusual weather conditions. If these high pressure environments continue, the ice sheet could melt twice as quickly as currently predicted, which will have terrible consequences for sea-level rise.

In the last few decades, 20-25% of global sea level rise has been caused by Greenland. If carbon emissions continue to increase, this will likely rise to around 40% by 2100. Greenland's ice sheet covers 80% of the island and if it melts completely, it is predicted to raise global sea levels by up to 7 metres!

This is why it is very important that we reduce our greenhouse gas emissions: people living near the ocean all around the world will be very badly affected by sea level rise and will have to leave their homes due to flooding. This will affect the world's poorest countries much more than the wealthier ones, so it is important to reduce the effects of climate change to protect these vulnerable people.
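The sea-level figures quoted here rest on a standard conversion between ice mass lost and global mean sea level: roughly 362 gigatonnes of ice correspond to about 1 millimetre of sea-level rise. The sketch below shows that conversion; the single-year mass-loss figure in the example is an assumed, illustrative number rather than one given in the article.

```python
# Rough conversion from ice-sheet mass loss to global mean sea-level rise.
# Widely used factor: about 361.8 Gt of ice raises global mean sea level
# by about 1 mm (ocean area ~3.62e14 m^2, water density ~1000 kg/m^3).

GT_PER_MM = 361.8  # gigatonnes per millimetre of global mean sea level

def sea_level_rise_mm(mass_loss_gt: float) -> float:
    """Sea-level equivalent (mm) of an ice mass loss given in gigatonnes."""
    return mass_loss_gt / GT_PER_MM

# Illustrative input only: a loss of several hundred gigatonnes in a single
# year, of the general magnitude reported for Greenland in 2019.
print(round(sea_level_rise_mm(530.0), 2), "mm of global mean sea level")
```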
https://kids.earth.org/climate-change/greenland-ice-sheet/
Earth scientists are attempting to predict the future impacts of climate change by reconstructing the past behavior of Arctic climate and ocean circulation. In a November special issue of the journal Ecology, a group of scientists report that if current patterns of change in the Arctic and North Atlantic Oceans continue, alterations of ocean circulation could occur on a global scale, with potentially dramatic implications for the world's climate and biosphere.

Charles Greene of Cornell University and his colleagues reconstructed the patterns of climate change in the Arctic from the Paleocene epoch to the present. Over these 65 million years, the Earth has undergone several major warming and cooling episodes, which were largely mediated by the expansion and contraction of sea ice in the Arctic.

"When the Arctic cools and ice sheets and sea ice expand, the increased ice cover increases albedo, or reflectance of the sun's rays by the ice," says Greene, the lead author on the paper. "When more of the sun is reflected rather than absorbed, this leads to global cooling." Likewise, when ice sheets and sea ice contract and expose the darker-colored land or ocean underneath, heat is absorbed, accelerating climate warming. Currently, the Earth is in the midst of an interglacial period, characterized by retracted ice sheets and warmer temperatures.

In the past three decades, changes in Arctic climate and ice cover have led to several reorganizations of northern ocean circulation patterns. Since 1989, a species of plankton native to the Pacific Ocean has been colonizing the North Atlantic Ocean, a feat that hasn't occurred in more than 800,000 years. These plankton were carried across the Arctic Ocean by Pacific waters that made their way to the North Atlantic.

"When Arctic climate changes, waters in the Arctic can go from storing large quantities of freshwater to exporting that freshwater to the North Atlantic in large pulses, referred to as great salinity anomalies," Greene explains. "These GSAs flow southward, disrupting the ocean's circulation patterns and altering the temperature stratification observed in marine ecosystems."

In the continental shelf waters of the Northwest Atlantic, the arrival of a GSA during the early 1990s led to a major ecosystem reorganization, or regime shift. Some ocean ecosystems in the Northwest Atlantic saw major drops in salinity, increased stratification, an explosion of some marine invertebrate populations and a collapse of cod stocks. "The changes in shelf ecosystems between the 1980s and 1990s were remarkable," says Greene. "Now we have a much better idea about the role climate had in this regime shift."
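Greene's albedo point can be made concrete with the textbook zero-dimensional energy-balance model, in which a planet's equilibrium temperature depends on how much sunlight is reflected. This is a generic illustration of the feedback he describes, not a model used by the study, and the albedo values are arbitrary.

```python
# Textbook zero-dimensional energy balance: equilibrium temperature as a
# function of planetary albedo. A generic illustration of the ice-albedo
# feedback, not the authors' model; albedo values are arbitrary.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2

def equilibrium_temperature(albedo: float) -> float:
    """Equilibrium blackbody temperature (K) for a given planetary albedo."""
    absorbed = S0 * (1.0 - albedo) / 4.0   # globally averaged absorbed solar flux
    return (absorbed / SIGMA) ** 0.25

for alpha in (0.28, 0.30, 0.32):
    print(f"albedo {alpha:.2f} -> equilibrium temperature {equilibrium_temperature(alpha):.1f} K")
# Each 0.02 step in albedo shifts the equilibrium temperature by roughly 2 K
# in this toy model: more reflective ice cover cools, darker open water warms.
```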
https://eponline.com/articles/2008/11/19/ecologists-use-ocean-data-to-predict-climate-change.aspx
Bottomland hardwood forests and wetlands are an integral part of this eco-region. This study was aimed at quantifying the changes that have taken place since 1974 in northeast Texas. Information about significant modifications or decline in the total area of these land cover types will aid environmental and resource managers to implement appropriate remedial measures.
https://ssl.tamu.edu/research/projects/landcover-change-assessment/
There are no requirements with respect to size or composition for a body of ice to be termed a polar ice cap, nor any geological requirement for it to be over land; only that it must be a body of solid phase matter in the polar region. This causes the term "polar ice cap" to be something of a misnomer, as the term ice cap itself is applied more narrowly to bodies that are over land and cover less than 50,000 km²; larger bodies are referred to as ice sheets.

The composition of the ice will vary. For example, Earth's polar caps are mainly water ice, whereas Mars's polar ice caps are a mixture of solid carbon dioxide and water ice.

Polar ice caps form because high-latitude regions receive less energy in the form of solar radiation from the Sun than equatorial regions, resulting in lower surface temperatures. Earth's polar caps have changed dramatically over the last 12,000 years. Seasonal variations of the ice caps take place due to varied solar energy absorption as the planet or moon revolves around the Sun. Additionally, on geologic time scales, the ice caps may grow or shrink due to climate variation.

Earth

North Pole

Earth's North Pole is covered by floating pack ice (sea ice) over the Arctic Ocean. Portions of the ice that do not melt seasonally can get very thick, up to 3–4 meters thick over large areas, with ridges up to 20 meters thick. One-year ice is usually about 1 meter thick. The area covered by sea ice ranges between 9 and 12 million km². In addition, the Greenland ice sheet covers about 1.71 million km² and contains about 2.6 million km³ of ice. When the ice breaks off (calves) it forms icebergs scattered around the northern Atlantic.

According to the National Snow and Ice Data Center, "since 1979, winter Arctic ice extent has decreased about 4.2 percent per decade". Both 2008 and 2009 had a minimum Arctic sea ice extent somewhat above that of 2007. At other times of the year the ice extent is still sometimes near the 1979–2000 average, as in April 2010, by the data from the National Snow and Ice Data Center.

South Pole

Earth's south polar land mass, Antarctica, is covered by the Antarctic ice sheet. It covers an area of about 14.6 million km² and contains between 25 and 30 million km³ of ice. Around 70% of the fresh water on Earth is contained in this ice sheet. Data from the National Snow and Ice Data Center shows that the sea ice coverage of Antarctica has a slightly positive trend over the last three decades (1979–2009).

Historical cases

Over the past several decades, Earth's polar ice caps have gained significant attention because of the alarming decrease in land and sea ice. NASA reports that sea ice in the Arctic has been declining at a rate of 9% per decade for the past 30 years, whereas Antarctica has been losing land ice at a rate of more than 100 km³ per year since 2002. The current rate of decline of the ice caps has caused many investigations and discoveries on glacier dynamics and their influence on the world's climate.

In the early 1950s, scientists and engineers from the US Army began drilling into polar ice caps for geological insight. These studies resulted in "nearly forty years of research experience and achievements in deep polar ice core drillings... and established the fundamental drilling technology for retrieving deep ice cores for climatologic archives." Polar ice caps have been used to track not only current climate patterns but also patterns over the past several thousand years from the traces of CO2 and CH4 found trapped in the ice.
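Figures such as "about 4.2 percent per decade" are linear trends: a straight line is fitted to the annual extent values and the slope is expressed relative to a reference mean. The sketch below shows that calculation on a made-up extent series; the numbers are purely illustrative and are not NSIDC data.

```python
# How a "percent per decade" trend is typically computed: fit a straight line
# to annual values, then express the slope relative to a reference mean.
# The extent series below is made up for illustration; it is not NSIDC data.
years = list(range(1979, 2010))
extent = [15.8 - 0.045 * (y - 1979) for y in years]   # hypothetical winter extent, million km^2

n = len(years)
mean_year = sum(years) / n
mean_extent = sum(extent) / n

# Ordinary least-squares slope (million km^2 per year).
slope = (sum((y - mean_year) * (e - mean_extent) for y, e in zip(years, extent))
         / sum((y - mean_year) ** 2 for y in years))

percent_per_decade = 100.0 * slope * 10.0 / mean_extent
print(f"trend: {percent_per_decade:.1f} percent per decade")
```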
In the past decade, polar ice caps have shown their most rapid decline in size with no true sign of recovery. Josefino Comiso, a senior research scientist at NASA, found that the "rate of warming in the Arctic over the last 20 years is eight times the rate of warming over the last 100 years." In September 2012, sea ice reached its smallest extent on record. Journalist John Vidal stated that sea ice is "700,000 sq km below the previous minimum of 4.17m sq km set in 2007". In August 2013, Arctic sea ice extent averaged 6.09 million km², which represents 1.13 million km² below the 1981–2010 average for that month.

Mars

In addition to Earth, the planet Mars also has polar ice caps. They consist primarily of water ice with a few percent dust. Frozen carbon dioxide makes up a small permanent portion of the Planum Australe, or the South Polar Layered Deposits. In both hemispheres a seasonal carbon dioxide frost deposits in the winter and sublimes during the spring. Data collected in 2005 from NASA missions to Mars show that the southern residual ice cap undergoes sublimation inter-annually. The most widely accepted explanation is that fluctuations in the planet's orbit are causing the changes.

Pluto

On April 29, 2015, NASA stated that its New Horizons mission had discovered a feature thought to be a polar ice cap on the dwarf planet Pluto. The probe's flyby of Pluto in July 2015 allowed the Alice ultraviolet imaging spectrometer to confirm that the feature was in fact an ice cap composed of methane and nitrogen ices.

References

- The National Snow and Ice Data Center Glossary
- "NSIDC Arctic Sea Ice News Fall 2007". nsidc.org. Retrieved 27 March 2008.
- "Arctic Sea Ice News & Analysis". National Snow and Ice Data Center. Retrieved 9 May 2010.
- "State of the Cryosphere / Arctic and Antarctic Standardized Anomalies and Trends Jan 1979 – Jul 2009". National Snow and Ice Data Center. Retrieved 24 April 2010.
- Thompson, Elvia. "Recent Warming of Arctic May Affect Worldwide Climate". NASA. Retrieved 2 October 2012.
https://www.infogalactic.com/info/Polar_ice_cap
An update from Programme Manager, Joe Mullin.

The Arctic Oil Spill Response Technology Joint Industry Programme (JIP) is delighted to be exhibiting at the 3P Arctic Conference & Exhibition in Stavanger, where we have launched our first six reports. These reports cover in situ burning (ISB), dispersants, and remote sensing. They include current state-of-the-art technologies for remote sensing above and below the water; operational limits of dispersants and mineral fines in arctic waters; identification of the regulatory requirements and permitting processes in place; available technology and lessons learned from key ISB experiments; and a summary of the regulatory landscape in place to obtain approval to use ISB in arctic/sub-arctic nations.

The key findings include:
- Dispersants can work in the Arctic and will, under certain conditions, be more effective in the presence of ice than in open water;
- In addition to increasing effectiveness, the presence of ice can increase the time window within which dispersants can be used effectively;
- Confirmation that technology exists to conduct controlled ISB of oil spilled in a wide variety of ice conditions and that ISB is one of the response techniques with the highest potential for oil spill removal in arctic conditions;
- There is a considerable body of scientific and engineering knowledge on ISB to ensure safe and effective response in open water, broken pack ice and complete ice cover, gleaned from over 40 years of research, including large-scale field experiments;
- Most of the perceived risks associated with burning oil are easily mitigated by following approved procedures, using trained personnel, and maintaining appropriate separation distances; and
- The current state of technology in remote sensing confirms that the industry has a range of airborne and surface imaging systems, utilised from helicopters, fixed-wing aircraft, vessels and drilling platforms, that have been developed and tested for the “oil on open water scenario” and that can be used for ice conditions.

This marks the first significant step towards the world’s biggest literature review into arctic oil spill response technologies. Through this initial research we have reaffirmed our confidence in the techniques that the industry and its partners have developed over decades of research and development to respond to oil spills in ice. By 2015, the JIP is looking to launch an additional 18 reports covering all 6 areas of research.
http://www.arcticresponsetechnology.org/mediaroom/a-busy-week-for-the-jip/
A new multi-million pound visitor centre is set to open at Stonehenge at the end of 2013. This will include a life-size reconstruction of a Neolithic house, which is based on the knowledge of settlement activity uncovered during the Stonehenge Riverside Project.

BU’s Head of Archaeology Dr Kate Welham, a co-director of the project, says: “When we excavated the houses we only saw the floors. It’s amazing to see them reconstructed using the exact measurements taken by BU students and researchers.”

Visitors will be able to go inside the houses and see where people slept and ate 4,500 years ago. “I was struck by how big the houses are,” Dr Welham continued. “They are roomy and comfortable and you could easily fit ten people comfortably around the hearths in the centre. Visitors will really be able to experience what it was like to be part of the Neolithic community that built Stonehenge all those years ago.”

The new Stonehenge Visitor Centre will also display artefacts uncovered by BU archaeologists during the Riverside Project and from Professor Tim Darvill’s investigations inside the stone circles themselves. These include flint arrowheads, pieces of ceramic cooking pots, and other day to day essentials from Neolithic life.

BU’s Professor Tim Darvill also features as one of the “Talking Heads” in a series of video interviews with archaeologists past and present who have worked at Stonehenge. He explains his research and his theories about what the monument was used for. All the displays are presented in relation to an ancient ‘timeline’ for the Wessex region that is largely based on a new chronology for sites in the area established by the research carried out by Dr Welham and Professor Darvill.
https://research.bournemouth.ac.uk/2013/11/stonehenge-visitor-centre/
The Prehistoric landmark of Stonehenge is distinctive and famous enough to have become frequently referenced in popular culture. The landmark has become a symbol of British culture and history, owing to its distinctiveness and its long history of being portrayed in art, literature and advertising campaigns, as well as modern media formats, such as television, film and computer games. This is in part because the arrangement of standing stones topped with lintels is unique, not just in the British Isles, but in the world.

Art and mythology

The interest in 'ancient' Britain can be traced back to the sixteenth and seventeenth century, following the pioneering work of the likes of William Camden, John Aubrey and John Evelyn. The rediscovery of Britain's past was also tied up in the nation's emerging sense of importance as an international power. Antiquarians and archaeologists, notably William Stukeley, were conducting excavations of megalithic sites, including Stonehenge and the nearby Avebury. Their findings caused considerable debate on the history and meaning of such sites, and the earliest depictions reflected a search for a mystical explanation. Earlier explanations, including the view proposed by Inigo Jones in 1630 that Stonehenge was built by the Romans, such was its sophistication and beauty, were disproved in the late seventeenth century. It was proven that Stonehenge was the work of indigenous Neolithic peoples.

From this period onwards artists made images of barrows, standing stones and excavated objects which increasingly drew on highly imaginative ideas about the prehistoric people that created them. These helped to create the image of Britain that a broadening audience was becoming aware of through illustrated books, prints and maps. Poets and other writers deepened the impact of this visual material by imagining ancient pasts and mythologising the distant roots of the growing British Empire. Debates about British ancestry and national identity saw a growing conviction that the British were an ancient people, and that the newly named 'United Kingdom', of which Scotland had become a part in 1707, might find greater harmony through searching for a common past. For the English, this past was to be found in the West, starting around Stonehenge and stretching into the ancient Celtic regions of Wales and Cornwall.

During the early nineteenth century it was artists such as John Constable and J.M.W. Turner who helped to make the megalithic sites a part of the popular imagination and understanding of Britain's past. The philosopher Edmund Burke proposed the idea of the 'sublime', a sense evoked by 'feelings of danger and terror, obscurity and power, in art as well as life'. This was already a feature of artistic and literary works of the period, and provided the theoretical basis for a growing appreciation of desolate landscapes and ancient ruins. For these reasons Stonehenge became of particular interest for artists. Burke himself wrote that Stonehenge, "neither for disposition nor ornament, has anything admirable; but those huge rude masses of stone, set end on end, and piled high on each other, turn the mind on the immense force necessary for such a work."

The barren nature of the Wiltshire landscape meant that Salisbury Plain became particularly notable for the apparently miraculous powers that created Stonehenge. William Wordsworth wrote:
"Pile of Stone-henge! So proud to hint yet keep
Thy secrets, thou lov'st to stand and hear
The plain resounding to the whirlwind's sweep
Inmate of lonesome Nature's endless year."

Turner's and Constable's paintings deviated from the actual state of the stones. Turner in particular added stones that were not there in reality, and those that were present were incorrect in their dimensions. However, the paintings were arranged for a romantic effect popular at the time.

Throughout the nineteenth century, a new motif emerged in the depictions of Stonehenge, that of an anti-pagan approach, with paintings by the likes of William Overend Geller, such as his painting The Druid's Sacrifice in 1832. In the novel "Tess of the d'Urbervilles" by Thomas Hardy, the main character, Tess, is captured by the police at Stonehenge, the 'heathen' nature of the setting being used to highlight the character's temperament.

The image of Stonehenge became adapted in the twentieth century by those wishing to advertise using a monument viewed as a symbol of Britain. The Royal Navy exploited this sense of identification by naming an S class destroyer and one of their S class submarines HMS Stonehenge. The Shell Oil Company commissioned the artist Edward McKnight Kauffer to paint a series of posters during the interwar period, to be used to encourage tourism by car owners. Stonehenge was one of those depicted.

In contemporary popular culture

By now a powerful and instantly recognisable symbol, the monument has been featured in a wide number of ways. The Beatles are seen performing on Salisbury Plain with Stonehenge visible in the background in their 1965 film Help!. The site has also been used for concerts, starting with the Stonehenge Free Festival in 1972. Perhaps in recognition of the site's link to popular music, the mockumentary film This Is Spinal Tap featured the titular fictional rock band performing a song named "Stonehenge" on stage. In one of the many embarrassing events on their comeback tour, confusion about abbreviating inches and feet results in a Stonehenge replica so small that it is in danger of being trod upon by the Little People hired to dance around it.

The monument continues to be featured in film, television and radio, either to question the origin or history of Stonehenge, or to play upon its position as an instantly recognisable structure and symbol of Britain. In books by Kurt Vonnegut and S. M. Stirling, amongst others, alternative theories are suggested and explored as part of the larger plot. The monument has also become popular in computer games, where alternative uses are often posited for Stonehenge, or its iconic nature is explored.
https://en.academic.ru/dic.nsf/enwiki/5748121
German ‘Stonehenge’ site reveals 10 dismembered bodies of women, children

Sites such as Stonehenge are older than memory itself. Stories of Druids and magical rites arose long after the standing stones' true purpose was forgotten. Little can be inferred about the role such monuments played in Neolithic cultures. We know many were aligned to the Sun and stars. There are traces of ceremonial pathways. Now, human sacrifice has entered the picture.

New excavations near the German town of Pömmelte are raising some disturbing possibilities. It's the site of an ancient Stone Age sanctuary, contemporary with Stonehenge itself. It's a complex of concentric-ring mounds, ditches and deeply sunken wooden posts. It's just one of several sites in Central Europe, Portugal and Spain that indicate circular henge monuments are not uniquely British.

Ghastly ceremonies

Archaeologists say the German site appears to have been a gathering place for community events and rituals. A series of pits has been uncovered containing evidence of these activities. This includes fragments of ceramic pots and cups, stone axes and animal bones. They reveal the Pömmelte henge was active for some 300 years, starting from 2,300 B.C.

But among the Pömmelte scraps were some disturbing finds — the dismembered bodies of 10 women and children. Lead researcher André Spatzier says it appears the victims had been pushed into the pit. At least one of the teenagers had their hands bound together.

While the archaeologists say they cannot yet be certain of the purpose of their deaths, the idea of ritual sacrifice was a strong possibility. No adult males were found in the pits, and the items they were buried with showed signs of having been ritualistically smashed. A handful of male bodies were unearthed away from the pits, on the eastern side of the henge. These men, aged 17 to 30, had been interred with great care, in their own graves. Their bodies were undamaged. Like Stonehenge, the site itself was not permanently inhabited.

"The henge monuments of the British Isles are generally considered to represent a uniquely British phenomenon, unrelated to Continental Europe; this position should now be reconsidered," the researchers write in the journal Antiquity. "The uniqueness of Stonehenge lies, strictly speaking, with its monumental megalithic architecture."

German henge

The Pömmelte henge is a 115 meter-wide series of concentric mounds. Many have evidence of "wood-henge" style post holes sunk within them. It was discovered from the air in 1991, but excavations only began recently. Excavations show it was built during the transition between the Neolithic Stone Age and the early Bronze Age. This is the same era as Stonehenge.

Stonehenge is positioned in such a way as to mark the arrival of the summer solstice. The Pömmelte henge, however, has its four entrances aligned with dates halfway between the equinoxes and solstices. Spatzier says these were key agricultural planting and harvesting dates.

Archaeologists believe the Pömmelte henge ended its days about 2,050 B.C. in what was likely to have been a ceremonial decommissioning. The wooden post holes had been filled with artefacts. One of the concentric pits was filled with ash — likely from the burnt posts.
“It looks like at the end of the main occupation, around 2,050 B.C., they extracted the posts, put offerings into the post holes and probably burned all the wood and back-shoved it into the ditch,” Professor Spatzier told Live Science. “So, they closed all the features. It was still visible above ground, but only as a shallow depression.”

This article was originally published by news.com.au.
https://www.foxnews.com/science/german-stonehenge-site-reveals-10-dismembered-bodies-of-women-children